Steepest Descent, CG, and
Iterative Regularization of Ill-Posed Problems
J. Nagy and K. Palmer
The state-of-the-art iterative method for solving large linear
systems is the conjugate gradient (CG) algorithm.
Theoretical convergence analysis suggests that
CG converges more rapidly than steepest descent. This paper
argues that steepest descent may be preferable to CG
when solving linear systems arising from the discretization
of ill-posed problems. Specifically, it is shown that, for
ill-posed problems, steepest descent exhibits more stable convergence
behavior than CG, which may be explained by the fact that
the so-called ``filter factors'' for steepest descent behave
much less erratically than those for CG. Moreover, it is shown
that, with proper preconditioning, the convergence rate of
steepest descent is competitive with that of CG.
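
For context, a brief sketch of the filter-factor notion referred to above, using the standard notation of the regularization literature (the symbols $\phi_i^{(k)}$, $u_i$, $\sigma_i$, $v_i$ are this standard notation, not necessarily that of the paper): assuming the coefficient matrix $A$ has singular value decomposition $A = \sum_i \sigma_i u_i v_i^T$, the $k$th iterate of an iterative method applied to $Ax = b$ can be written as
\[
  x_k \;=\; \sum_{i} \phi_i^{(k)} \, \frac{u_i^T b}{\sigma_i} \, v_i ,
\]
where the scalars $\phi_i^{(k)}$ are the filter factors at iteration $k$. Stable regularization corresponds to $\phi_i^{(k)} \approx 1$ for components associated with large singular values and $\phi_i^{(k)} \approx 0$ for the small, noise-dominated ones; erratic filter factors therefore indicate unstable convergence behavior.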