Numerical Linear Algebra: Question sheet 4

1. Consider the SOR algorithm, from lecture 5, applied to A with the property that

   det(cI + L̃ + Ũ) = det(cI + γL̃ + γ^{−1}Ũ)    (1)

for any c, γ > 0. This is a class of matrices typical in finite difference approximations of
partial differential equations. Show that, for matrices satisfying (1), if µ is an eigenvalue
of the Jacobi iteration matrix T_J and λ is an eigenvalue of the SOR iteration matrix T_w, then

   µ = (w + λ − 1) / (wλ^{1/2}).    (2)

Contrast the implications for the convergence rates of the Jacobi, Gauss-Seidel (w = 1),
and SOR algorithms.
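Relation (2) can be checked numerically before proving it. A minimal sketch, assuming as a test matrix the standard 1-D Laplacian tridiag(−1, 2, −1) (a hypothetical choice, not specified by the question, but a classic example satisfying property (1)):

```python
import numpy as np

# Hypothetical numerical check of relation (2) on the 1-D Laplacian,
# a standard matrix from finite difference discretizations.
n, w = 20, 1.5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

TJ = np.linalg.solve(D, -(L + U))                 # Jacobi iteration matrix
Tw = np.linalg.solve(D + w * L, (1 - w) * D - w * U)  # SOR iteration matrix

mus = np.linalg.eigvals(TJ)
lams = np.linalg.eigvals(Tw).astype(complex)

# Map each SOR eigenvalue back through (2) and check that the result
# is (up to sign, since mu appears squared in the derivation) an
# eigenvalue of the Jacobi matrix.
for lam in lams:
    if abs(lam) < 1e-10:
        continue  # relation (2) degenerates at lambda = 0
    mu = (lam + w - 1) / (w * np.sqrt(lam))
    dist = min(np.abs(mus - mu).min(), np.abs(mus + mu).min())
    assert dist < 1e-6
print("relation (2) verified numerically")
```

Varying w in this sandbox also illustrates the convergence-rate comparison the question asks for.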
2. Consider an algorithm of the form x^{(k+1)} = x^{(k)} + α_k A(b − Ax^{(k)}) for Hermitian matrices,
A^* = A. How should α_k be determined to minimize ‖e^{(k+1)}‖_2? Provide a bound on the
error of the form

   ‖e^{(k+1)}‖_2 ≤ δ‖e^{(k)}‖_2,

where δ is a function of the condition number of A. Contrast the algorithm and the bound
with the steepest descent algorithm for the normal system of equations.
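The iteration in question 2 can be explored numerically before deriving α_k. A sketch with α_k supplied as a pluggable function; the residual-normalized choice below is one natural candidate to test against your derivation, and the test matrix (spectrum spaced in [1, 4]) is a hypothetical stand-in:

```python
import numpy as np

# Sandbox for the question-2 iteration: alpha_k is a pluggable choice.
def iterate(A, b, alpha_fn, tol=1e-8, maxit=500):
    """Run x <- x + alpha_k * A * (b - A x) until the relative residual is small."""
    x = np.zeros_like(b)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        x = x + alpha_fn(A, r) * (A @ r)
    return x, maxit

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1, 4, n)) @ Q.T   # Hermitian (real symmetric) test matrix
b = rng.standard_normal(n)

# One candidate step size; check whether it is the minimizer the question asks for.
alpha = lambda A, r: np.dot(r, r) / np.dot(A @ r, A @ r)
x, iters = iterate(A, b, alpha)
print(iters, np.linalg.norm(b - A @ x))
```

Comparing the observed reduction per step with your bound δ(κ(A)) is a useful sanity check.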
3. What are the flops per iteration, for A ∈ R^{m×m}, for Orthomin(1), steepest descent (SD),
and conjugate gradient (CG)? Count the flops only to leading order. Based upon the flops per
iteration and the convergence rate bounds for SD and CG, is there any condition number
of A such that SD has a greater rate of reduction of ‖e^{(k)}‖_A per flop than CG?
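An experimental companion to question 3: run SD and CG on the same SPD system and record ‖e^{(k)}‖_A per iteration; combined with your leading-order flop counts, these histories let you compare reduction per flop empirically. The test matrix below is a hypothetical choice with condition number 25:

```python
import numpy as np

def sd(A, b, x_true, maxit):
    """Steepest descent; returns the history of ||e^(k)||_A."""
    x = np.zeros_like(b)
    hist = []
    for _ in range(maxit):
        r = b - A @ x
        x = x + (r @ r) / (r @ (A @ r)) * r
        e = x_true - x
        hist.append(np.sqrt(e @ (A @ e)))
    return hist

def cg(A, b, x_true, maxit):
    """Conjugate gradient; returns the history of ||e^(k)||_A."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    hist = []
    for _ in range(maxit):
        Ap = A @ p
        a = (r @ r) / (p @ Ap)
        x = x + a * p
        r_new = r - a * Ap
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
        e = x_true - x
        hist.append(np.sqrt(e @ (A @ e)))
    return hist

rng = np.random.default_rng(1)
m = 80
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
A = Q @ np.diag(np.linspace(1, 25, m)) @ Q.T   # SPD, condition number 25
x_true = rng.standard_normal(m)
b = A @ x_true
hist_sd = sd(A, b, x_true, 30)
hist_cg = cg(A, b, x_true, 30)
print(hist_sd[-1], hist_cg[-1])
```

Both methods cost one matvec (2m² flops to leading order) per iteration here, so the per-iteration histories translate directly into per-flop comparisons.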
4. Consider the Arnoldi algorithm, lecture 8, for a symmetric matrix with A∗ = A. Show
that the algorithm can be simplified to the Lanczos algorithm (see below) and state what
the matrix H from Arnoldi simplifies to in the Lanczos notation.
Lanczos
Input: A ∈ R^{n×n} with A^* = A, b ∈ R^n, and k
Initialization: set q_1 = b/‖b‖_2, β_0 = 0, and q_0 = 0
for j = 1 to k − 1
    ν = Aq_j
    α_j = q_j^* ν
    ν = ν − α_j q_j − β_{j−1} q_{j−1}
    β_j = ‖ν‖_2
    q_{j+1} = ν/β_j
end
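The pseudocode above can be transcribed directly into NumPy. A sketch for experimentation (the random symmetric test matrix is a hypothetical choice); inspecting Q^T A Q here is a useful check on your answer about what H simplifies to:

```python
import numpy as np

# Direct NumPy transcription of the Lanczos pseudocode above.
def lanczos(A, b, k):
    n = b.size
    Q = np.zeros((n, k))
    alpha = np.zeros(k - 1)
    beta = np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    q_prev = np.zeros(n)   # q_0 = 0
    b_prev = 0.0           # beta_0 = 0
    for j in range(k - 1):
        v = A @ Q[:, j]
        alpha[j] = Q[:, j] @ v
        v = v - alpha[j] * Q[:, j] - b_prev * q_prev
        beta[j] = np.linalg.norm(v)
        q_prev, b_prev = Q[:, j], beta[j]
        Q[:, j + 1] = v / beta[j]
    return Q, alpha, beta

rng = np.random.default_rng(2)
n, k = 30, 8
M = rng.standard_normal((n, n))
A = M + M.T                       # symmetric test matrix
b = rng.standard_normal(n)
Q, alpha, beta = lanczos(A, b, k)
T = Q.T @ A @ Q                   # compare with the H produced by Arnoldi
```

Note that in exact arithmetic the columns of Q are orthonormal; in floating point this degrades for large k, which is worth observing in the experiment.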
5. Implement the following algorithms: Conjugate Gradient and Orthomin(j) for general
j. Apply them, with initial estimate x^{(0)} = 0 and relative convergence tolerance Tol =
10^{−8}, to an m × m matrix constructed as: [U,~]=qr(randn(m)); d=linspace(1,25,m);
A=U*diag(d)*U';. Contrast the convergence rates and times of these methods, along with
the time taken by the Matlab LU factorization algorithm. Vary m and determine how
large m must be so that each iterative method takes less time than LU factorization.
Vary j and determine the choice of j that typically results in: a) the least run time
and b) the fewest iterations.
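The question specifies Matlab; if you prototype in Python instead, a NumPy analogue of the test-matrix construction (same spectrum, hence same conditioning) is sketched below:

```python
import numpy as np

# NumPy analogue of the Matlab construction in question 5: an m-by-m
# symmetric matrix with eigenvalues spaced evenly in [1, 25].
m = 100
U, _ = np.linalg.qr(np.random.randn(m, m))  # random orthogonal U
d = np.linspace(1, 25, m)
A = U @ np.diag(d) @ U.T

# A is symmetric positive definite with condition number 25, so the
# standard CG bound predicts an error reduction factor of
# (sqrt(25) - 1)/(sqrt(25) + 1) = 2/3 per iteration in the A-norm.
```

Timing comparisons against LU can then use `numpy.linalg.solve` in place of Matlab's LU-based backslash.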