
MATH2033 (2023–2024)

Introduction to Scientific Computation

Linear Systems: Iterative Techniques I

PROBLEM SHEET SOLUTIONS

Solution to Problem 1
(a) The approximations obtained after $k$ iterations of the Jacobi method are given by
$$x_1^{(k)} = \frac{1}{3}\left(1 + x_2^{(k-1)} - x_3^{(k-1)}\right),$$
$$x_2^{(k)} = \frac{1}{6}\left(-3x_1^{(k-1)} - 2x_3^{(k-1)}\right)$$
and
$$x_3^{(k)} = \frac{1}{7}\left(4 - 3x_1^{(k-1)} - 3x_2^{(k-1)}\right).$$
Starting with $x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = 0$, we can compute that
$$x_1^{(1)} = \frac{1}{3} = 0.3333333333\ldots,$$
$$x_2^{(1)} = 0$$
and
$$x_3^{(1)} = \frac{4}{7} = 0.5714285714\ldots.$$
So
$$x^{(1)} = \begin{pmatrix} 0.3333333333\ldots \\ 0 \\ 0.5714285714\ldots \end{pmatrix}.$$
We can then compute that
$$x_1^{(2)} = \frac{1}{3}\left(1 - 0.5714285714\ldots\right) = 0.1428571428\ldots,$$
$$x_2^{(2)} = \frac{1}{6}\left(-1 - 1.1428571428\ldots\right) = -0.3571428571\ldots$$
and
$$x_3^{(2)} = \frac{1}{7}\left(4 - 1\right) = 0.4285714285\ldots.$$
So
$$x^{(2)} = \begin{pmatrix} 0.1428571428\ldots \\ -0.3571428571\ldots \\ 0.4285714285\ldots \end{pmatrix}.$$
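These values can be checked numerically. The following is a minimal NumPy sketch of the Jacobi iteration for this system, with the coefficient matrix and right-hand side read off from the update formulas above (namely $3x_1 - x_2 + x_3 = 1$, $3x_1 + 6x_2 + 2x_3 = 0$ and $3x_1 + 3x_2 + 7x_3 = 4$); the function and variable names are illustrative, not part of the problem sheet.

```python
import numpy as np

# System recovered from the update formulas above:
# 3x1 - x2 + x3 = 1,  3x1 + 6x2 + 2x3 = 0,  3x1 + 3x2 + 7x3 = 4.
A = np.array([[3.0, -1.0, 1.0],
              [3.0,  6.0, 2.0],
              [3.0,  3.0, 7.0]])
b = np.array([1.0, 0.0, 4.0])


def jacobi(A, b, x0, num_iters):
    """Perform num_iters Jacobi iterations starting from x0."""
    d = np.diag(A)          # diagonal entries of A
    R = A - np.diag(d)      # off-diagonal part of A
    x = x0.copy()
    for _ in range(num_iters):
        # Every component is updated using only the previous iterate.
        x = (b - R @ x) / d
    return x


print(jacobi(A, b, np.zeros(3), 1))  # approx. [ 0.33333333,  0.        ,  0.57142857]
print(jacobi(A, b, np.zeros(3), 2))  # approx. [ 0.14285714, -0.35714286,  0.42857143]
```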

(b) The approximations obtained after $k$ iterations of the Gauss–Seidel method are given by
$$x_1^{(k)} = \frac{1}{3}\left(1 + x_2^{(k-1)} - x_3^{(k-1)}\right),$$
$$x_2^{(k)} = \frac{1}{6}\left(-3x_1^{(k)} - 2x_3^{(k-1)}\right)$$
and
$$x_3^{(k)} = \frac{1}{7}\left(4 - 3x_1^{(k)} - 3x_2^{(k)}\right).$$
Starting with $x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = 0$, we can compute that
$$x_1^{(1)} = \frac{1}{3} = 0.3333333333\ldots,$$
$$x_2^{(1)} = \frac{-1}{6} = -0.1666666666\ldots$$
and
$$x_3^{(1)} = \frac{4 - 1 + 0.5}{7} = 0.5.$$
So
$$x^{(1)} = \begin{pmatrix} 0.3333333333\ldots \\ -0.1666666666\ldots \\ 0.5 \end{pmatrix}.$$
We can then compute that
$$x_1^{(2)} = \frac{1}{3}\left(1 - 0.1666666666\ldots - 0.5\right) = 0.1111111111\ldots,$$
$$x_2^{(2)} = \frac{1}{6}\left(-0.3333333333\ldots - 1\right) = -0.2222222222\ldots$$
and
$$x_3^{(2)} = \frac{1}{7}\left(4 - 0.3333333333\ldots + 0.6666666666\ldots\right) = 0.6190476190\ldots.$$
So
$$x^{(2)} = \begin{pmatrix} 0.1111111111\ldots \\ -0.2222222222\ldots \\ 0.6190476190\ldots \end{pmatrix}.$$
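A Gauss–Seidel sweep differs from a Jacobi iteration only in that each component update immediately uses the components already computed in the current iteration. A minimal NumPy sketch for the same system (again with illustrative names, not code supplied by the sheet):

```python
import numpy as np

A = np.array([[3.0, -1.0, 1.0],
              [3.0,  6.0, 2.0],
              [3.0,  3.0, 7.0]])
b = np.array([1.0, 0.0, 4.0])


def gauss_seidel(A, b, x0, num_iters):
    """Perform num_iters Gauss-Seidel sweeps, updating components in place."""
    n = len(b)
    x = x0.copy()
    for _ in range(num_iters):
        for i in range(n):
            # Components 0..i-1 already hold the current iteration's values,
            # components i+1..n-1 still hold the previous iteration's values.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x


print(gauss_seidel(A, b, np.zeros(3), 2))  # approx. [ 0.11111111, -0.22222222,  0.61904762]
```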

Solution to Problem 2
(a) The approximations obtained after $k$ iterations of the Jacobi method are given by
$$x_1^{(k)} = \frac{1}{10}\left(9 + x_2^{(k-1)}\right),$$
$$x_2^{(k)} = \frac{1}{10}\left(7 + x_1^{(k-1)} + 2x_3^{(k-1)}\right)$$
and
$$x_3^{(k)} = \frac{1}{10}\left(6 + 2x_2^{(k-1)}\right).$$
Starting with $x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = 0$, we can compute that
$$x_1^{(1)} = \frac{9}{10} = 0.9,$$
$$x_2^{(1)} = \frac{7}{10} = 0.7$$
and
$$x_3^{(1)} = \frac{6}{10} = 0.6.$$
So
$$x^{(1)} = \begin{pmatrix} 0.9 \\ 0.7 \\ 0.6 \end{pmatrix}.$$
We can then compute that
$$x_1^{(2)} = \frac{1}{10}\left(9 + 0.7\right) = 0.97,$$
$$x_2^{(2)} = \frac{1}{10}\left(7 + 0.9 + 1.2\right) = 0.91$$
and
$$x_3^{(2)} = \frac{1}{10}\left(6 + 1.4\right) = 0.74.$$
So
$$x^{(2)} = \begin{pmatrix} 0.97 \\ 0.91 \\ 0.74 \end{pmatrix}.$$
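Since the updates are so short here, the arithmetic can be checked by transcribing the formulas directly; the snippet below is such a check (a minimal sketch with illustrative variable names).

```python
# Two Jacobi iterations for Problem 2(a), transcribing the update formulas above.
x1 = x2 = x3 = 0.0
for _ in range(2):
    # All three updates use only the values from the previous iteration.
    x1, x2, x3 = (9 + x2) / 10, (7 + x1 + 2 * x3) / 10, (6 + 2 * x2) / 10
print(x1, x2, x3)  # 0.97 0.91 0.74
```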

(b) The approximations obtained after $k$ iterations of the Gauss–Seidel method are given by
$$x_1^{(k)} = \frac{1}{10}\left(9 + x_2^{(k-1)}\right),$$
$$x_2^{(k)} = \frac{1}{10}\left(7 + x_1^{(k)} + 2x_3^{(k-1)}\right)$$
and
$$x_3^{(k)} = \frac{1}{10}\left(6 + 2x_2^{(k)}\right).$$
Starting with $x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = 0$, we can compute that
$$x_1^{(1)} = \frac{9}{10} = 0.9,$$
$$x_2^{(1)} = \frac{7 + 0.9}{10} = 0.79$$
and
$$x_3^{(1)} = \frac{6 + 1.58}{10} = 0.758.$$

So
$$x^{(1)} = \begin{pmatrix} 0.9 \\ 0.79 \\ 0.758 \end{pmatrix}.$$
We can then compute that
$$x_1^{(2)} = \frac{1}{10}\left(9 + 0.79\right) = 0.979,$$
$$x_2^{(2)} = \frac{1}{10}\left(7 + 0.979 + 1.516\right) = 0.9495$$
and
$$x_3^{(2)} = \frac{1}{10}\left(6 + 1.899\right) = 0.7899.$$
So
$$x^{(2)} = \begin{pmatrix} 0.979 \\ 0.9495 \\ 0.7899 \end{pmatrix}.$$
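The corresponding check for the Gauss–Seidel sweeps differs only in that each line uses the values already updated within the current sweep (again a minimal illustrative sketch).

```python
# Two Gauss-Seidel sweeps for Problem 2(b): each update uses the newest values.
x1 = x2 = x3 = 0.0
for _ in range(2):
    x1 = (9 + x2) / 10
    x2 = (7 + x1 + 2 * x3) / 10
    x3 = (6 + 2 * x2) / 10
print(x1, x2, x3)  # 0.979 0.9495 0.7899
```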

Solution to Problem 3
(a) As $A$ is positive definite, $x^T A x > 0$ for every nonzero $x \in \mathbb{R}^n$. For $i = 1, 2, \ldots, n$, let $x_i \in \mathbb{R}^n$ be the vector with entries
$$x_{ij} = \begin{cases} 1 & \text{if } j = i, \\ 0 & \text{if } j \neq i, \end{cases}$$
for $j = 1, 2, \ldots, n$. Then, for $i = 1, 2, \ldots, n$,
$$x_i^T A x_i = x_i^T \begin{pmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ni} \end{pmatrix} = a_{ii}$$
and $x_i^T A x_i > 0$ as $A$ is positive definite. Therefore, all of the entries on the main diagonal of $A$ are positive.

(b) We can write $A = D - L - U$, where $D$ is a diagonal matrix, $L$ is a strictly lower triangular matrix and $U$ is a strictly upper triangular matrix. Now, the entries on the main diagonal of $D$ are all positive because the entries on the main diagonal of $A$ are all positive. Moreover, since $A$ is symmetric,
$$D - L - U = A = A^T = D^T - L^T - U^T = D - L^T - U^T$$
and so $L + U = L^T + U^T$, from which we can conclude that $U = L^T$ because $L^T$ is a strictly upper triangular matrix and $U^T$ is a strictly lower triangular matrix. Therefore, we can write $A = D - L - L^T$, where $D$ is a diagonal matrix whose main-diagonal entries are all positive and $L$ is a strictly lower triangular matrix.

(c) Since the entries on the main diagonal of $D$ are all positive, the entries on the main diagonal of $D - L$ are all positive. So, since $D - L$ is a lower triangular matrix, $\det(D - L) \neq 0$. Hence, $D - L$ is nonsingular.

(d) Since $A = D - L - L^T$, we have $L^T = D - L - A$ and so
$$\begin{aligned}
M &= (D - L)^{-1} L^T \\
  &= (D - L)^{-1}(D - L - A) \\
  &= (D - L)^{-1}(D - L) - (D - L)^{-1}A \\
  &= I - (D - L)^{-1}A.
\end{aligned}$$
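This identity can also be confirmed numerically. The sketch below checks it on a small symmetric positive definite matrix; the particular matrix is an illustrative choice and is not taken from the problem sheet.

```python
import numpy as np

# Illustrative symmetric positive definite matrix (our choice, not from the sheet).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

D = np.diag(np.diag(A))   # diagonal part of A
L = -np.tril(A, k=-1)     # strictly lower part, signed so that A = D - L - L^T

M_direct   = np.linalg.solve(D - L, L.T)                  # (D - L)^{-1} L^T
M_identity = np.eye(len(A)) - np.linalg.solve(D - L, A)   # I - (D - L)^{-1} A

print(np.allclose(M_direct, M_identity))  # True
```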

References

• Burden, Faires & Burden, Numerical Analysis, 10E

– Section 7.3

