Answer on Question #60840 – Math – Algorithms | Quantitative Methods
Question
b) Set up the Gauss-Jacobi iteration scheme in matrix form for the linear system of equations
```
4x1 -  x2       = 3
-x1 + 4x2 -  x3 = 2
    -  x2 + 4x3 = 3
```
Show that the iteration scheme is convergent. Hence find the rate of convergence of this method.
Solution
We have the system of equations $Ax = b$:
$$\begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \\ 3 \end{pmatrix}$$
By the Gauss-Jacobi method we split $A$ as
$$A = D - L - U,$$
where $D$ is the diagonal part of $A$, and $-L$ and $-U$ are its strictly lower and strictly upper triangular parts.
We have
$$D = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}, \quad L = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad U = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$
Then
$$x^{(k+1)} = D^{-1}(L + U)\,x^{(k)} + D^{-1}b$$
At the first iteration, with $x^{(0)} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$:
$$x^{(1)} = D^{-1}(L + U)\,x^{(0)} + D^{-1}b = D^{-1}b = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}^{-1} \begin{pmatrix} 3 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 1/4 & 0 & 0 \\ 0 & 1/4 & 0 \\ 0 & 0 & 1/4 \end{pmatrix} \begin{pmatrix} 3 \\ 2 \\ 3 \end{pmatrix}$$
Then
$$x^{(1)} = \begin{pmatrix} 0.75 \\ 0.5 \\ 0.75 \end{pmatrix}$$
At the second iteration:
$$x^{(2)} = D^{-1}(L + U)\,x^{(1)} + D^{-1}b = \begin{pmatrix} 0 & 0.25 & 0 \\ 0.25 & 0 & 0.25 \\ 0 & 0.25 & 0 \end{pmatrix} \begin{pmatrix} 0.75 \\ 0.5 \\ 0.75 \end{pmatrix} + \begin{pmatrix} 0.75 \\ 0.5 \\ 0.75 \end{pmatrix} = \begin{pmatrix} 0.125 \\ 0.375 \\ 0.125 \end{pmatrix} + \begin{pmatrix} 0.75 \\ 0.5 \\ 0.75 \end{pmatrix} = \begin{pmatrix} 0.875 \\ 0.875 \\ 0.875 \end{pmatrix}$$
At the third iteration:
$$x^{(3)} = D^{-1}(L + U)\,x^{(2)} + D^{-1}b = \begin{pmatrix} 0 & 0.25 & 0 \\ 0.25 & 0 & 0.25 \\ 0 & 0.25 & 0 \end{pmatrix} \begin{pmatrix} 0.875 \\ 0.875 \\ 0.875 \end{pmatrix} + \begin{pmatrix} 0.75 \\ 0.5 \\ 0.75 \end{pmatrix} = \begin{pmatrix} 0.21875 \\ 0.4375 \\ 0.21875 \end{pmatrix} + \begin{pmatrix} 0.75 \\ 0.5 \\ 0.75 \end{pmatrix} = \begin{pmatrix} 0.96875 \\ 0.9375 \\ 0.96875 \end{pmatrix}$$
As we see, the iterates converge to
$$x = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},$$
which indeed satisfies all three equations of the system exactly.
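The iteration is easy to check numerically. Below is a minimal sketch (assuming numpy is available; variable names are illustrative) that builds $T = D^{-1}(L+U) = I - D^{-1}A$ and runs the Jacobi update:

```python
import numpy as np

# System from the problem: A x = b
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([3.0, 2.0, 3.0])

# Jacobi splitting A = D - L - U gives T = D^{-1}(L + U) = I - D^{-1}A
D_inv = np.diag(1.0 / np.diag(A))
T = np.eye(3) - D_inv @ A
c = D_inv @ b

x = np.zeros(3)            # x^(0) = 0
history = []
for k in range(25):        # iterate x^(k+1) = T x^(k) + c
    x = T @ x + c
    history.append(x.copy())

# history[0] = (0.75, 0.5, 0.75); history[2] = (0.96875, 0.9375, 0.96875)
# x is now close to the exact solution (1, 1, 1)
```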
The scheme is convergent: $A$ is strictly diagonally dominant in each row ($4 > |-1| + 0$, $4 > |-1| + |-1|$, $4 > 0 + |-1|$), which is a sufficient condition for convergence of the Jacobi method. Moreover, the matrix $A = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}$ is positive-definite by Sylvester's criterion.
Indeed,
$$\Delta_1 = 4 > 0, \qquad \Delta_2 = \begin{vmatrix} 4 & -1 \\ -1 & 4 \end{vmatrix} = 4 \cdot 4 - (-1)(-1) = 16 - 1 = 15 > 0,$$
$$\Delta_3 = \begin{vmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4 \end{vmatrix} = 4 \begin{vmatrix} 4 & -1 \\ -1 & 4 \end{vmatrix} + \begin{vmatrix} -1 & -1 \\ 0 & 4 \end{vmatrix} = 4 \cdot 15 - 4 = 56 > 0.$$
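These leading principal minors can be cross-checked numerically; a short sketch using numpy (not part of the original hand computation):

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

# Leading principal minors for Sylvester's criterion
minors = [np.linalg.det(A[:k, :k]) for k in (1, 2, 3)]
# Expected: approximately 4, 15, 56 -- all positive, so A is positive-definite
```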
It follows from
$$x^{(k+1)} = D^{-1}(L + U)\,x^{(k)} + D^{-1}b$$
that
$$x^{(k+1)} = T x^{(k)} + c, \quad k \geq 0,$$
where
$$T = D^{-1}(L + U) = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}^{-1} \left( \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \right) = \begin{pmatrix} 1/4 & 0 & 0 \\ 0 & 1/4 & 0 \\ 0 & 0 & 1/4 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1/4 & 0 \\ 1/4 & 0 & 1/4 \\ 0 & 1/4 & 0 \end{pmatrix},$$
$$c = D^{-1}b = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}^{-1} \begin{pmatrix} 3 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 1/4 & 0 & 0 \\ 0 & 1/4 & 0 \\ 0 & 0 & 1/4 \end{pmatrix} \begin{pmatrix} 3 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 0.75 \\ 0.5 \\ 0.75 \end{pmatrix}.$$
The eigenvalues of the matrix $T$ are $\lambda_1 = -\frac{1}{2\sqrt{2}}$, $\lambda_2 = \frac{1}{2\sqrt{2}}$, $\lambda_3 = 0$, hence the spectral radius is
$$\rho(T) = \max \{ |\lambda_1|, |\lambda_2|, |\lambda_3| \} = \frac{1}{2\sqrt{2}} \approx 0.354 < 1.$$
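The spectral radius can be confirmed numerically; a sketch, again assuming numpy:

```python
import numpy as np

# Jacobi iteration matrix T = D^{-1}(L + U) for this system
T = np.array([[0.0 , 0.25, 0.0 ],
              [0.25, 0.0 , 0.25],
              [0.0 , 0.25, 0.0 ]])

# Spectral radius = largest eigenvalue magnitude
rho = max(abs(np.linalg.eigvals(T)))
# rho = 1/(2*sqrt(2)) ≈ 0.3536 < 1, so the iteration converges
```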
Alternatively, choose a matrix norm and calculate; for the Frobenius norm (the only nonzero entries of $T$ are four entries equal to $1/4$),
$$\|T\|_F = \sqrt{ \sum_{i=1}^{3} \sum_{j=1}^{3} |t_{ij}|^2 } = \sqrt{ 4 \cdot \left( \frac{1}{4} \right)^2 } = 2 \cdot \frac{1}{4} = \frac{1}{2} < 1.$$
The following error bounds hold:
$$\|x - x^{(k)}\| \leq \|T\|^k \, \|x^{(0)} - x\|, \qquad \|x - x^{(k)}\| \leq \frac{\|T\|^k}{1 - \|T\|} \, \|x^{(1)} - x^{(0)}\|.$$
To guarantee $\|T\|^k < \varepsilon$ we need $k > \frac{\log \varepsilon}{\log \|T\|}$ (the inequality flips because $\log \|T\| < 0$).
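For example, with $\|T\| = 1/2$ and an illustrative target accuracy $\varepsilon = 10^{-6}$ (the tolerance is our choice, not part of the problem), the required iteration count is:

```python
import math

eps = 1e-6        # illustrative target accuracy
norm_T = 0.5      # Frobenius norm of T computed above
# Smallest k with norm_T**k < eps; log(norm_T) < 0 flips the inequality
k_min = math.ceil(math.log(eps) / math.log(norm_T))
# k_min = 20, and indeed 0.5**20 ≈ 9.5e-7 < 1e-6
```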
Asymptotically the error vector $e_k = x - x^{(k)}$ (with $e_0 = x - x^{(0)}$) behaves at worst like $(\rho(T))^k$, so the asymptotic rate of convergence of the method is
$$R = -\ln \rho(T) = \ln\left(2\sqrt{2}\right) = \tfrac{3}{2} \ln 2 \approx 1.04.$$
www.AssignmentExpert.com