Iterative Methods to Solve Linear Systems of Equations
geetadma
About This Presentation
A discussion of iterative methods for the solution of linear systems of equations.
Slide Content
Inverse of a Matrix by Gauss-Jordan Method An important application of the Gauss-Jordan method is to find the inverse of a non-singular matrix A. We start with the augmented matrix of A with the identity matrix I of the same order. When the Gauss-Jordan procedure is completed, the augmented matrix [A | I] has been reduced to [I | A⁻¹], so the right half of the augmented matrix is the inverse.
Ex: Find the inverse of the matrix
Consider the augmented matrix [A | I]. Perform the elementary row transformations and carry out the eliminations to reduce A to I; in the process, the identity block I is converted to A⁻¹.
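The following is a minimal sketch of this procedure in Python, assuming NumPy is available; the function name gauss_jordan_inverse and the partial-pivoting row swap are our own additions rather than steps taken from the slides.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a non-singular matrix by reducing [A | I] to [I | A^-1]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])              # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting (our addition): bring the largest pivot into place
        # so the elimination stays numerically stable.
        pivot_row = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot_row, col], 0.0):
            raise ValueError("Matrix is singular; no inverse exists.")
        aug[[col, pivot_row]] = aug[[pivot_row, col]]
        aug[col] /= aug[col, col]                # scale the pivot row so the pivot is 1
        for row in range(n):                     # eliminate the pivot column in every other row
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                            # the right half is now A^-1

# Example: the inverse of [[2, 1], [1, 1]] is [[1, -1], [-1, 2]].
print(gauss_jordan_inverse([[2, 1], [1, 1]]))
```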
Iterative methods: Gauss-Jacobi and Gauss-Seidel
Iterative methods Iterative methods are based on the idea of successive approximations.
Gauss-Jacobi Iteration Method
Gauss-Jacobi Iteration Method For the solution of the system of algebraic equations Ax = b, that is, ai1 x1 + ai2 x2 + ... + ain xn = bi, i = 1, 2, ..., n, we assume that the pivots aii ≠ 0 for all i.
Gauss-Jacobi Iteration Method Write the equations as xi = (1/aii) [ bi – Σj≠i aij xj ], i = 1, 2, ..., n. The Jacobi iteration method is defined as xi(k+1) = (1/aii) [ bi – Σj≠i aij xj(k) ], i = 1, 2, ..., n, k = 0, 1, 2, ...
Gauss-Jacobi Iteration Method Since we replace the complete vector x(k) on the right-hand side only at the end of each iteration, this method is also called the method of simultaneous displacements.
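The following is a minimal sketch of the Jacobi iteration in Python, assuming NumPy is available; the function name jacobi, the default zero starting vector, and the tolerance-based stopping test are our own additions, not part of the slides.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=100):
    """Jacobi iteration: every component of x^(k+1) is built from the previous vector x^(k)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            # x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new      # the whole vector is replaced at once: simultaneous displacements
    return x
```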
A sufficient condition A sufficient condition for convergence of the Jacobi method is that the system of equations is diagonally dominant, that is, the coefficient matrix A is diagonally dominant: |aii| ≥ Σj≠i |aij| for every row i. Since the condition is only sufficient, convergence may be obtained even if the system is not diagonally dominant. If the system is not diagonally dominant, we may exchange the equations, if possible, so that the new system is diagonally dominant and convergence is guaranteed. A simple check is sketched below.
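A minimal sketch of such a row-dominance check, assuming NumPy; the function name is_diagonally_dominant is our own.

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| >= sum of |a_ij| over j != i, for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag = A.sum(axis=1) - diag
    return bool(np.all(diag >= off_diag))
```

Reordering the equations so that the largest coefficient of each row lands on the diagonal is often all that is needed to make this test pass.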
Initial approximations to start the iteration If the system is diagonally dominant, then the iteration converges for any initial solution vector. If no suitable approximation is available, we can choose x(0) = 0, that is, xi = 0 for all i. The first iteration then gives xi(1) = bi / aii for all i.
Ex: Solve the system of equations using the Jacobi iteration method. Use the initial approximations (i) xi = 0, i = 1, 2, 3, and (ii) x1 = 0.5, x2 = –0.5, x3 = –0.5. Perform five iterations in each case.
Sol: The Jacobi method gives the iterations as defined above. We have the following results.
Sol cont..
Part ii)
Disadvantage of the Gauss-Jacobi method At any iteration step, the value of the first variable x1 is obtained using the values of the previous iteration. The value of the second variable x2 is also obtained using the values of the previous iteration, even though the updated value of x1 is already available. In general, at every stage of the iteration, the values of the previous iteration are used even though updated values of the earlier variables are available. If we use the updated values of x1, x2, ..., xi–1 in computing the value of the variable xi, we obtain a new method called the Gauss-Seidel iteration method.
Gauss-Seidel Iteration Method
Gauss-Seidel iteration method The Gauss-Seidel iteration method is defined as xi(k+1) = (1/aii) [ bi – Σj<i aij xj(k+1) – Σj>i aij xj(k) ], i = 1, 2, ..., n, k = 0, 1, 2, ...; that is, the most recently updated values are used on the right-hand side. A sketch of the iteration follows below.
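The following is a minimal sketch of this iteration in Python, assuming NumPy is available; the function name gauss_seidel and the stopping test are our own additions. The only change from the Jacobi sketch is that updated components are used as soon as they become available.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=100):
    """Gauss-Seidel iteration: updated components are used as soon as they are computed."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x_i^(k+1) = (b_i - sum_{j<i} a_ij x_j^(k+1) - sum_{j>i} a_ij x_j^(k)) / a_ii
            s1 = sum(A[i, j] * x[j] for j in range(i))              # already-updated values
            s2 = sum(A[i, j] * x_old[j] for j in range(i + 1, n))   # previous-iteration values
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x
```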
Sufficient condition for convergence A sufficient condition for convergence of the Gauss-Seidel method is that the system of equations is diagonally dominant, that is, the coefficient matrix A is diagonally dominant. Since the condition is only sufficient, convergence may be obtained even if the system is not diagonally dominant. If both the Gauss-Jacobi and Gauss-Seidel methods converge, then the Gauss-Seidel method converges at least two times faster than the Gauss-Jacobi method.
Ex: Find the solution of the system of equations
45 x1 + 2 x2 + 3 x3 = 58
–3 x1 + 22 x2 + 2 x3 = 47
5 x1 + x2 + 20 x3 = 67
correct to three decimal places, using the Gauss-Seidel iteration method.
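As a rough check on this example, the following self-contained Gauss-Seidel sweep, assuming NumPy, solves the given system starting from x = 0; the sweep cap and tolerance are our own choices. The exact solution of this system is x1 = 1, x2 = 2, x3 = 3, which the iteration reproduces to three decimal places.

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system.
A = np.array([[45.0,  2.0,  3.0],
              [-3.0, 22.0,  2.0],
              [ 5.0,  1.0, 20.0]])
b = np.array([58.0, 47.0, 67.0])

x = np.zeros(3)                              # start from x^(0) = 0
for k in range(25):                          # a few Gauss-Seidel sweeps suffice here
    x_old = x.copy()
    for i in range(3):
        s = A[i] @ x - A[i, i] * x[i]        # uses updated values where already available
        x[i] = (b[i] - s) / A[i, i]
    if np.max(np.abs(x - x_old)) < 0.5e-3:   # stop when successive iterates agree to ~3 decimals
        break

print(np.round(x, 3))                        # expected: [1. 2. 3.]
```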