Systems of Equations: How to Solve for Two Unknown Variables

KUMYANIK · 33 slides · Oct 04, 2025

About This Presentation

Systems of linear equations


Slide Content

1
Chapter: 3c
System of Linear Equations

Dr. Asaf Varol

2
Pivoting
Gaussian elimination has some disadvantages. Since each result depends on the previous step, for large systems the round-off (or chop-off) errors introduced at each step lead to a loss of significant figures and hence to inaccurate results. The error committed in any one step propagates to the final step and is amplified. This is especially true for ill-conditioned systems. And, of course, if any diagonal element is zero, the method will not work unless the system is rearranged to avoid zero elements on the diagonal. The practice of interchanging rows so that the diagonal elements are the dominant elements is called partial pivoting. The goal is to place the largest possible coefficient on the diagonal by reordering the rows. It is also possible to change the order of the variables, i.e. instead of letting the unknown vector be

{X}T = (x, y, z) we may let it be {X}T = (y, z, x)

When this is done in addition to partial pivoting, the practice is called full pivoting. In this case only the meaning of each variable changes; the system of equations remains the same.

3
Example E3.4.2
Consider the following set of equations:

0.0003 x + 3.0000 y = 2.0001
1.0000 x + 1.0000 y = 1.0000

whose exact solution is x = 1/3 and y = 2/3.

Solving this system by Gaussian elimination with a three-significant-figure mantissa yields

x = -3.33, y = 0.667

and with a four-significant-figure mantissa yields

x = 0.0000 and y = 0.6667
 

4
Example E3.4.2 (continued)
Partial pivoting (switch the rows so that the diagonal elements are largest)
 
1.0000 x + 1.0000 y = 1.0000
0.0003 x + 3.0000 y = 2.0001
 
The solution of this set of equations using Gaussian elimination gives
 
y = 0.667 and x = 0.333 with three-significant-figure arithmetic

y = 0.6667 and x = 0.3333 with four-significant-figure arithmetic
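The effect of the small pivot can be reproduced numerically. The sketch below simulates a short mantissa by rounding every intermediate result to four significant figures; the helper names (fl, solve2) are ours, not from the slides:

```python
from math import floor, log10

def fl(v, sig=4):
    """Round v to `sig` significant figures, mimicking a short mantissa."""
    if v == 0.0:
        return 0.0
    return round(v, sig - 1 - floor(log10(abs(v))))

def solve2(a11, a12, a21, a22, c1, c2, sig=4):
    """Gaussian elimination on a 2x2 system, rounding every result to `sig` figures."""
    m = fl(a21 / a11, sig)                   # elimination multiplier
    a22p = fl(a22 - fl(m * a12, sig), sig)   # updated pivot of row 2
    c2p = fl(c2 - fl(m * c1, sig), sig)
    y = fl(c2p / a22p, sig)
    x = fl(fl(c1 - fl(a12 * y, sig), sig) / a11, sig)
    return x, y

# Inputs as stored with a 4-digit mantissa (2.0001 -> 2.000)
# Naive order: the tiny pivot 0.0003 sits on the diagonal
x_naive, y_naive = solve2(0.0003, 3.0, 1.0, 1.0, fl(2.0001), 1.0)
# Partially pivoted order: rows swapped so the large coefficient is the pivot
x_piv, y_piv = solve2(1.0, 1.0, 0.0003, 3.0, 1.0, fl(2.0001))

print(x_naive, y_naive)   # x collapses to 0, far from the exact x = 1/3
print(x_piv, y_piv)       # close to (0.3333, 0.6667)
```

The naive ordering loses x entirely, while the pivoted ordering recovers both unknowns to four figures.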

5
Example E3.4.3
Problem: Apply full pivoting to the following system to achieve a well-conditioned matrix.
 
3x + 5y - 5z = 3
2x - 4y - z = -3
6x - 5y + z = 2
 
 
 
In matrix form:

[ 3  5 -5 ] {x}   {  3 }
[ 2 -4 -1 ] {y} = { -3 }
[ 6 -5  1 ] {z}   {  2 }

6
Example E3.4.3 (continued)
Solution: First switch the first column with the second column, then switch the
first column with the third column to obtain
-5z + 3x + 5y = 3
- z + 2x - 4y = -3
z + 6x - 5y = 2
Then switch second row with the third row
-5z + 3x + 5y = 3
z + 6x - 5y = 2
-z + 2x - 4y = -3
Yielding finally a well-conditioned system given by

[ -5  3  5 ] {z}   {  3 }
[  1  6 -5 ] {x} = {  2 }
[ -1  2 -4 ] {y}   { -3 }

7
Gauss – Jordan Elimination
This method is very similar to the Gaussian elimination method. The only difference is that the elimination procedure is extended to the elements above the diagonal, so that back substitution is no longer necessary. The elimination process begins with the augmented matrix and continues until the original matrix turns into an identity matrix, with the corresponding modifications applied to the right-hand side.

In short, our goal is to start with the general augmented system and arrive at the right side after appropriate algebraic manipulations.
 
Start                                  Arrive

[ a11 a12 a13 | c1 ]            [ 1 0 0 | c1* ]
[ a21 a22 a23 | c2 ]    ===>    [ 0 1 0 | c2* ]
[ a31 a32 a33 | c3 ]            [ 0 0 1 | c3* ]

The solution can be written at once as

x1 = c1* ;  x2 = c2* ;  x3 = c3*

8
Pseudo Code for Gauss – Jordan Elimination
do for k = 1 to n
    do for j = k+1 to n+1        ! a(i,n+1) represents the right-hand side
        a(k,j) = a(k,j)/a(k,k)
    end do
    do for i = 1 to n ; i not equal to k
        do for j = k+1 to n+1
            a(i,j) = a(i,j) - a(i,k)*a(k,j)
        end do
    end do
end do

cc----- The solution vector is saved in a(i,n+1), i = 1 to n
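The pseudocode above translates almost line for line into Python. This is an illustrative sketch (no pivoting, exactly as in the pseudocode), applied to the system of Example E3.4.4:

```python
def gauss_jordan(a):
    """Gauss-Jordan elimination on an augmented n x (n+1) matrix (in place).
    Normalize row k by its pivot, then eliminate column k from every other
    row. No pivoting, so a zero diagonal element makes it fail."""
    n = len(a)
    for k in range(n):
        pivot = a[k][k]
        for j in range(k, n + 1):          # include the right-hand side column
            a[k][j] /= pivot
        for i in range(n):
            if i == k:
                continue
            factor = a[i][k]
            for j in range(k, n + 1):
                a[i][j] -= factor * a[k][j]
    return [row[n] for row in a]           # solution saved in the last column

# System of Example E3.4.4
aug = [[4.0, 1.0, 1.0, 6.0],
       [-1.0, -5.0, 6.0, 0.0],
       [2.0, -4.0, 1.0, -1.0]]
x = gauss_jordan(aug)
print(x)   # -> approximately [1.0, 1.0, 1.0]
```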

9
Example E3.4.4
Problem: Solve the problem of Equations (3.4.3) using Gauss-Jordan elimination

4 x1 + x2 + x3 = 6
-x1 - 5 x2 + 6 x3 = 0
2 x1 - 4 x2 + x3 = -1

Solution: We start with the augmented coefficient matrix

         [  4  1  1 |  6 ]
A(0) =   [ -1 -5  6 |  0 ]
         [  2 -4  1 | -1 ]

(i) First multiply the first row by 1/4; then multiply it by -1, subtract the result from the second row, and replace the second row with the result of the subtraction. Similarly, multiply the first row by 2, subtract the result from the third row, and replace the third row with the result of the subtraction. These operations lead to

         [ 1.0  0.25  0.25 |  1.5 ]
A(1) =   [ 0   -4.75  6.25 |  1.5 ]
         [ 0   -4.50  0.50 | -4.0 ]

10
Example E3.4.4 (continued)
(ii) Multiply the second row by -1/4.75; then multiply the second row by 0.25, subtract the result from the first row, and replace the first row with the result. Similarly, multiply the second row by -4.5, subtract the result from the third row, and replace the third row with the result to obtain

         [ 1  0   0.5789 |  1.5789 ]
A(2) =   [ 0  1  -1.3158 | -0.3158 ]
         [ 0  0  -5.4211 | -5.4211 ]

(iii) Multiply the third row by -1/5.4211; then multiply the third row by 0.5789, subtract the result from the first row, and replace the first row with the result. Similarly, multiply the third row by -1.3158, subtract the result from the second row, and replace the second row with the result to finally arrive at

         [ 1  0  0 | 1.0000 ]
A(3) =   [ 0  1  0 | 1.0000 ]
         [ 0  0  1 | 1.0000 ]

Hence we found the expected solution x1 = 1.0, x2 = 1.0, and x3 = 1.0. Note, however, that the solutions equal to 1.00 are particular to this example; each system has its own solution, and it is not always equal to one!

11
Finding the Inverse using Gauss-Jordan Elimination
The inverse [B] of a matrix [A] is defined such that

[A][B] = [B][A] = [I]

where [I] is the identity matrix. In other words, to find the inverse of a matrix we need to find the elements of the matrix [B] such that when [A] is multiplied by [B] the result equals the identity matrix. Recalling how matrix multiplication works, this amounts to saying that [A] multiplied by the first column of [B] must equal the first column of [I]; [A] multiplied by the second column of [B] must equal the second column of [I]; and so on.

In summary, using Gauss-Jordan elimination we start from

[  4  1  1 | 1 0 0 ]
[ -1 -5  6 | 0 1 0 ]
[  2 -4  1 | 0 0 1 ]

and obtain

[ 1 0 0 |  0.1845  -0.0485   0.1068 ]
[ 0 1 0 |  0.1262   0.0194  -0.2427 ]
[ 0 0 1 |  0.1359   0.1748  -0.1845 ]
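The same elimination can carry all the identity columns at once. A sketch (the function name invert is ours, not from the slides):

```python
def invert(a):
    """Find the inverse by Gauss-Jordan elimination on [A | I]."""
    n = len(a)
    # build the augmented matrix [A | I]
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for k in range(n):
        pivot = aug[k][k]
        for j in range(2 * n):
            aug[k][j] /= pivot
        for i in range(n):
            if i != k:
                factor = aug[i][k]
                for j in range(2 * n):
                    aug[i][j] -= factor * aug[k][j]
    return [row[n:] for row in aug]        # right half is now the inverse

A = [[4.0, 1.0, 1.0], [-1.0, -5.0, 6.0], [2.0, -4.0, 1.0]]
B = invert(A)
for row in B:
    print([round(v, 4) for v in row])
# first row is approximately [0.1845, -0.0485, 0.1068]
```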

12
LU Decomposition
Lower-upper (LU) triangular decomposition is one of the most widely used techniques for the solution of linear systems of equations, owing to its generality and efficiency. The method calls for first decomposing a given matrix into the product of a lower and an upper triangular matrix such that

[A] = [L][U]

Given [A]{X} = {R}, we write [L][U]{X} = {R} and let [U]{X} = {Q} for an unknown vector {Q}. This additional unknown vector {Q} can be found from

[L]{Q} = {R}

The solution procedure (i.e. the algorithm) can now be summarized as:
(i) Determine [L] and [U] for a given system of equations [A]{X} = {R}
(ii) Calculate {Q} from [L]{Q} = {R} by forward substitution
(iii) Calculate {X} from [U]{X} = {Q} by backward substitution

13
Example E3.4.5
        [  2 -5  1 ]           {  12 }
[A] =   [ -1  3 -1 ]  ; {R} =  {  -8 }
        [  3 -4  2 ]           {  16 }

        [  2   0   0 ]          [ 1 -5/2  1/2 ]
[L] =   [ -1  1/2  0 ]  ; [U] = [ 0   1   -1  ]
        [  3  7/2  4 ]          [ 0   0    1  ]

To find the solution we start with [L]{Q} = {R}; that is

2 q1 + 0 + 0 = 12
-q1 + q2/2 + 0 = -8
3 q1 + 7 q2/2 + 4 q3 = 16

We compute {Q} by forward substitution:

q1 = 12/2 = 6 ;  q2 = (-8 + q1)/(1/2) = -4 ;  q3 = (16 - 3 q1 - 7 q2/2)/4 = 3

Hence {Q}T = {6, -4, 3}.

Then we solve [U]{X} = {Q} by backward substitution:

x1 - (5/2) x2 + (1/2) x3 = 6
0 + x2 - x3 = -4
0 + 0 + x3 = 3

x3 = 3 ;  x2 = -4 + x3 = -1 ;  x1 = 6 + (5/2) x2 - (1/2) x3 = 2

Hence the final solution is {X}T = {2, -1, 3}.
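The two substitution steps can be sketched as follows; the factor matrices and right-hand side are entered directly from the example:

```python
def forward_sub(L, r):
    """Solve [L]{q} = {r} for a lower triangular [L]."""
    n = len(r)
    q = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * q[j] for j in range(i))
        q[i] = (r[i] - s) / L[i][i]
    return q

def backward_sub(U, q):
    """Solve [U]{x} = {q} for an upper triangular [U]."""
    n = len(q)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (q[i] - s) / U[i][i]
    return x

# Factors and right-hand side of Example E3.4.5
L = [[2.0, 0.0, 0.0], [-1.0, 0.5, 0.0], [3.0, 3.5, 4.0]]
U = [[1.0, -2.5, 0.5], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]]
R = [12.0, -8.0, 16.0]

Q = forward_sub(L, R)    # -> [6.0, -4.0, 3.0]
X = backward_sub(U, Q)   # -> [2.0, -1.0, 3.0]
print(Q, X)
```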

14
Crout Decomposition
We illustrate Crout decomposition in detail for a general 3x3 matrix, [L][U] = [A]:

[ l11   0    0  ] [ 1  u12  u13 ]   [ a11 a12 a13 ]
[ l21  l22   0  ] [ 0   1   u23 ] = [ a21 a22 a23 ]
[ l31  l32  l33 ] [ 0   0    1  ]   [ a31 a32 a33 ]

Multiplying [L] by [U] and equating the corresponding elements of the product with those of the matrix [A], we get the following equations:

l11 = a11 ;  l21 = a21 ;  l31 = a31
in index notation: li1 = ai1, i = 1, 2, ..., n   (j = 1, first column)
(The first column of [L] is equal to the first column of [A].)

equation                              solution
l11 u12 = a12            ====>        u12 = a12/l11
l11 u13 = a13            ====>        u13 = a13/l11
in index notation: u1j = a1j/l11, j = 2, 3, ..., n   (i = 1, first row)
l21 u12 + l22 = a22      ====>        l22 = a22 - l21 u12
l31 u12 + l32 = a32      ====>        l32 = a32 - l31 u12
in index notation: li2 = ai2 - li1 u12 ; i = 2, 3, ..., n   (j = 2, second column)

15
Pseudo Code for LU-Decomposition
cc--- Simple coding
do for i = 1 to n
    l(i,1) = a(i,1)
end do
do for j = 2 to n
    u(1,j) = a(1,j)/l(1,1)
end do
do for j = 2 to n-1
    l(j,j) = a(j,j) - Σ(k=1..j-1) l(j,k) u(k,j)
    do for i = j+1 to n
        l(i,j) = a(i,j) - Σ(k=1..j-1) l(i,k) u(k,j)
        u(j,i) = ( a(j,i) - Σ(k=1..j-1) l(j,k) u(k,i) ) / l(j,j)
    end do
end do
l(n,n) = a(n,n) - Σ(k=1..n-1) l(n,k) u(k,n)

c--- Forward substitution
q(1) = r(1)/l(1,1)
do for i = 2 to n
    q(i) = ( r(i) - Σ(j=1..i-1) l(i,j) q(j) ) / l(i,i)
end do

c--- Back substitution
x(n) = q(n)
do for i = n-1 to 1
    x(i) = q(i) - Σ(j=i+1..n) u(i,j) x(j)
end do
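The factorization part of the pseudocode can be sketched as a Crout routine (unit diagonal on [U]), tested here on the matrix of Example E3.4.5; the function name crout is ours:

```python
def crout(a):
    """Crout decomposition: [A] = [L][U] with unit diagonal on [U].
    Column of [L], then row of [U], for each j in turn."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):          # column j of L (includes l(j,j))
            L[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):      # row j of U
            U[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

A = [[2.0, -5.0, 1.0], [-1.0, 3.0, -1.0], [3.0, -4.0, 2.0]]
L, U = crout(A)
print(L)   # matches [L] of Example E3.4.5
print(U)   # matches [U] of Example E3.4.5
```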

16
Cholesky Decomposition for Symmetric Matrices
For symmetric matrices the LU-decomposition can be further simplified to take advantage of the symmetry property

[A] = [A]T ;  or  aij = aji     (3.4.9)

Then it follows that

[A] = [L][U] = [A]T = ([L][U])T = [U]T [L]T

That is,

[L] = [U]T ;  [U] = [L]T     (3.4.10)

or, in index notation,

uij = lji     (3.4.11)

It should be noted that the diagonal elements of [U] are no longer necessarily equal to one. Comment: this algorithm should be programmed using complex arithmetic (the square roots involved can turn imaginary when [A] is not positive definite).
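A sketch of the resulting Cholesky-type factorization [A] = [L][L]T, using complex square roots as the comment above suggests; the test matrix is our own choice, not from the slides:

```python
from cmath import sqrt   # complex square root, per the comment above

def cholesky(a):
    """Cholesky factorization [A] = [L][L]^T for a symmetric [A].
    Complex arithmetic lets it proceed even when a pivot turns negative."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = sqrt(a[i][i] - s)    # diagonal element
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# A symmetric positive definite test matrix (our choice); L = [[2,0,0],[1,2,0],[1,1,2]]
A = [[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]]
L = cholesky(A)
```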

17
Tridiagonal Matrix Algorithm (TDMA),
also known as Thomas Algorithm
        [ b1 c1  0  0 ]
[A] =   [ a2 b2 c2  0 ]
        [ 0  a3 b3 c3 ]
        [ 0  0  a4 b4 ]

If we apply LU-decomposition to this matrix, taking into account its special structure, and let

        [ f1  0  0  0 ]          [ 1 g1  0  0 ]
[L] =   [ e2 f2  0  0 ]  ; [U] = [ 0  1 g2  0 ]
        [ 0  e3 f3  0 ]          [ 0  0  1 g3 ]
        [ 0  0  e4 f4 ]          [ 0  0  0  1 ]

then by multiplying [L] and [U] and equating the corresponding elements of the product to those of the original matrix [A], it can be shown that

f1 = b1
g1 = c1/f1
e2 = a2

and for k = 2, 3, ..., n-1 further

ek = ak                en = an
fk = bk - ek gk-1      fn = bn - en gn-1
gk = ck/fk

18
TDMA Algorithm
c----- Decomposition
c(1) = c(1)/b(1)
do for k = 2 to n-1
    b(k) = b(k) - a(k) c(k-1)
    c(k) = c(k)/b(k)
end do
b(n) = b(n) - a(n) c(n-1)
c----- (note: r(k), k = 1, 2, ..., n is the right-hand side of the equations)
c----- Forward substitution
r(1) = r(1)/b(1)
do for k = 2 to n
    r(k) = ( r(k) - a(k) r(k-1) )/b(k)
end do
c----- Back substitution
x(n) = r(n)
do for k = n-1 to 1
    x(k) = r(k) - c(k) x(k+1)
end do
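The three stages above can be sketched directly in Python; the argument conventions (sub-diagonal with an unused first entry, and so on) are our choice:

```python
def tdma(a, b, c, r):
    """Thomas algorithm, transcribed from the pseudocode above.
    a, b, c: sub-, main-, and super-diagonals (a[0] and c[n-1] unused);
    r: right-hand side. The input lists are modified in place."""
    n = len(b)
    # decomposition
    c[0] = c[0] / b[0]
    for k in range(1, n - 1):
        b[k] = b[k] - a[k] * c[k - 1]
        c[k] = c[k] / b[k]
    b[n - 1] = b[n - 1] - a[n - 1] * c[n - 2]
    # forward substitution
    r[0] = r[0] / b[0]
    for k in range(1, n):
        r[k] = (r[k] - a[k] * r[k - 1]) / b[k]
    # back substitution
    x = [0.0] * n
    x[n - 1] = r[n - 1]
    for k in range(n - 2, -1, -1):
        x[k] = r[k] - c[k] * x[k + 1]
    return x

# Matrix of Example E3.4.6 with a right-hand side chosen so that x = (1, 1, 1)
x = tdma([0.0, 1.0, 1.0], [2.0, 4.0, 2.0], [2.0, 4.0, 0.0], [4.0, 9.0, 3.0])
print(x)   # -> approximately [1.0, 1.0, 1.0]
```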

19
Example E3.4.6
Problem: Determine the LU-decomposition for the following matrix using TDMA

        [ 2 2 0 ]
[A] =   [ 1 4 4 ]
        [ 0 1 2 ]

Solution: n = 3

c1 = c1/b1 = 2/2 = 1
k = 2:
b2 = b2 - a2 c1 = 4 - (1)(1) = 3
c2 = c2/b2 = 4/3
k = n = 3:
b3 = b3 - a3 c2 = 2 - (1)(4/3) = 2/3

The other elements remain the same; hence

        [ 2 0  0  ]          [ 1 1  0  ]
[L] =   [ 1 3  0  ]  ; [U] = [ 0 1 4/3 ]
        [ 0 1 2/3 ]          [ 0 0  1  ]

It can be verified that the product [L][U] is indeed equal to [A].

20
Iterative Methods for Solving Linear Systems
Iterative methods are those in which an initial guess is made for the solution vector and then corrected sequentially until a certain error tolerance is reached. The concept of iterative computation was introduced in Chapter 2 for nonlinear equations. The idea is the same here, except that we deal with linear systems of equations. In this regard, the fixed-point iteration method presented in Chapter 2 for two (nonlinear) equations in two unknowns is repeated here, because that is precisely what Jacobi iteration is.

21
Jacobi Iteration
Two linear equations, in general, can be written as
a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2
We now rearrange these equations such that each unknown is written as a function of the others,
i.e.
x1 = ( c1 - a12 x2 )/ a11= f(x1,x2)
x2 = ( c2 - a21 x1 )/ a22= g(x1,x2)
which are in the same form as those used for fixed-point iteration (Chapter 2). The functions f(x1,x2) and g(x1,x2) are introduced for generality. It is implicitly assumed that the diagonal elements of the coefficient matrix are not zero. If there is a zero diagonal element, it should be removed by changing the order of the equations, i.e. by pivoting. Jacobi iteration calls for starting with a guess for the unknowns, x1 and x2, and then finding new values from these equations iteratively. If certain conditions are satisfied, this iteration procedure converges to the right answer, as it did in the case of the fixed-point iteration method.

22
Example E3.5.1
Problem: Solve the following system of equations using Jacobi iteration

10 x1 + 2 x2 + 3 x3 = 23
2 x1 - 10 x2 + 3 x3 = -9
-x1 - x2 + 5 x3 = 12

Solution: Rearranging,

x1 = (23 - 2 x2 - 3 x3)/10
x2 = (-9 - 2 x1 - 3 x3)/(-10)
x3 = (12 + x1 + x2)/5

Starting with the somewhat arbitrary guess x1 = 0, x2 = 0, x3 = 0 and iterating yields the values shown in the table below.

ITER   X1          X2          X3          Error Norm, E = Σ(i=1..n) |xi_new - xi_old|
0      0           0           0           ---
1      2.300000    0.900000    2.400000    5.600000
2      1.400000    2.080000    3.040000    2.720000
3      0.972000    2.092000    3.096000    4.960001E-01
4      0.952800    2.023200    3.012800    1.712000E-01
5      0.991520    1.994400    2.995200    8.512014E-02
6      1.002560    1.996864    2.997184    1.548803E-02
7      1.001472    1.999667    2.999885    6.592035E-03
8      1.000101    2.000260    3.000228    2.306700E-03
9      0.9998797   2.000089    3.000072    5.483031E-04
10     0.9999606   1.999998    2.999994    2.506971E-04
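The iteration can be reproduced with a few lines of Python; the tolerance and iteration cap are our choices:

```python
def jacobi(a, c, x0, tol=0.5e-4, max_iter=100):
    """Jacobi iteration: every new value is computed from the previous
    iterate only. Error norm E = sum of |xi_new - xi_old|."""
    n = len(c)
    x = x0[:]
    for it in range(1, max_iter + 1):
        x_new = [(c[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
                 for i in range(n)]
        err = sum(abs(x_new[i] - x[i]) for i in range(n))
        x = x_new
        if err < tol:
            return x, it
    return x, max_iter

A = [[10.0, 2.0, 3.0], [2.0, -10.0, 3.0], [-1.0, -1.0, 5.0]]
c = [23.0, -9.0, 12.0]
x, iters = jacobi(A, c, [0.0, 0.0, 0.0])
print(x, iters)   # converges toward (1, 2, 3)
```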

23
Convergence Criteria for Jacobi Iteration




|∂f/∂x2| = |a12/a11| < 1

|∂g/∂x1| = |a21/a22| < 1

In general, a sufficient (but not necessary) condition for convergence of Jacobi iteration is

Σ(j=1..n, j≠i) |aij| / |aii| < 1

In other words, the sum of the absolute values of the off-diagonal elements on each row must be less than the absolute value of the diagonal element of the coefficient matrix. Such matrices are called strictly diagonally dominant matrices. The reader can easily show that the matrix of Example E3.5.1 does satisfy this criterion.

24
Gauss - Seidel Iteration
This method is essentially the same as the Jacobi iteration method, except that the new values of the variables (i.e. the most recently updated values) are used in subsequent calculations without waiting for the completion of one full iteration over all variables. When the convergence criterion is satisfied, Gauss-Seidel iteration takes fewer iterations than Jacobi iteration to reach the same error bound. To illustrate how this procedure works, we present the first two iterations of the Gauss-Seidel method applied to the example problem:
x1 = ( 23 - 2x2 - 3x3 )/10
x2 = ( -9 - 2x1 - 3x3 )/(-10)
x3 = (12 + x1 + x2 )/5
ITER=0, x1=0, x2=0, x3=0 (initial guess)
ITER=1,
x1 = ( 23 - 0 -0 )/10 = 2.3

25
Gauss - Seidel Iteration (II)
Now this new value of x1 is used in the second equation, as opposed to using x1 = 0 (the previous value of x1) as in Jacobi iteration; that is,
x2 = [-9 - (2)(2.3) - 0]/(-10) = 1.36
Similarly, in the third equation the most recent values of x1 and x2 are used. Hence
x3 = (12 + 2.3 + 1.36)/5 = 3.132
And so on for the rest of the iterations:
ITER=2
x1 = [23 - (2)(1.36) - (3)(3.132)]/10 = 1.0884
x2 = [-9 - (2)(1.0884) - (3)(3.132)]/(-10) = 2.05728
x3 = (12 + 1.0884 + 2.05728)/5 = 3.029136

26
Example E3.5.2
Problem: Solve the following system of equations (from Example E3.5.1) using Gauss -
Seidel iteration.
10 x1 + 2x2 + 3x3 = 23
2x1 - 10x2 + 3x3 = -9
- x1 - x2 + 5x3 = 12
Table E3.5.1 Convergence trend of Gauss-Seidel method for Example E3.5.1
ITER X1 X2 X3 Error Norm, E
0 0 0 0 ---
1 2.300000E+00 1.360000E+00 3.132000E+00 6.792000E+00
2 1.088400E+00 2.057280E+00 3.029136E+00 2.011744E+00
3 9.798032E-01 2.004701E+00 2.996901E+00 1.934104E-01
4 9.999894E-01 1.999068E+00 2.999811E+00 2.872980E-02
5 1.000243E+00 1.999992E+00 3.000047E+00 1.412988E-03
6 9.999875E-01 2.000012E+00 3.000000E+00 3.223419E-04
7 9.999977E-01 1.999999E+00 3.000000E+00 2.276897E-05
8 1.000000E+00 2.000000E+00 3.000000E+00 3.457069E-06
9 1.000000E+00 2.000000E+00 3.000000E+00 3.576279E-07
10 1.000000E+00 2.000000E+00 3.000000E+00 0.000000E+00
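A sketch of the Gauss-Seidel sweep; the only change from Jacobi is that x is updated in place, so each new value is used immediately:

```python
def gauss_seidel(a, c, x0, tol=0.5e-6, max_iter=100):
    """Gauss-Seidel iteration: each updated value is used immediately
    within the same sweep."""
    n = len(c)
    x = x0[:]
    for it in range(1, max_iter + 1):
        err = 0.0
        for i in range(n):
            old = x[i]
            x[i] = (c[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
            err += abs(x[i] - old)
        if err < tol:
            return x, it
    return x, max_iter

A = [[10.0, 2.0, 3.0], [2.0, -10.0, 3.0], [-1.0, -1.0, 5.0]]
c = [23.0, -9.0, 12.0]
x, iters = gauss_seidel(A, c, [0.0, 0.0, 0.0])
print(x, iters)   # reaches (1, 2, 3) in fewer sweeps than Jacobi
```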

27
Convergence Criteria for Gauss - Seidel Iteration
The convergence criteria for the Gauss-Seidel method are more relaxed than those for Jacobi iteration. Here a sufficient condition is

Σ(j=1..n, j≠i) |aij| / |aii| ≤ 1 for all equations, with strict inequality (< 1) for at least one equation.

This condition is also known as the Scarborough criterion. We also introduce the following theorem concerning the convergence of the Gauss-Seidel iteration method.

Theorem:
Let [A] be a real symmetric matrix with positive diagonal elements. Then the Gauss-Seidel method for solving [A]{x} = {c} will converge for any choice of initial guess if and only if [A] is a positive definite matrix.

Definition:
If {x}T [A] {x} > 0 for all nonzero complex vectors {x}, then the real symmetric matrix [A] is said to be positive definite. (Note also that [A] is positive definite if and only if all of its eigenvalues are real and positive.)

28
Relaxation Concept
Iterative methods such as Jacobi and Gauss-Seidel iteration are successive prediction-correction methods, in that an initial guess is corrected to compute a new approximate solution, then the new solution is corrected, and so on. The Gauss-Seidel method, for example, can be formulated as

xi(k+1) = xi(k) + Δxi(k)

where k denotes the number of iterations and Δxi(k) is the correction to the k-th solution, obtained from

Δxi(k) = xi(e) - xi(k)

Here xi(e) is the estimate produced by the Gauss-Seidel formula. To interpret and understand this equation better we let

xi(e) = xi_new ,  xi(k) = xi_old

and, introducing a relaxation factor ω, write it as

xi_new = ω xi_new + (1 - ω) xi_old

29
Example E3.5.4
Problem: Show that the Gauss-Seidel iteration method converges in 50 iterations when applied to the following system without relaxation (i.e. ω = 1):

x1 + 2 x2 = -5
x1 - 3 x2 = 20

Find an optimum relaxation factor that will minimize the number of iterations needed to achieve a solution within the error bound

||Ei|| < 0.5 x 10^-5 ,  where ||Ei|| = Σ(i=1..2) |xi_new - xi_old|

Solution: A few iterations are shown here with ω = 0.8.
Let x1_old = 0 and x2_old = 0 (initial guess)
x1_new = (-5 - 2 x2_old) = -5
After a few iterations we reach
Under-relax: x2_new = (0.8)(-4.853) + (0.2)(-6.4) = -5.162
x2_old = -5.162 (switch old and new)

30
Example E3.5.4 (continued)
Table E3.5.4b Number of Iterations versus Relaxation Factor for Example E3.5.4

ω       Number of iterations
0.20    62
0.40    30
0.60    17
0.70    14
0.80    11
0.90    15
1.00    50
1.05    84
1.10    >1000

It is seen that the minimum number of iterations is obtained with a relaxation factor of ω = 0.80.
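The trend in the table can be reproduced with a small experiment; the tolerance matches the stated error bound, though the exact iteration counts may differ slightly from the table depending on rounding:

```python
def gs_relaxed(omega, tol=0.5e-5, max_iter=2000):
    """Gauss-Seidel with relaxation for the system of Example E3.5.4:
        x1 + 2*x2 = -5
        x1 - 3*x2 = 20
    Each Gauss-Seidel estimate is blended with the old value:
        x_new = omega * x_estimate + (1 - omega) * x_old."""
    x1, x2 = 0.0, 0.0
    for it in range(1, max_iter + 1):
        x1_new = omega * (-5.0 - 2.0 * x2) + (1.0 - omega) * x1
        x2_new = omega * ((x1_new - 20.0) / 3.0) + (1.0 - omega) * x2
        err = abs(x1_new - x1) + abs(x2_new - x2)
        x1, x2 = x1_new, x2_new
        if err < tol:
            return (x1, x2), it
    return (x1, x2), max_iter

results = {}
for omega in (0.4, 0.8, 1.0):
    (x1, x2), iters = gs_relaxed(omega)
    results[omega] = iters
    print(omega, iters, x1, x2)   # omega = 0.8 needs the fewest iterations
```

The exact solution of this system is x1 = 5, x2 = -5, and ω = 0.8 reaches it in far fewer sweeps than either ω = 0.4 or ω = 1.0.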

31
Case Study - The Problem of Falling Objects
(Figure: three falling masses m1, m2, m3 connected in a chain, with cord tensions T1 and T2; g denotes gravity.)

-m1 a = c1 v^2 - m1 g - T1
-m2 a = c2 v^2 - m2 g - T1 + T2
-m3 a = c3 v^2 - m3 g + T2

32
Case Study - The Problem of Falling Objects
These three equations can be used to solve for any three of the unknown parameters m1, m2, m3, c1, c2, c3, v, a, T1, and T2. As an example, here we set up a problem by specifying the masses, the drag coefficients, and the acceleration as

m1 = 10 kg, m2 = 6 kg, m3 = 14 kg, c1 = 1.0 kg/m, c2 = 1.5 kg/m, c3 = 2.0 kg/m, and a = 0.5 g

and find the remaining three unknowns v, T1, and T2. The acceleration of gravity is g = 9.81 m/sec^2.

Substituting these values in the given set of equations and rearranging yields

v^2 - T1 = 49.05
1.5 v^2 + T1 - T2 = 29.43
2.0 v^2 + T2 = 68.67

v^2 = 32.7 (v = 5.72 m/s), T1 = -16.35 N, T2 = 3.27 N   (N = newton = kg·m/sec^2)
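The 3x3 system above can be solved with a short Gaussian-elimination routine (with partial pivoting, as discussed earlier); the function name solve3 is ours:

```python
def solve3(a, c):
    """Solve a 3x3 system by Gaussian elimination with partial pivoting.
    Used here on the falling-object system in the unknowns (v^2, T1, T2)."""
    aug = [row[:] + [rhs] for row, rhs in zip(a, c)]
    n = 3
    for k in range(n):
        # partial pivoting: bring the largest coefficient onto the diagonal
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[p] = aug[p], aug[k]
        for i in range(k + 1, n):
            m = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= m * aug[k][j]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x

A = [[1.0, -1.0, 0.0],    #     v^2 - T1      = 49.05
     [1.5, 1.0, -1.0],    # 1.5 v^2 + T1 - T2 = 29.43
     [2.0, 0.0, 1.0]]     # 2.0 v^2      + T2 = 68.67
v2, T1, T2 = solve3(A, [49.05, 29.43, 68.67])
print(v2, T1, T2)   # -> approximately 32.7, -16.35, 3.27
print(v2 ** 0.5)    # v, about 5.72 m/s
```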

33
References
1. Celik, Ismail B., Introductory Numerical Methods for Engineering Applications, Ararat Books & Publishing, LLC, Morgantown, 2001.
2. Fausett, Laurene V., Numerical Methods: Algorithms and Applications, Prentice Hall / Pearson Education, Upper Saddle River, NJ, 2003.
3. Rao, Singiresu S., Applied Numerical Methods for Engineers and Scientists, Prentice Hall, Upper Saddle River, NJ, 2002.
4. Mathews, John H.; Fink, Kurtis D., Numerical Methods Using MATLAB, Fourth Edition, Prentice Hall, Upper Saddle River, NJ, 2004.
5. Varol, A., Sayisal Analiz (Numerical Analysis), in Turkish, course notes, Firat University, 2001.
6. http://mathonweb.com/help/backgd3e.htm