METHOD OF JACOBI

jorgeduardooo 6,092 views 15 slides Jul 22, 2010


Chapter 7.
Iterative Solutions of Systems of Equations
EFLUM – ENAC – EPFL

Contents
1. Introduction
2. First example: Scalar Equation
3. Iterative solutions of a system of equations: Jacobi iteration method
4. Iterative methods for finite difference equations: Back to problem 6.3
5. The Successive Over Relaxation (SOR)

Introduction: The fixed point iteration

Previous method (used in the previous class): Gaussian elimination (or "\" in MATLAB).
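As a sketch of that previous direct approach: MATLAB's A\b corresponds to NumPy's np.linalg.solve (the small 2×2 system below is my own illustration, not from the slides):

```python
import numpy as np

# Direct solution by Gaussian elimination: MATLAB's A\b is
# np.linalg.solve(A, b) in NumPy.  (Small example of my own.)
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)  # → [2. 3.]
```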

Introduction: The fixed point iteration

The main concept:

    x_{k+1} = g(x_k)

Example: 3x = 6 is rewritten as 2x = -x + 6, giving the iteration

    x_{k+1} = -(1/2) x_k + 3

[Figure: value of x versus iteration step (0 to 10) for starting values x0 = 4, x0 = 1, and x0 = 0; all sequences converge to the root x = 2.]
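The scalar example above can be sketched in Python (the helper name `fixed_point` is my own; the iteration g(x) = -x/2 + 3 is the one from the slide):

```python
def fixed_point(g, x0, n):
    """Run n steps of the fixed-point iteration x_{k+1} = g(x_k)."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# 3x = 6 rewritten as 2x = -x + 6, i.e. g(x) = -x/2 + 3
g = lambda x: -x / 2 + 3

# As in the slide's figure, x0 = 4, 1, and 0 all approach the root x = 2.
for x0 in (4.0, 1.0, 0.0):
    print(x0, fixed_point(g, x0, 10))
```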

Introduction: The fixed point iteration

The proof of convergence: the fixed point x* = 2 satisfies x* = -(1/2) x* + 3. Subtracting this from x_{k+1} = -(1/2) x_k + 3 gives

    x_{k+1} - x* = -(1/2)(x_k - x*)

so the error shrinks by a factor 1/2 at every step: |x_k - x*| = (1/2)^k |x_0 - x*| → 0.
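A quick numerical check of the error-halving relation (plain Python, my own sketch):

```python
# Verify x_{k+1} - 2 = -(1/2)(x_k - 2) for the iteration x_{k+1} = -x/2 + 3.
x = 4.0
for _ in range(5):
    x_new = -x / 2 + 3
    # the new error must be exactly minus half the old error
    assert abs((x_new - 2.0) + 0.5 * (x - 2.0)) < 1e-15
    x = x_new
print(x)  # → 1.9375 (error 2 * (1/2)^5 = 0.0625 below the root x = 2)
```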

Introduction: The fixed point iteration

First iteration method: 3x = 6 is split as 2x = -x + 6, giving

    x_{k+1} = -(1/2) x_k + 3

This iteration always converges.

Generalized iteration method: split 3x = 6 as (3 - α)x + αx = 6, giving

    x_{k+1} = -((3 - α)/α) x_k + 6/α

Does it converge?

Introduction: The fixed point iteration

Generalized iteration method: 3x = 6 is split as (3 - α)x + αx = 6, giving

    x_{k+1} = -((3 - α)/α) x_k + 6/α

Converges? No for α ≤ 3/2, where the iteration factor has magnitude ≥ 1; yes for α > 3/2.

[Figure: magnitude of the iteration factor |(3 - α)/α| versus the splitting parameter α; it equals 1 at α = 1.5 and 0 at α = 3.]

Introduction: The fixed point iteration

Generalized iteration method:

    x_{k+1} = -((3 - α)/α) x_k + 6/α

How fast does it converge? The smaller the factor |(3 - α)/α| is, the faster the convergence.

[Figure: value of x_n versus iteration step (0 to 10) for α = 1.42, 1.5, 2.0, 2.5, 3.0, 4.0, and 5.0.]
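The α sweep from the figure can be sketched in Python (my own helper names; the splitting is the one from the slide):

```python
# Convergence factor of the splitting (3 - a)x + a x = 6:
# x_{k+1} = -((3 - a)/a) x_k + 6/a, with factor |(3 - a)/a|.
def factor(a):
    return abs((3.0 - a) / a)

def run(a, x0=0.0, n=10):
    x = x0
    for _ in range(n):
        x = -((3.0 - a) / a) * x + 6.0 / a
    return x

for a in (1.42, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0):
    print(a, factor(a), run(a))   # factor >= 1 for a <= 1.5: no convergence
```

Note that α = 3 gives a factor of 0 (the exact answer in one step), matching the figure's fastest curve.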

Iterative solution of a system of equations

Jacobi iteration approach: consider the problem Ax = b, and split A into

    D: diagonal elements of A
    L: lower elements of A
    U: upper elements of A
    L + U: matrix B

The Jacobi step is D x_{k+1} = b - (L + U) x_k, i.e. x_{k+1} = Q x_k + D^{-1} b with iteration matrix Q = -D^{-1}(L + U), and the iteration converges when ||Q|| < 1.
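The Jacobi splitting can be sketched in Python/NumPy (the 3×3 diagonally dominant system is my own example, not from the slides):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, maxit=1000):
    """Jacobi iteration: x_{k+1} = Q x_k + D^-1 b with Q = -D^-1 (L + U)."""
    Dinv = np.diag(1.0 / np.diag(A))
    B = A - np.diag(np.diag(A))        # B = L + U (off-diagonal part of A)
    Q = -Dinv @ B
    x = np.asarray(x0, dtype=float)
    for k in range(1, maxit + 1):
        x_new = Q @ x + Dinv @ b
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k
        x = x_new
    return x, maxit

# Strictly diagonally dominant, so ||Q||_inf < 1 and the iteration converges.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
x, iters = jacobi(A, b, np.zeros(3))
print(x, iters)
```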

Iterative solution of a system of equations

Some notes about the vector norm
[The vector norm calculation appeared here as an equation image.]

Some notes about the matrix norm
[The matrix norm calculation appeared here as an equation image.]
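The slide's norm formulas were lost in extraction; a common choice in this setting is the infinity norm (an assumption on my part), sketched below:

```python
import numpy as np

v = np.array([1.0, -4.0, 2.0])
M = np.array([[2.0, -1.0],
              [3.0, 0.5]])

vnorm = np.max(np.abs(v))                  # vector infinity norm: max |v_i|
Mnorm = np.max(np.sum(np.abs(M), axis=1))  # matrix infinity norm: max row sum
print(vnorm, Mnorm)  # → 4.0 3.5
```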

Iterative solution of a system of equations

Back to problem 6.3 (1° case – last class)

Back to the Laplace equation (last class example):

M_diag=sparse(1:21,1:21,-4,21,21);
L_diag1=sparse(2:21,1:20,1,21,21);
L_diag2=sparse(8:21,1:14,1,21,21);
L_diag1(8,7)=0; L_diag1(15,14)=0;
A=M_diag+L_diag1+L_diag2+L_diag1'+L_diag2';
b=zeros(21,1); b(7)=-100; b(14)=-100; b(21)=-100;
L=L_diag1+L_diag2; U=L';
LnU=L+U;
Dinv=inv(M_diag);            % D^-1
Q=-Dinv*LnU;                 % iteration matrix is Q=-Dinv*LnU
convcrit=1e9;
h_old=ones(21,1);
kount=0;
while convcrit>1e-3          % loop ends when fractional change in h < 10^-3
    kount=kount+1;
    h=Q*h_old+Dinv*b;
    convcrit=max(abs(h-h_old)./h);
    h_old=h;
end
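For readers without MATLAB, the same Jacobi solve can be sketched in Python/NumPy (dense arrays instead of sparse, 0-based indexing; a port under my own reading of the slide's grid):

```python
import numpy as np

n = 21
L = np.zeros((n, n))
for i in range(1, n):              # first sub-diagonal (MATLAB L_diag1)...
    L[i, i - 1] = 1.0
L[7, 6] = 0.0                      # ...with the gaps L_diag1(8,7)=0
L[14, 13] = 0.0                    # and L_diag1(15,14)=0
for i in range(7, n):              # seventh sub-diagonal (MATLAB L_diag2)
    L[i, i - 7] = 1.0
U = L.T
A = -4.0 * np.eye(n) + L + U       # A = M_diag + L + L'

b = np.zeros(n)
b[6] = b[13] = b[20] = -100.0      # MATLAB b(7), b(14), b(21)

Dinv = np.diag(1.0 / np.diag(A))   # D^-1
Q = -Dinv @ (L + U)                # Jacobi iteration matrix

h_old = np.ones(n)
convcrit = 1e9
kount = 0
while convcrit > 1e-3:             # stop when fractional change in h < 10^-3
    kount += 1
    h = Q @ h_old + Dinv @ b
    convcrit = np.max(np.abs(h - h_old) / h)
    h_old = h
print(kount)
```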


Successive over relaxation method

Before: the Jacobi iteration approach, with spectral radius

    ρ(Q) = abs(max(eig(Q)))

Now: the Successive Over Relaxation method (SOR).

S(ω): iteration matrix.
ρ(Q): magnitude of the largest eigenvalue of the Jacobi iteration matrix.
ω_opt: iteration parameter, chosen to accelerate convergence.

Iterative solution of a system of equations

wopt=2/(1+sqrt(1-(normest(Q))^2));
y=inv(D*(1/wopt)+L);
S=-y*(U+(1-(1/wopt))*D);

Jacobi iteration (slow):

convcrit=1e9;
h_old=ones(21,1);
kount=0;
while convcrit > 1e-3
    kount=kount+1;
    h=Q*h_old+Dinv*b;
    convcrit=max(abs(h-h_old)./h);
    h_old=h;
end

SOR iteration (fast ☺):

convcrit=1e9;
h_old=ones(21,1);
kount=0;
while convcrit > 1e-3
    kount=kount+1;
    h=S*h_old+y*b;
    convcrit=max(abs(h-h_old)./h);
    h_old=h;
end
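The Jacobi-versus-SOR comparison can be sketched in Python (a small 3×3 system of my own; I use the spectral radius of Q where the slide's MATLAB uses normest):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])      # exact solution is [1, 1, 1]

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)
Dinv = np.linalg.inv(D)
Q = -Dinv @ (L + U)                        # Jacobi iteration matrix

rho = np.max(np.abs(np.linalg.eigvals(Q))) # spectral radius rho(Q)
wopt = 2.0 / (1.0 + np.sqrt(1.0 - rho**2)) # optimal SOR parameter
y = np.linalg.inv(D / wopt + L)
S = -y @ (U + (1.0 - 1.0 / wopt) * D)      # SOR iteration matrix

def iterate(M, c, tol=1e-10, maxit=10000):
    """Iterate x_{k+1} = M x_k + c from zero until the update is below tol."""
    x_old = np.zeros(len(c))
    for k in range(1, maxit + 1):
        x = M @ x_old + c
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
        x_old = x
    return x_old, maxit

x_j, k_j = iterate(Q, Dinv @ b)
x_s, k_s = iterate(S, y @ b)
print(k_j, k_s)  # SOR reaches the tolerance in fewer steps
```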