Swarm PSO and Grey Wolf Optimization


Swarm Optimization Algorithms

Definition of optimization problems
An optimization problem seeks the largest (or smallest) value of a quantity (such as maximum revenue or minimum surface area) subject to certain limits on the problem.
An optimization problem can usually be expressed as "find the maximum (or minimum) value of some quantity Q under a certain set of given conditions".

Problems that can be modelled and solved by optimization
techniques
Scheduling Problems (production, airline, etc.)
Network Design Problems
Facility Location Problems
Inventory management
Transportation Problems
Minimum spanning tree problem
Shortest path problem
Maximum flow problem
Min-cost flow problem

1. Classical Optimization
Useful in finding the optimum solution or unconstrained maxima
or minima of continuous and differentiable functions.
Analytical methods make use of differential calculus in locating
the optimum solution

cont.…

Classical methods have limited scope in practical applications, as many practical problems involve objective functions that are not continuous and/or differentiable.
They nevertheless form the basis for most of the numerical techniques that have evolved into the advanced methods better suited to today's practical problems.

2. Numerical Methods

Linear Program (LP)
studies the case in which the objective function (f) is linear and the design-variable space (A) is specified using only linear equalities and inequalities.
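For illustration (an example of my own, not taken from the slides), a small LP instance in this form is

\max_{x_1, x_2} \; 3x_1 + 2x_2 \quad \text{subject to} \quad x_1 + x_2 \le 4, \; x_1 \le 3, \; x_1, x_2 \ge 0,

whose optimum is x_1 = 3, x_2 = 1 with objective value 11.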

Optimization Problem Types
Non-Linear Program (NLP)
studies the general case in which the objective function, the constraints, or both contain nonlinear parts.
Convex problems are comparatively easy to solve.
Non-convex problems are harder; there is no guarantee of finding the global optimum.

Optimization Problem Types
Integer Programs (IP)
studies linear programs in which some or all variables are constrained to
take on integer values
Quadratic programming
allows the objective function to have quadratic terms, while the set (A) must still be specified with linear equalities and inequalities.

Optimization Problem Types
Stochastic Programming
studies the case in which some of the constraints depend on random variables
Dynamic programming
studies the case in which the optimization strategy is based on splitting the
problem into smaller sub-problems.

3. Advanced Methods
Swarm Intelligence (SI) based algorithms
Bio-inspired (not SI-based) algorithms
Physics- and chemistry-based algorithms
Others

Particle Swarm optimization

Introduction to Optimization
Optimization can be defined as a mechanism through which the maximum or minimum value of a given function or process can be found.
The function that we try to minimize or maximize is called the objective function.
It is expressed in terms of variables and parameters.
Statement of an optimization problem:
Minimize f(x)
subject to g(x) <= 0
h(x) = 0.
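As a concrete instance of this template (my own illustration, not from the slides):

\min_{x} \; f(x) = x_1^2 + x_2^2 \quad \text{subject to} \quad g(x) = 1 - x_1 - x_2 \le 0, \quad h(x) = x_1 - x_2 = 0,

whose optimum is x_1 = x_2 = 1/2 with f(x) = 1/2.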

Particle Swarm Optimization (PSO)
Inspired by the social behavior, dynamic movements, and communication of insects, birds, and fish in nature.

Particle Swarm Optimization (PSO)
Uses a number of agents (particles) that constitute a swarm moving around the search space looking for the best solution.
Each particle adjusts its "flying" according to its own flying experience as well as the flying experience of other particles.
Each particle has three attributes: position, velocity, and previous best position. The position of the particle with the best fitness value is called the global best position.

Contd..
Collection of flying particles (swarm) - Changing solutions
Search area - Possible solutions
Movement towards a promising area to get the global optimum.
Each particle adjusts its travelling speed dynamically according to the flying experiences of itself and its colleagues.
Each particle keeps track of:
• its own best solution so far (personal best, pbest).
• the best value found by any particle (global best, gbest).
Each particle modifies its position according to:
• its current position
• its current velocity
• the distance between its current position and pbest
• the distance between its current position and gbest

Algorithm - Parameters
f: Objective function
Xi: Position of the particle or agent.
Vi: Velocity of the particle or agent.
A: Population of agents.
W: Inertia weight.
C1: cognitive constant.
R1, R2: random numbers.
C2: social constant.

Algorithm - Steps
1. Create a 'population' of agents (particles) uniformly distributed over X.
2. Evaluate each particle's position according to the objective function (say y = f(x) = -x^2 + 5x + 20).
3. If a particle's current position is better than its previous best position, update it.
4. Determine the best particle (according to the particles' previous best positions).

Contd..
5. Update particles' velocities (see the update equations below).
6. Move particles to their new positions.
7. Go to step 2 until the stopping criteria are satisfied.
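In the standard PSO formulation, written with the parameters defined above, these two steps are

V_i^{t+1} = W \, V_i^{t} + C_1 R_1 \left( pbest_i - X_i^{t} \right) + C_2 R_2 \left( gbest - X_i^{t} \right)

X_i^{t+1} = X_i^{t} + V_i^{t+1}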

Contd…
A particle's velocity has three components:
1. Inertia
• Makes the particle move in the same direction and with the same velocity.
2. Personal influence
• Improves the individual.
• Makes the particle return to a previous position that was better than the current one.
• Conservative.
3. Social influence
• Makes the particle follow the direction of its best neighbors.

Acceleration coefficients
• When c1 = c2 = 0, all particles continue flying at their current speed until they hit the search space's boundary. The velocity update equation reduces to

v_{ij}^{t+1} = v_{ij}^{t}

• When c1 > 0 and c2 = 0, all particles are independent, each attracted only to its own best position. The velocity update equation becomes

v_{ij}^{t+1} = v_{ij}^{t} + c_1 r_{1j}^{t} \left( P_{best,ij} - x_{ij}^{t} \right)

• When c1 = 0 and c2 > 0, all particles are attracted to a single point (the global best) in the entire swarm, and the velocity update becomes

v_{ij}^{t+1} = v_{ij}^{t} + c_2 r_{2j}^{t} \left( g_{best,j} - x_{ij}^{t} \right)

• When c1 = c2, all particles are attracted towards the average of pbest and gbest.

Example:

Flow chart of Algorithm
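A minimal Python sketch of the loop shown in the flow chart (an illustration of my own, not code from the slides), applied to the example objective f(x) = -x^2 + 5x + 20 from step 2 above. The values of w, c1, c2, the bounds, and the swarm size are assumptions chosen for the illustration; the hand-worked iterations below follow the same loop on a two-variable objective.

import random

def f(x):
    # Example objective from the slides (to be maximized)
    return -x**2 + 5*x + 20

def pso(num_particles=4, iterations=50, w=0.8, c1=1.0, c2=1.0, lo=-20.0, hi=20.0):
    # Initialize positions uniformly over an assumed search interval, velocities at zero
    x = [random.uniform(lo, hi) for _ in range(num_particles)]
    v = [0.0] * num_particles
    pbest = x[:]                                   # personal best positions
    pbest_val = [f(p) for p in pbest]              # personal best fitness values
    g = max(range(num_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]      # global best so far

    for _ in range(iterations):
        for i in range(num_particles):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + personal influence + social influence
            v[i] = (w * v[i]
                    + c1 * r1 * (pbest[i] - x[i])
                    + c2 * r2 * (gbest - x[i]))
            # Position update, clamped to the search interval
            x[i] = min(max(x[i] + v[i], lo), hi)
            val = f(x[i])
            if val > pbest_val[i]:                 # maximization: larger is better
                pbest[i], pbest_val[i] = x[i], val
                if val > gbest_val:
                    gbest, gbest_val = x[i], val
    return gbest, gbest_val

best_x, best_f = pso()
print(best_x, best_f)

For this concave objective the swarm should converge towards x = 2.5, where f(x) = 26.25.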

Iteration 1: r1 = 0.9234, r2 = 0.9797, w = 0.80
Particle |     x1 |     x2 | fitness | x1_best | x2_best | best fitness |     v1 |    v2 | x1_new | x2_new
1        | -11.00 | -10.00 |  205.00 |  -11.00 |  -10.00 |       205.00 |   0.98 |  4.90 | -10.02 |  -5.10
2        |  20.00 | -10.00 |  298.00 |   20.00 |  -10.00 |       298.00 | -29.39 |  4.90 |  -9.39 |  -5.10
3        | -18.00 |  -4.00 |  450.00 |  -18.00 |   -4.00 |       450.00 |   7.84 | -0.98 | -10.16 |  -4.98
4        | -10.00 |  -5.00 |  173.00 |  -10.00 |   -5.00 |       173.00 |   0.00 |  0.00 | -10.00 |  -5.00
G_best: x1 = -10.00, x2 = -5.00, fitness = 173.00

Iteration 2: r1 = 0.4302, r2 = 0.4389, w = 0.80
Particle |     x1 |    x2 | fitness | x1_best | x2_best | best fitness |     v1 |    v2 | x1_new | x2_new
1        | -10.02 | -5.10 |  173.13 |  -10.02 |   -5.10 |       173.13 |   1.06 |  3.92 |  -8.96 |  -1.18
2        |  -9.39 | -5.10 |  157.14 |   -9.39 |   -5.10 |       157.14 | -23.51 |  3.92 | -32.90 |  -1.18
3        | -10.16 | -4.98 |  177.33 |  -10.16 |   -4.98 |       177.33 |   6.61 | -0.84 |  -3.55 |  -5.82
4        | -10.00 | -5.00 |  173.00 |  -10.00 |   -5.00 |       173.00 |   0.27 | -0.04 |  -9.73 |  -5.04
G_best: x1 = -9.39, x2 = -5.10, fitness = 157.14

Iteration 3: r1 = 0.1848, r2 = 0.1111, w = 0.80
Particle |     x1 |    x2 | fitness | x1_best | x2_best | best fitness |     v1 |    v2 | x1_new | x2_new
1        |  -8.96 | -1.18 |  176.89 |  -10.02 |   -5.10 |       173.13 |   1.25 |  1.90 |  -7.71 |   0.71
2        | -20.00 | -1.18 |  562.84 |   -9.39 |   -5.10 |       157.14 | -15.02 |  1.90 | -35.02 |   0.71
3        |  -3.55 | -5.82 |   44.35 |   -3.55 |   -5.82 |        44.35 |   5.29 | -0.67 |   1.73 |  -6.49
4        |  -9.73 | -5.04 |  165.95 |   -9.73 |   -5.04 |       165.95 |   0.90 | -0.12 |  -8.83 |  -5.17
G_best: x1 = -3.55, x2 = -5.82, fitness = 44.35

Iteration 4: r1 = 0.9049, r2 = 0.2581, w = 0.80
Particle |     x1 |    x2 | fitness
1        |  -7.71 |  0.71 |  174.15
2        | -20.00 |  0.71 |  588.49
3        |   1.73 | -6.49 |    1.87
4        |  -8.83 | -5.17 |  143.37

The best solution is [1.73, -6.49].
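As a hand check on these tables (assuming c1 = c2 = 1, which the slides do not state, and using the G_best listed for the same iteration): for particle 1 at iteration 2 the personal-best term vanishes because the particle sits at its pbest, so v1 = w · v1_old + c2 · r2 · (gbest_1 - x1) = 0.8(0.98) + (0.4389)(-9.39 - (-10.02)) ≈ 1.06, and x1_new = -10.02 + 1.06 = -8.96, matching the tabulated row.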

Grey wolf optimizer (GWO)
The social hierarchy consists of four levels, as follows.
The first level is called Alpha (α). The alpha wolves are the leaders of the pack, and they are a male and a female.
They are responsible for making decisions about hunting, time to walk, sleeping place, and so on.

Grey wolf optimizer (GWO)
The alpha wolf is considered the dominant wolf in the pack, and all his/her orders should be followed by the pack members.
Social hierarchy of grey wolf

Grey wolf optimizer (GWO)
The second level is called Beta (β).
The betas are subordinate wolves that help the alpha in decision making.
The beta wolf can be either male or female, and it is considered the best candidate to be the alpha when the alpha passes away or becomes very old.
The beta reinforces the alpha's commands throughout the pack and gives feedback to the alpha.

Grey wolf optimizer (GWO)
The third level is called Delta (δ).
The delta wolves are not alpha or beta wolves, and they are called subordinates.
Delta wolves have to submit to the alpha and beta, but they dominate the omega (the lowest level in the wolves' social hierarchy).

Grey wolf optimizer (GWO) (History and main idea)
The fourth (lowest) level is called Omega (ω).
The omega wolves are considered the scapegoats in the pack; they have to submit to all the other dominant wolves.
They may seem to be unimportant individuals in the pack, and they are the last wolves allowed to eat.

Social hierarchy of grey wolf
In the grey wolf optimizer (GWO), we consider the fittest solution as the alpha (α), and the second and third fittest solutions are named beta (β) and delta (δ), respectively.
The rest of the solutions are considered omega (ω).
In the GWO algorithm, the hunting is guided by α, β, and δ.
The ω solutions follow these three wolves.

Grey wolf encircling prey
During the hunting, the grey wolves encircle the prey.
The mathematical model of the encircling behavior is presented in the following equations.
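In the standard GWO formulation, the encircling behavior is modelled as

\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|, \qquad \vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}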

Grey wolf encircling prey (Cont.)
Here t is the current iteration, A and C are coefficient vectors, Xp is the position vector of the prey, and X indicates the position vector of a grey wolf.
The vectors A and C are calculated from a parameter a whose components are linearly decreased from 2 to 0 over the course of the iterations, and from random vectors r1, r2 in [0, 1].
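In the standard GWO formulation, and consistent with the coefficient values used in the worked example below, these vectors are computed as

\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}, \qquad \vec{C} = 2\,\vec{r}_2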

Grey wolf Hunting
The hunting operation is usually guided by the alpha.
The beta and delta might participate in hunting occasionally.
In the mathematical model of the hunting behavior of grey wolves, we assume the alpha, beta, and delta have better knowledge about the potential location of the prey.
The first three best solutions are saved, and the other agents are obliged to update their positions according to the positions of the best search agents, as shown in the following equations.
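In the standard GWO formulation (matching the A1/A2/A3, D_alpha/D_beta/D_delta, X1/X2/X3 and X_new columns of the worked example below), each search agent updates its position from the alpha, beta, and delta as

\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X} \right|, \quad \vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X} \right|, \quad \vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X} \right|

\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta

\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}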


Attacking prey (exploitation)
The grey wolves finish the hunt by attacking the prey when it stops moving.
The vector A takes random values in the interval [-2a, 2a], where a is decreased from 2 to 0 over the course of the iterations.
When |A| < 1, the wolves attack towards the prey, which represents an exploitation process.

Search for prey (exploration)
The exploration process in GWO is applied according to the positions of α, β, and δ, which diverge from each other to search for prey and converge to attack prey.
The exploration process is modeled mathematically by utilizing A with random values greater than 1 or less than -1 to oblige the search agent to diverge from the prey.
When |A| > 1, the wolves are forced to diverge from the prey to find a fitter prey.
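A minimal Python sketch of the GWO loop described above (my own illustration, not code from the slides), minimizing the same objective f(x) = (x - 3)^2 used in the worked example that follows. Here r1 and r2 are drawn at random for each update (the worked example uses fixed values), and the wolf count and initial search interval are assumptions:

import random

def f(x):
    # Objective from the worked example below (to be minimized)
    return (x - 3.0) ** 2

def gwo(num_wolves=4, max_iter=50, lo=-10.0, hi=10.0):
    # Initialize wolf positions uniformly over an assumed search interval
    X = [random.uniform(lo, hi) for _ in range(num_wolves)]
    for t in range(1, max_iter + 1):
        # a decreases linearly from 2 to 0 over the iterations
        a = 2.0 * (1.0 - t / max_iter)
        # Rank wolves: alpha, beta, delta are the three fittest positions
        alpha, beta, delta = sorted(X, key=f)[:3]
        for i in range(num_wolves):
            x_new = 0.0
            for leader in (alpha, beta, delta):
                r1, r2 = random.random(), random.random()
                A = 2.0 * a * r1 - a           # coefficient A (see equations above)
                C = 2.0 * r2                   # coefficient C
                D = abs(C * leader - X[i])     # distance to this leader
                x_new += leader - A * D        # candidate position from this leader
            X[i] = x_new / 3.0                 # average of the three candidates
    best = min(X, key=f)
    return best, f(best)

best_x, best_f = gwo()
print(best_x, best_f)

With enough iterations the alpha wolf should approach x = 3, where f(x) = 0.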

Example (Unconstrained problem): Find the minimum of the function y = (x - 3)^2

by using the GWO algorithm with the following parameters:
r1a: 0.273   r1b: 0.778   r1d: 0.222   MaxIter: 50
r2a: 0.718   r2b: 0.081   r2d: 0.204

Use initial positions [6.6506, 9.0463, -6.6383, -4.032].

Solution:
a = 2 \left( 1 - \frac{Iter}{MaxIter} \right)
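For example, at the first iteration a = 2(1 - 1/50) = 1.96, and A1 = 2a·r1a - a = 1.96 (2 × 0.273 - 1) ≈ -0.89, matching the values in the iteration table below.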

r1a = 0.273, r1b = 0.778, r1d = 0.222; r2a = 0.718, r2b = 0.081, r2d = 0.204; MaxIter = 50

Iteration 1:
Particle |      X | fitness |     A1 | D_alpha |     X1 |    A2 | D_beta |    X2 |     A3 | D_delta |     X3 | X_new
1        |  6.651 |  13.327 | -0.888 |   2.905 |  9.231 | 1.090 |  5.183 | 3.398 | -1.091 |   8.296 |  5.021 | 5.883
2        |  9.046 |  36.558 | -0.888 |   0.509 |  7.103 | 1.090 |  7.579 | 0.787 | -1.091 |  10.691 |  7.636 | 5.175
3        | -6.638 |  92.897 | -0.888 |  16.194 | 21.035 | 1.090 |  8.106 | 0.213 | -1.091 |   4.993 |  1.417 | 7.555
4        | -4.032 |  49.449 | -0.888 |  13.588 | 18.720 | 1.090 |  5.499 | 3.053 | -1.091 |   2.387 | -1.427 | 6.782
a = 1.960, Alpha = 6.651, beta = 9.046, delta = -4.032

Iteration 2:
Particle |      X | fitness |     A1 | D_alpha |     X1 |    A2 | D_beta |     X2 |     A3 | D_delta |     X3 | X_new
1        |  5.883 |   8.314 | -0.870 |   1.552 |  6.526 | 1.068 |  4.929 |  0.621 | -1.069 |   3.116 | 10.114 | 5.754
2        |  5.175 |   4.732 | -0.888 |   2.261 |  7.183 | 1.068 |  4.221 |  1.377 | -1.069 |   2.408 |  9.357 | 5.972
3        |  7.555 |  20.750 | -0.888 |   0.119 |  5.281 | 1.068 |  6.601 | -1.163 | -1.069 |   4.788 | 11.901 | 5.340
4        |  6.782 |  14.304 | -0.888 |   0.654 |  5.756 | 1.068 |  5.828 | -0.338 | -1.069 |   4.015 | 11.074 | 5.497
a = 1.920, Alpha = 5.175, beta = 5.883, delta = 6.782

Iteration 3:
Particle |      X | fitness |     A1 | D_alpha |     X1 |    A2 | D_beta |    X2 |     A3 | D_delta |     X3 | X_new
1        |  5.754 |   7.583 | -0.852 |   1.918 |  6.974 | 1.045 |  4.862 | 0.415 | -1.047 |   3.406 |  9.319 | 5.570
2        |  5.972 |   8.835 | -0.888 |   1.700 |  6.849 | 1.045 |  5.081 | 0.187 | -1.047 |   3.625 |  9.548 | 5.528
3        |  5.340 |   5.474 | -0.888 |   2.332 |  7.411 | 1.045 |  4.448 | 0.848 | -1.047 |   2.992 |  8.886 | 5.715
4        |  5.497 |   6.237 | -0.888 |   2.175 |  7.271 | 1.045 |  4.606 | 0.683 | -1.047 |   3.150 |  9.051 | 5.668
a = 1.880, Alpha = 5.340, beta = 5.497, delta = 5.754

Iteration 4:
Particle |      X | fitness
1        |  5.570 |   6.602
2        |  5.528 |   6.391
3        |  5.715 |   7.372
4        |  5.668 |   7.121
Alpha = 5.528, beta = 5.570, delta = 5.668

The best solution is 5.528.