Chapter 2 Problem Solving
Prepared by
Mrs. Megha V Gupta
New Horizon Institute of Technology and Management

Steps in building a system to solve a particular problem
1. Define the problem precisely: specify the initial situations as well as the final situations that count as an acceptable solution.
2. Analyze the problem: identify the few important features that can affect the appropriateness of the various possible solution techniques.
3. Isolate and represent the task knowledge necessary to solve the problem.
4. Choose the best problem-solving technique(s) and apply it to the particular problem.

PROBLEMS, PROBLEM SPACES AND SEARCH
Problem solving is a process of generating solutions from observed data.
• A 'problem space' is the set of all possible configurations of the problem; it is the environment in which the search is performed.
■ A 'state space' of the problem is the set of all states reachable from the initial state.
• A 'search' is the exploration of a problem space for a solution.

State Space Search
■A state space represents a problem in terms of states and
operators that change states.
■A state space consists of:
▪A representation of the states the system can be in.
▪A set of operators that can change one state into another state. Often the operators are
represented as programs that change a state representation to represent the new state.
▪An initial state.
▪A set of final states; some of these may be desirable, others undesirable. This set is often
represented implicitly by a program that detects terminal states.

Toy Problems
8-puzzle
Water Jug
Missionaries and Cannibals

8-puzzle problem
"It consists of a 3x3 board with 9 cells, 8 of which hold tiles numbered 1 to 8; one cell is left blank. A tile adjacent to the blank space can slide into it. The task is to arrange the tiles in a given sequence."

The start state is any arrangement of the tiles, and the goal state is the tiles arranged in a specific sequence.
Solution: a report of the "movements of tiles" needed to reach the goal state.
The transition function (the direction in which the blank space effectively moves: left, right, up, or down) generates the legal states.


Example
Initial state:      Goal state:
4 1 3               1 2 3
2 _ 6               4 5 6
7 5 8               7 8 _
(The underscore marks the blank space.)
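
A minimal Python sketch of this state space, to make the transition function concrete. The representation and names (GOAL, successors) are my own illustration, not from the slides: a state is a 9-tuple read row by row, with 0 standing for the blank.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def successors(state):
    """Yield (action, new_state) pairs: the blank moves up, down, left, or right."""
    i = state.index(0)                 # position of the blank
    col = i % 3
    moves = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}
    for action, delta in moves.items():
        j = i + delta
        if action == 'Left' and col == 0:
            continue                   # would wrap to the previous row
        if action == 'Right' and col == 2:
            continue                   # would wrap to the next row
        if not 0 <= j < 9:
            continue                   # off the board
        s = list(state)
        s[i], s[j] = s[j], s[i]        # slide the adjacent tile into the blank
        yield action, tuple(s)

start = (4, 1, 3, 2, 0, 6, 7, 5, 8)   # the initial state of the example above
for action, s in successors(start):
    print(action, s)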

Water-Jug Problem
"You are given two jugs, a 4-gallon one and a 3-gallon one, a pump with an unlimited supply of water which you can use to fill the jugs, and the ground on which water may be poured. Neither jug has any measuring markings on it. How can you get exactly 2 gallons of water into the 4-gallon jug?"

Water jug problem
■ A water jug problem: a 4-gallon and a 3-gallon jug
- no markings on either jug
- a pump to fill water into the jugs
- How can you get exactly 2 gallons of water into the 4-gallon jug?

A state space search
(x, y): ordered pair
x : water in the 4-gallon jug, x = 0, 1, 2, 3, 4
y : water in the 3-gallon jug, y = 0, 1, 2, 3
Start state: (0, 0)
Goal state: (2, n), where n = any value
Rules: 1. Fill the 4-gallon jug (4, -)
2. Fill the 3-gallon jug (-, 3)
3. Empty the 4-gallon jug (0, -)
4. Empty the 3-gallon jug (-, 0)

Water jug rules
(The full production-rule table, including the pouring rules referenced in the solution below, appeared as a figure in the slides.)
A water jug solution
4-Gallon Jug | 3-Gallon Jug | Rule Applied
      0      |      0       |
      0      |      3       |      2
      3      |      0       |      9
      3      |      3       |      2
      4      |      2       |      7
      0      |      2       |   5 or 12
      2      |      0       |   9 or 11
Solution: a path / plan
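
The rule set can also be run mechanically. Below is a small breadth-first solver for the puzzle, as an illustrative sketch (the function names are mine, not from the slides); it encodes the fill, empty, and pour rules directly and finds a shortest plan like the one in the table.

from collections import deque

def successors(x, y):
    """All states reachable from (x, y) in one move, with the action taken."""
    yield ('fill 4', (4, y))
    yield ('fill 3', (x, 3))
    yield ('empty 4', (0, y))
    yield ('empty 3', (x, 0))
    pour = min(x, 3 - y)                     # pour 4 -> 3 until 3 is full or 4 is empty
    yield ('pour 4->3', (x - pour, y + pour))
    pour = min(y, 4 - x)                     # pour 3 -> 4 until 4 is full or 3 is empty
    yield ('pour 3->4', (x + pour, y - pour))

def solve(start=(0, 0)):
    frontier = deque([(start, [])])          # FIFO queue of (state, action path)
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == 2:                           # goal test: 2 gallons in the 4-gallon jug
            return path
        for action, state in successors(x, y):
            if state not in seen:
                seen.add(state)
                frontier.append((state, path + [action]))

print(solve())   # e.g. a 6-step plan such as fill 3, pour 3->4, fill 3, ...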


Missionaries and Cannibals
Three missionaries and three cannibals
wish to cross the river. They have a small
boat that will carry up to two people.
Everyone can navigate the boat. If at any
time the Cannibals outnumber the
Missionaries on either bank of the river,
they will eat the Missionaries. Find the
smallest number of crossings that will allow
everyone to cross the river safely.
https://www.youtube.com/watch?v=W9NEWxabGmg

Production Rules

Farmer, Wolf, Goat and the Cabbage
https://www.youtube.com/watch?v=go294ZR4Rdg

State Space Representation

Problem-solving agent
■ A kind of goal-based agent
■ It solves problems by finding sequences of actions that lead to desirable states (goals)
■ To solve a problem, the first step is goal formulation, based on the current situation
■ The algorithms here are uninformed:
■ no extra information about the problem other than its definition
■ no heuristics (rules)

Goal formulation
■ The goal is formulated as a set of world states in which the goal is satisfied
■ Reaching from the initial state to a goal state requires actions
■ Actions are the operators causing transitions between world states
■ Actions should be abstract to a certain degree, instead of very detailed
■ E.g., "turn left" vs. "turn left 30 degrees", etc.

Problem formulation
■ The process of deciding what actions and states to consider, given a goal
■ E.g., driving Amman -> Zarqa
■ in-between states and actions defined
■ States: some places in Amman & Zarqa
■ Actions: turn left, turn right, go straight, accelerate & brake, etc.
■ Because there are many ways to achieve the same goal, those ways can together be expressed as a tree
■ With multiple options of unknown value at a point, the agent can examine the different possible sequences of actions and choose the best
■ This process of looking for the best sequence is called search
■ A search algorithm takes a problem as input and returns a solution (the best sequence) in the form of an action sequence.

"formulate, search, execute"
Once a solution is found, the actions it recommends can be carried out; this is called the execution phase. Thus, we have a simple "formulate, search, execute" design for the agent.

Problem-solving agents
A problem-solving agent first formulates a goal and a problem, searches for a sequence of actions
that would solve the problem, and then executes the actions one at a time. When this is complete, it
formulates another goal and starts over.


Example: Romania
■On holiday in Romania; currently in Arad.
■Flight leaves tomorrow from Bucharest
■Formulate goal:
■be in Bucharest
■Formulate problem:
■states: various cities
■actions: drive between cities
■Find solution:
■sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest


Well-defined problems and solutions
A problem is defined by 5 components:
■ Initial state
■ Actions
■ Transition model (successor function)
■ Goal test
■ Path cost

Well-defined problems and solutions
1. The initial state that the agent starts in.
2. Actions: a description of the possible actions available to the agent.
3. Transition model: a description of what each action does. A successor is any state reachable from a given state by a single action.
Together, the initial state, actions, and transition model define the state space:
■ the set of all states reachable from the initial state by any sequence of actions.
A path in the state space:
■ a sequence of states connected by a sequence of actions.

Well-defined problems and solutions
4. The goal test, which determines whether a given state is a goal state.
■ Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
■ Sometimes the goal is described by an abstract property rather than an explicitly enumerated set of states.
E.g., in chess, the goal is to reach a state called "checkmate," where the opponent's king is under attack and can't escape.

Well-defined problems and solutions
5. A path cost function:
■ assigns a numeric cost to each path
■ = the performance measure
■ denoted by g
■ used to distinguish the best path from others
Usually the path cost is the sum of the step costs of the individual actions (in the action list).
The solution of a problem is then:
■ a path from the initial state to a state satisfying the goal test
Optimal solution:
■ the solution with the lowest path cost among all solutions

Vacuum world state space graph
■ States? The state is determined by both the agent location and the dirt locations.
■ Initial state: any
■ Actions? Left, Right, Suck
■ Transition model: the actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
■ Goal test? No dirt at any location
■ Path cost? Each step costs 1, so the path cost is the number of steps in the path.

Example: The 8-puzzle
■ States? Locations of tiles
■ Initial state: any state can be designated as the initial state.
■ Actions? Move blank left, right, up, down
■ Transition model: given a state and action, this returns the resulting state; for example, if we apply Left to the start state in the figure above, the resulting state has the 5 and the blank switched.
■ Goal test? = goal state (given)
■ Path cost? Each step costs 1, so the path cost is the number of steps in the path.

Example: robotic assembly
■ States? Real-valued coordinates of the robot joint angles and of the parts of the object to be assembled
■ Actions? Continuous motions of the robot joints
■ Goal test? Complete assembly
■ Path cost? Time to execute

Traveling Salesman Problem (TSP)
■ States: cities
■ Initial state: A
■ Successor function: travel from one city to another connected by a road
■ Goal test: the trip visits each city exactly once, starting and ending at A
■ Path cost: traveling time

Map-Coloring Problem
Using only four colors, you have to color a planar map so that no two adjacent regions have the same color.
■ Initial state: planar map with no regions colored.
■ Goal test: all regions of the map are colored, and no two adjacent regions have the same color.
■ Successor function: choose an uncolored region and color it with a color that is different from all adjacent regions.
■ Cost function: could be 1 for each color used.

Airline Travel problems
■States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must
record extra information about these “historical” aspects.
■Initial state: This is specified by the user’s query.
■Actions: Take any flight from the current location, in any seat class, leaving after the current
time, leaving enough time for within-airport transfer if needed.
■Transition model: The state resulting from taking a flight will have the flight’s destination as
the current location and the flight’s arrival time as the current time.
■Goal test: Are we at the final destination specified by the user?
■Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration
procedures, seat quality, time of day, type of airplane, frequent-flyer mileage
awards, and so on.

QUIZ
■ A vacuum-cleaner world with two locations, two sensors (location and dirt), and three actions (Left, Right, and Suck) will have a state space with how many possible states?
■ A) 6
■ B) 8
■ C) 10
■ D) 12

Search tree
■ Initial state
■ The root of the search tree is a search node
■ Expanding
■ applying the successor function to the current state, thereby generating a new set of states
■ Leaf nodes
■ the states having no successors
Fringe: the set of search nodes that have not been expanded yet.

Tree search example

Search tree
■ The essence of searching:
■ in case the first choice is not correct
■ choosing one option and keeping the others for later inspection
■ Hence we have the search strategy,
■ which determines the choice of which state to expand
■ good choice -> less work -> faster
■ Important:
■ state space ≠ search tree

Search tree
■ A node has five components:
■ STATE: which state it is in the state space
■ PARENT-NODE: from which node it was generated
■ ACTION: which action was applied to its parent node to generate it
■ PATH-COST: the cost, g(n), from the initial state to the node n itself
■ DEPTH: the number of steps along the path from the initial state

Implementation: states vs. nodes
■ A state is a (representation of) a physical configuration
■ A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth
■ The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
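
A sketch of this node structure in Python. The field names follow the slide; the helper names child_node and solution are mine, for illustration only.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # STATE: which state in the state space
    parent: Optional['Node'] = None  # PARENT-NODE: the node this was generated from
    action: Any = None               # ACTION applied to the parent to generate it
    path_cost: float = 0.0           # PATH-COST g(n) from the initial state
    depth: int = 0                   # DEPTH: number of steps from the initial state

def child_node(parent, action, state, step_cost):
    """Expand: build a child node, filling in its fields from the parent."""
    return Node(state, parent, action, parent.path_cost + step_cost, parent.depth + 1)

def solution(node):
    """Walk parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))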

Search strategies
■ A search strategy is defined by picking the order of node expansion
■ Strategies are evaluated along the following dimensions:
■ Completeness: does it always find a solution if one exists?
■ Time complexity: how long does it take to find a solution (number of nodes generated during the search)?
■ Space complexity: how much memory is needed to perform the search (maximum number of nodes stored in memory)?
■ Optimality: does it always find a least-cost solution when there are several different solutions?

Measuring problem-solving performance
■ Time and space complexity are measured in terms of:
■ b: branching factor of the search tree (max. number of successors of any node)
■ d: depth of the least-cost solution (shallowest goal node)
■ m: the maximum length of any path in the state space (maximum depth of the state space)

Search strategies
■Uninformed search or blind search
■no information about the number of steps
■or the path cost from the current state to the goal
■is applicable when we only distinguish goal states from
non-goal states.
■search the state space blindly
■Informed search, or heuristic search
■a cleverer strategy that searches toward the goal,
■based on the information from the current state so far
■is applied if we have some knowledge of the path cost
or the number of steps between the current state and a
goal.

Uninformed search Methods
strategies that use only the information available in the problem definition. While
searching you have no clue whether one non-goal state is better than any other.
Your search is blind.
■Breadth-first search
■Uniform cost search
■Depth-first search
■Depth-limited search
■Iterative deepening search
■Bidirectional search

Breadth-first search
■ Expand the shallowest unexpanded node
Implementation:
■ the fringe is a FIFO queue, i.e., new successors go at the end of the queue
Is A a goal state?

Breadth-first search
Expand:
fringe=[C,D,E]
Is C a goal state?
Expand:
fringe=[D,E,F,G]
Is D a goal state?

Example: BFS

Properties of breadth-first search
■ Complete? Yes (if b is finite and the shallowest goal node is at some finite depth d)
■ Time complexity? b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1))
■ Space complexity? O(b^(d+1)) (keeps every node in memory)
■ Optimal? Not in general (yes if cost = 1 per step, or if the path cost is a non-decreasing function of the depth of the node)
Space is the bigger problem (more than time)

Breadth First Search
Imagine searching a uniform tree where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that level. Then the total number of nodes generated is
b + b^2 + b^3 + ... + b^d = O(b^d).
(If the algorithm were to apply the goal test to nodes when selected for expansion, rather than when generated, the whole layer of nodes at depth d would be expanded before the goal was detected, and the time complexity would be O(b^(d+1)).)
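
For concreteness, here is a minimal breadth-first graph search in Python. It is a sketch assuming an illustrative problem interface (initial_state, is_goal, and successors yielding (action, state) pairs); these names are mine, not a fixed API. The goal test is applied when a node is generated, matching the O(b^d) analysis above.

from collections import deque

def breadth_first_search(problem):
    start = problem.initial_state
    if problem.is_goal(start):
        return []                                # empty action sequence
    frontier = deque([(start, [])])              # FIFO queue: the fringe
    explored = {start}
    while frontier:
        state, path = frontier.popleft()         # expand the shallowest node first
        for action, child in problem.successors(state):
            if child not in explored:
                if problem.is_goal(child):       # goal test when generated: O(b^d)
                    return path + [action]
                explored.add(child)
                frontier.append((child, path + [action]))
    return None                                  # no solution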

Breadth-first search
(Figure: a search tree rooted at S, expanded level by level; the numbers on the goal nodes G are path costs.)


Uniform Cost Search

Uniform-cost search
■ Implementation: the fringe is a queue ordered by path cost
■ Equivalent to breadth-first search if all step costs are equal
■ Complete? Yes, if every step cost exceeds some small positive constant
■ Time? The number of nodes with path cost less than that of the optimal solution
■ Space? The number of nodes on paths with path cost less than that of the optimal solution
■ Optimal? Yes: nodes are expanded in increasing order of path cost
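
A sketch of uniform-cost search with a binary-heap fringe, under the same illustrative problem interface as before (here successors also yields a step cost). Applying the goal test when a node is expanded, rather than when it is generated, is what makes the returned path optimal.

import heapq, itertools

def uniform_cost_search(problem):
    counter = itertools.count()                   # tie-breaker for equal costs
    frontier = [(0, next(counter), problem.initial_state, [])]
    best_g = {problem.initial_state: 0}
    while frontier:
        g, _, state, path = heapq.heappop(frontier)   # cheapest node first
        if problem.is_goal(state):
            return path, g                        # goal test on expansion => optimal
        if g > best_g.get(state, float('inf')):
            continue                              # stale queue entry, skip it
        for action, child, cost in problem.successors(state):
            new_g = g + cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, next(counter), child, path + [action]))
    return None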

Depth-first search
■ Always expands one of the nodes at the deepest level of the tree
■ Only when the search hits a dead end does it go back and expand nodes at shallower levels
■ Dead end -> leaf nodes that are not the goal
■ Backtracking search:
■ only one successor is generated on expansion, rather than all successors
■ uses less memory

Depth-first search
■ Expand the deepest unexpanded node
■ Implementation:
■ fringe = Last In First Out (LIFO) queue (a stack), i.e., put successors at the front
Is A a goal state?
queue=[B,C]
Is B a goal state?

Depth-first search
queue=[D,E,C]
Is D = goal state?
queue=[H,I,E,C]
Is H = goal state?

Depth-first search
queue=[I,E,C]
Is I = goal state?
queue=[E,C]
Is E = goal state?

Depth-first search
queue=[J,K,C]
Is J = goal state?
queue=[K,C]
Is K = goal state?

Depth-first search
queue=[C]
Is C = goal state?
queue=[F,G]
Is F = goal state?

Depth-first search
queue=[L,M,G]
Is L = goal state?
queue=[M,G]
Is M = goal state?

Example DFS

Depth-first search
(Figure: the same search tree rooted at S, explored depth-first; one branch is followed to its deepest node before backtracking.)

Properties of depth-first search
■ Complete? No: fails in infinite-depth spaces or spaces with loops; complete in finite spaces
■ Time? O(b^m): terrible if m (the maximum depth of the state space) is much larger than d (the depth of the shallowest solution), and infinite if the tree is unbounded. May be much faster than breadth-first search if solutions are dense.
■ Space? O(bm): linear space; the memory requirement is branching factor (b) × maximum depth (m)
■ Optimal? No: it may find a non-optimal goal first and cannot guarantee the shallowest solution.

Depth First Search
A depth-first tree search may generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any node; this can be much greater than the size of the state space.
A depth-first tree search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path. Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully explored.
For a state space with branching factor b and maximum depth m, depth-first search requires storage of only O(bm) nodes.
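
The same behavior in code: a depth-first search with an explicit LIFO stack, as a sketch under the illustrative problem interface used earlier. The optional max_depth argument turns it into the depth-limited search discussed next.

def depth_first_search(problem, max_depth=None):
    stack = [(problem.initial_state, [])]     # LIFO: newest nodes are expanded first
    while stack:
        state, path = stack.pop()
        if problem.is_goal(state):
            return path
        if max_depth is not None and len(path) >= max_depth:
            continue                          # cutoff: treat the node as having no successors
        for action, child in problem.successors(state):
            stack.append((child, path + [action]))
    return None                               # failure (or cutoff everywhere)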

DFS

Depth-Limited Search
■ Depth-first search is clearly dangerous:
• if the tree is very deep, we risk finding a suboptimal solution;
• if the tree is infinite, we risk an infinite loop.
■ The embarrassing failure of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit l. That is, nodes at depth l are treated as if they have no successors. This approach is called depth-limited search.
■ Three possible outcomes:
■ Solution
■ Failure (no solution)
■ Cutoff (no solution within the cutoff)

Depth-limited search
■ However, it is usually not easy to define a suitable maximum depth:
■ too small -> no solution can be found
■ too large -> the same problems as depth-first search
■ The search is complete (provided the limit is at least the solution depth), but still not optimal

Depth-limited search
(Figure: the same search tree rooted at S, explored with depth limit l = 3; nodes below the limit are never generated.)


Iterative deepening search
■ Usually we do not know a reasonable depth limit in advance.
■ Iterative deepening search repeatedly runs depth-limited search for increasing depth limits 0, 1, 2, ...
■ This essentially combines the advantages of depth-first and breadth-first search:
■ the procedure is complete and optimal;
■ the memory requirement is similar to that of depth-first search.

Iterative deepening search
The iterative deepening search algorithm repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found or when depth-limited search returns failure, meaning that no solution exists.
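
A recursive sketch of that algorithm, distinguishing 'cutoff' (no solution within the limit) from outright failure. The problem interface and names are illustrative, as in the earlier sketches.

CUTOFF = 'cutoff'

def depth_limited_search(problem, state, path, limit):
    if problem.is_goal(state):
        return path
    if limit == 0:
        return CUTOFF                        # the limit cut this branch short
    cutoff_occurred = False
    for action, child in problem.successors(state):
        result = depth_limited_search(problem, child, path + [action], limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return result                    # a solution path
    return CUTOFF if cutoff_occurred else None

def iterative_deepening_search(problem):
    limit = 0
    while True:                              # limits 0, 1, 2, ...
        result = depth_limited_search(problem, problem.initial_state, [], limit)
        if result != CUTOFF:
            return result                    # a solution, or None if none exists
        limit += 1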

Iterative deepening search l =0

Iterative deepening search l =1

Iterative deepening search l =2

Iterative deepening search l =3

■ Note: we visit top-level nodes multiple times. The last (max-depth) level is visited once, the second-to-last level twice, and so on. This may seem expensive, but it turns out not to be very costly, since in a tree most of the nodes are at the bottom level, so it does not matter much that the upper levels are visited multiple times.
■ Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + ... + 1·b^d

Iterative deepening search
■ For b = 2, d = 3:
■ N_BFS = b + b^2 + b^3 + (b^(d+1) - b) = 2 + 4 + 2^3 + (2^4 - 2) = 2 + 4 + 8 + 14 = 28
■ N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + ... + 1·b^d = 4·1 + 3·2 + 2·2^2 + 1·2^3 = 4 + 6 + 8 + 8 = 26
■ Iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known.

Properties of iterative deepening search
■ Complete? Yes
■ Time? (d+1)·b^0 + d·b^1 + (d-1)·b^2 + ... + b^d = O(b^d)
■ Space? O(bd)
■ Optimal? Yes, if step cost = 1

Iterative deepening search
■ Suppose we have a tree with branching factor 'b' (number of children of each node) and depth 'd', i.e., there are b^d nodes.
■ In an iterative deepening search, the nodes on the bottom level are expanded once, those on the next-to-bottom level are expanded twice, and so on, up to the root of the search tree, which is expanded d+1 times.
■ IDDFS has the same asymptotic time complexity as DFS and BFS, but it is indeed slower than both, as it has a higher constant factor in its time-complexity expression.
■ IDDFS is best suited for a complete infinite tree.

Example IDS

Bidirectional search
■ Run two simultaneous searches:
■ one forward from the initial state, another backward from the goal
■ stop when the two searches meet
■ However, computing backward is difficult:
■ there may be a huge number of goal states
■ at the goal state, which actions were used to reach it?
■ can the actions be reversed to compute a state's predecessors?


Comparing search strategies

Informed Search Methods
■How can we make use of other knowledge about the
problem to improve searching strategy?
■Map example:
■Heuristic: Expand those nodes closest in “straight-line” distance to goal
■8-puzzle:
■Heuristic: Expand those nodes with the most tiles in place

Heuristic
■ Heuristics (Greek heuriskein = find, discover): "the study of the methods and rules of discovery and invention".
■ Heuristic: a "rule of thumb" used to help guide search
■ often something learned experientially and recalled when needed
■ Heuristic function: a function applied to a state in a search space to indicate the likelihood of success if that state is selected

Heuristic function
■ A heuristic function at a node n is an estimate of the optimum cost from the current node to a goal. It is denoted by h(n).
■ h(n) = estimated cost of the cheapest path from node n to a goal node
Example: we want a path from Kolkata to Guwahati.
A heuristic for Guwahati may be the straight-line distance between Kolkata and Guwahati:
h(Kolkata) = EuclideanDistance(Kolkata, Guwahati)
Heuristics can also help speed up exhaustive, blind search, such as depth-first and breadth-first search.


A Simple 8-puzzle heuristic
(Figure: a start state of the 8-puzzle with three possible moves of the blank; which move is best? Each successor state is compared against the goal.)

Another approach
■ Number of tiles in the incorrect position.
■ This can also be considered a lower bound on the number of moves from a solution!
■ The "best" move is the one with the lowest number returned by the heuristic.
(Figure: the three successors of the start state score h = 2, h = 4, and h = 3 misplaced tiles, so the move yielding h = 2 is best.)

Heuristics
E.g., for the 8-puzzle:
■ h1(n) = number of misplaced tiles
■ h2(n) = total Manhattan distance (i.e., the sum of the distances of the tiles from their goal positions)
For the start state S in the figure:
■ h1(S) = 8
■ h2(S) = 3+1+2+2+2+3+3+2 = 18
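
Both heuristics in Python, as a sketch using the 9-tuple state representation from the earlier 8-puzzle example (0 = blank):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal in zip(state, GOAL) if tile != 0 and tile != goal)

def h2(state):
    """Total Manhattan distance of the tiles from their goal squares."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)                  # goal square of this tile
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

Both are admissible: h1 because every misplaced tile needs at least one move, and h2 because each move changes one tile's Manhattan distance by exactly one.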

Best-first search
■ Idea: use an evaluation function f(n) for each node
■ f(n) provides an estimate of the total cost
■ Expand the node n with the smallest f(n)
■ Implementation: order the nodes in the fringe in increasing order of cost
■ Special cases:
■ greedy best-first search
■ A* search

Best-First Search
■ Use an evaluation function f(n).
■ Always choose the node from the fringe that has the lowest f value.
(Figure: a small tree whose nodes are labeled with their f values; the node with the lowest value is expanded next.)

Greedy best-first search
■ f(n) = h(n), the estimated cost from n to the goal
■ e.g., f(n) = straight-line distance from n to Bucharest
■ Greedy best-first search expands the node that appears to be closest to the goal.

Romania with straight-line dist.

Greedy best-first search example

Properties of greedy best-first search
■ Complete? No: it can get stuck in loops.
■ Time? O(b^m), but a good heuristic can give dramatic improvement
■ Space? O(b^m): keeps all nodes in memory
■ Optimal? No
e.g., Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest is shorter!

E.g., route-finding problem
S is the starting state, G is the goal state. Let us run the greedy search algorithm on the graph given in Figure a. The straight-line-distance heuristic estimates for the nodes are shown in Figure b.
Figure a
Figure b


A* search
■ Hart, Nilsson & Raphael, 1968: best-first search with f(n) = g(n) + h(n)
■ Idea: avoid expanding paths that are already expensive
■ Evaluation function f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the path through n to the goal

A* Shortest Path Example

A* algorithm
Insert the root node into the queue
While the queue is not empty:
    Dequeue the element with the highest priority, i.e., the lowest f(n)
    (if priorities are the same, the alphabetically smaller path is chosen)
    If the path ends in the goal state:
        print the path and exit
    Else:
        Insert all the children of the dequeued element, with f(n) as the priority
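
A runnable sketch of this algorithm: the queue is a binary heap ordered by f(n) = g(n) + h(n), so "highest priority" means lowest f. The problem interface (h, is_goal, successors yielding (action, state, step_cost)) is illustrative, as in the earlier sketches.

import heapq, itertools

def a_star_search(problem):
    counter = itertools.count()               # tie-breaker for equal f-values
    start = problem.initial_state
    frontier = [(problem.h(start), next(counter), 0, start, [])]
    best_g = {start: 0}
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)   # lowest f(n) first
        if problem.is_goal(state):
            return path, g                    # solution path and its cost
        for action, child, cost in problem.successors(state):
            new_g = g + cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                f_child = new_g + problem.h(child)        # f = g + h
                heapq.heappush(frontier,
                               (f_child, next(counter), new_g, child, path + [action]))
    return None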


A* Example
Consider the search problem below with start state S and goal state G. The transition costs are next to the edges, and the heuristic values are next to the states. What is the final cost using A* search?

8-Puzzle Example
■ f(n) = g(n) + h(n)
■ What is the usual g(n)?
■ Two well-known h(n)'s:
■ h1 = the number of misplaced tiles
■ h2 = the sum of the distances of the tiles from their goal positions, using city-block distance (the sum of the horizontal and vertical distances)

Applying A* on the 8-puzzle
(Figure: a search tree of 8-puzzle states; each node is labeled with f = g + h, e.g., the start state has f = 0 + 4 = 4. Heuristic: number of misplaced tiles.)


A*: admissibility
■ A search algorithm is admissible if, for any graph, it terminates with an optimal path from the start state to the goal state whenever such a path exists.
■ A heuristic function is admissible (guarantees termination with an optimal path) if it satisfies the following property:
h'(n) ≤ h*(n) (the heuristic function underestimates the true cost)
■ h'(n) has to be an optimistic estimator; it must never overestimate h*(n).
■ h*(n): the cost of the cheapest solution path from n to a goal node.
■ If h(n) is admissible, then A* search will find an optimal solution.

For example, suppose you're trying to drive from Chicago to New York and your heuristic is what your friends think about geography. If your first friend says, "Hey, Boston is close to New York" (underestimating), then you'll waste time looking at routes via Boston. Before long, you'll realise that any sensible route from Chicago to Boston already gets fairly close to New York before reaching Boston, and that actually going via Boston just adds more miles. So you'll stop considering routes via Boston and you'll move on to find the optimal route. Your underestimating friend cost you a bit of planning time but, in the end, you found the right route.
Suppose that another friend says, "Indiana is a million miles from New York!" Nowhere else on earth is more than 13,000 miles from New York, so if you take your friend's advice literally, you won't even consider any route through Indiana. This makes you drive for nearly twice as long and cover 50% more distance.

Memory-Bounded Heuristic Search: Recursive Best-First Search (RBFS)
■ How can we solve the memory problem of A* search?
■ Idea: try something like depth-first search, but let's not forget everything about the branches we have partially explored.
■ We remember the best f-value we have found so far in the branch we are deleting.

RBFS
■ RBFS changes its mind very often in practice. This is because f = g + h becomes more accurate (less optimistic) as we approach the goal; hence, higher-level nodes have smaller f-values and will be explored first.
■ The "best alternative" is taken over fringe nodes that are not children, i.e., "do I want to back up?"
■ Problem: we should keep in memory whatever we can.

Recursive best-first
■It is a recursive implementation of best-first, with linear
spatial cost.
■It forgets a branch when its cost is more than the best
alternative.
■The cost of the forgotten branch is stored in the parent node
as its new cost.
■The forgotten branch is re-expanded if its cost becomes the
best once again.

Local search algorithms and optimization problems
■ Local search algorithms operate using a single current state and generally move only to neighbors of that state.
■ In addition to finding goals, these algorithms are useful for solving optimization problems, in which the aim is to find the best state according to an objective function.
■ In local search, there is a function to evaluate the quality of the states, but this is not necessarily related to a cost.

Local search and optimization
■Local search
■Keep track of single current state
■Move only to neighboring states
■Ignore paths
■Advantages:
■Use very little memory
■Can often find reasonable solutions in large or infinite
(continuous) state spaces.
■“Pure optimization” problems
■All states have an objective function
■Goal is to find state with max (or min) objective value
■Does not quite fit into path-cost/goal-state formulation
■Local search can do quite well on these problems.

Local search algorithms
■ These algorithms do not systematically explore the whole state space.
■ The heuristic (or evaluation) function is used to reduce the search space (states not worth exploring are not considered).
■ The algorithms do not usually keep track of the path traveled; the memory cost is minimal.


Hill Climbing (Greedy Local Search)
■ Searching for a goal state = climbing to the top of a hill
■ Uses a heuristic function to estimate how close a given state is to a goal state.
■ Children are considered only if their evaluation-function value is better than that of the parent (a reduction of the search space).


Simple Hill Climbing
Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
   - Select and apply a new operator
   - Evaluate the new state:
     * goal -> quit
     * better than the current state -> new current state
     * not better -> try a new operator
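
A sketch of this loop in Python, together with the steepest-ascent variant discussed later; value (higher is better) and neighbors are illustrative stand-ins for the evaluation function and the operators.

def simple_hill_climbing(start, value, neighbors):
    current = start
    while True:
        moved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):
                current = candidate           # take the first improving move
                moved = True
                break
        if not moved:
            return current                    # no operator improves the state

def steepest_ascent_hill_climbing(start, value, neighbors):
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current                    # a peak (possibly only a local maximum)
        current = best                        # take the best move, not just any improving one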


Different regions in the State Space
• Local maximum: a state that is better than its neighboring states; however, there exists a state that is better than it (the global maximum). The state is "better" because the value of the objective function there is higher than at its neighbors.
• Global maximum: the best possible state in the state-space diagram, where the objective function has its highest value.
• Plateau / flat local maximum: a flat region of the state space where neighboring states have the same value.
• Ridge: a region that is higher than its neighbours but itself has a slope. It is a special kind of local maximum.
• Current state: the region of the state-space diagram where we are currently present during the search.
• Shoulder: a plateau that has an uphill edge.


Local maximum: a state that is better than all of its neighbours, but not better than the global maximum.


Plateau: a flat area of the search space in which all neighboring states have the same value. To get off a plateau, make a big jump to try to get into a new section.

Ridges (result in a sequence of local maxima): the orientation of the high region, compared to the set of available moves, makes it impossible to climb up.


Steepest-Ascent Hill Climbing (Gradient Search)
• Standard hill-climbing search algorithm:
- a simple loop that searches for and selects any operation that improves the current state.
• Steepest-ascent hill climbing, or gradient search:
- a loop that continuously moves in the direction of increasing value (it terminates when a peak is reached);
- the best move (not just any improving one) is selected:
■ considers all the moves from the current state;
■ selects the best one as the next state.


Unlike simple hill climbing, steepest-ascent hill climbing considers all the successor nodes, compares them, and chooses the node closest to the solution. It is similar to best-first search in that it examines every successor rather than just one.


Stochastic hill climbing does not examine all the neighbors. It selects one node at random and decides whether to move to it or to examine another.

Random-restart hill climbing
The random-restart algorithm is based on a try-and-try strategy: it conducts a series of hill-climbing searches from randomly chosen starting states, keeping the best result, until a goal is found.
Success depends most commonly on the shape of the hill: if there are few plateaus, local maxima, and ridges, it becomes easy to reach the destination.

Blocks World
In this problem, an initial arrangement of eight blocks is provided. We have to reach the GOAL arrangement by moving blocks in a systematic order. States are evaluated using a heuristic, so that we can get the next best node by applying the steepest-ascent hill-climbing technique. Two heuristics are considered: (i) LOCAL and (ii) GLOBAL. Both functions try to maximize the score/cost of each state.

LOCAL
The cost/score of the goal state is 8 (using the local heuristic), because all the blocks are in their correct positions.

(Figure: the current state I and its successor states J, K, L, and M, scored with the LOCAL heuristic.)

Now J is the current new state, with score 6 > the cost of I (4). So, in step 2, three moves from the best state J are possible.

All the neighbors of node J have a lower score than J, so J is a local maximum, and no further improving move is possible from states K, L, and M. The search falls into a TRAP situation. To overcome this problem with the LOCAL function, we can apply the GLOBAL heuristic.

As the value of a structure increases, we get nearer to the goal state.
(Figure: states I, J, K, L, and M re-scored with the GLOBAL heuristic.)

GLOBAL APPROACH
Now the goal state will have a score/cost of 28, and the initial state will have a cost of -28. Again, the best node for the next move is the one with the maximum score/cost. From state M we can make the following moves:
(i) PUSH block G on block A
(ii) PUSH block G on block H
(iii) PUSH block H on block A
(iv) PUSH block H on block G
(v) PUSH block A on block H
(vi) PUSH block A on block G
(vii) PUSH block G on TABLE
... and so on; we select the best node at each step until we get the structure with a score of +28.

Simulated Annealing Analogies
(Figures: metal annealing and a toy analogy.)

Simulated Annealing
• A variation of hill climbing in which, at the beginning of the process, some downhill moves may be made.
• This lowers the chances of getting caught at a local maximum, a plateau, or a ridge.
• It is inspired by the physical process of controlled cooling (crystallization, metal annealing):
■ A metal is heated to a high temperature and then progressively cooled in a controlled way until some solid state is reached.
■ If the cooling is adequate, the minimum-energy structure (a global minimum) is obtained.
■ Annealing schedule: if the temperature is lowered sufficiently slowly, the goal will be attained.

Simulated Annealing
■It is a stochastic hill-climbing algorithm (stochastic local search,
SLS):
■A successor is selected among all possible successors according to a
probability distribution.
■The successor can be worse than the current state.
■A Physical Analogy:
Imagine the task of getting a ping-pong ball into the deepest crevice in a bumpy
surface. If we just let the ball roll, it will come to rest at a local minimum. If we
shake the surface, we can bounce the ball out of the local minimum. The trick is
to shake just hard enough to bounce the ball out of local minima but not hard
enough to dislodge it from the global minimum. The simulated-annealing solution
is to start by shaking hard (i.e., at a high temperature) and then gradually reduce
the intensity of the shaking (i.e., lower the temperature).

Simulated annealing
• Main idea: steps taken in random directions do not decrease (but actually increase) the chance of finding a global optimum.
• Disadvantage: the structure of the algorithm increases the execution time.
• Advantage: the random steps may allow the search to escape small "hills".
• Temperature: determines (through a probability function) the amplitude of the steps; long at the beginning, then shorter and shorter.
• Annealing: when the amplitude of the random step is sufficiently small not to allow descending the hill under consideration, the result of the algorithm is said to be annealed.

Simulated annealing
• If the move improves the situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1.
• The probability decreases exponentially with the "badness" of the move: the amount ΔE by which the evaluation is worsened.
• If the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1.
https://www.youtube.com/watch?v=NI3WllrvWoc

Simulated annealing
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to temperature
  local variables: current, a node
                   next, a node
                   T, a "temperature" controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE[next] - VALUE[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
Terminology from the physical problem is often used. Downhill moves are accepted readily early in the annealing schedule and then less often as time goes on. The schedule input determines the value of the temperature T as a function of time.
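
The same algorithm as runnable Python, a sketch with illustrative names: value is the objective to maximize, random_successor returns a random neighbor, and schedule maps time to temperature (none of these names come from the slides).

import itertools, math, random

def simulated_annealing(start, value, random_successor, schedule):
    current = start
    for t in itertools.count(1):
        T = schedule(t)
        if T <= 0:
            return current                    # frozen: stop and return the current state
        nxt = random_successor(current)
        delta_e = value(nxt) - value(current)
        # uphill moves are always taken; downhill moves with probability e^(dE/T)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt

# One possible schedule (an assumption, not from the slides): exponential
# cooling, cut to zero after a fixed horizon so the loop terminates.
def schedule(t, T0=1.0, rate=0.995, horizon=10000):
    return 0 if t > horizon else T0 * (rate ** t)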

Probability calculation
• The probability also decreases as the "temperature" T goes down: "bad" moves are more likely to be allowed at the start, when T is high, and become more unlikely as T decreases.

Simulated annealing
•Aim: to avoid local optima, which represent a problem in hill climbing.
•Solution: to take, occasionally, steps in a different direction from the one in
which the increase (or decrease) of energy is maximum.

Simulated annealing: conclusions
■It is suitable for problems in which the global optimum is
surrounded by many local optima.
■It is suitable for problems in which it is difficult to find a
good heuristic function.
■Determining the values of the parameters can be a problem
and requires experimentation.

Local Beam Search


Local beam search
In stochastic beam search, instead of choosing the best k individuals, we select k individuals at random, with the individuals having a better evaluation being more likely to be chosen. This is done by making the probability of being chosen a function of the evaluation function. Stochastic beam search tends to allow more diversity in the k individuals than does plain beam search.

Stochastic Beam Search: Genetic Algorithms (GA)

Genetic algorithms
■ A genetic algorithm (GA) is a variant of stochastic beam search in which two parent states are combined.
■ Inspired by the process of natural selection:
■ Living beings adapt to the environment thanks to characteristics inherited from their parents.
■ The possibility of survival and reproduction is proportional to the goodness of these characteristics.
■ The combination of "good" individuals can produce better-adapted individuals.

Genetic algorithms
■ To solve a problem via GAs requires:
■ The size of the initial population:
■ GAs start with a set of k states randomly generated
■ A strategy to combine individuals
■ The representation of the states (individuals)
■ A function which measures the fitness of the states
■ Operators which combine states to obtain new states:
■ cross-over and mutation operators

Genetic algorithms: algorithm
■ Steps of the basic GA algorithm:
1. N individuals from the current population are selected to form the intermediate population (according to some predefined criteria).
2. Individuals are paired, and for each pair:
a) the crossover operator is applied and two new individuals are obtained;
b) the new individuals are mutated.
■ The resulting individuals form the new population.
■ The process is iterated until the population converges or a specific number of iterations has passed.

Genetic Algorithms
Population

Fitness
Selection

Crossover
Mutation

8-Queens Problem

Solving the 8-queens problem using genetic algorithms
■ An 8-queens state must specify the positions of 8 queens, one per column, each position being a number in the range 1 to 8.
■ Each state is rated by the evaluation function, or fitness function.
■ A fitness function should return higher values for better states; so, for the 8-queens problem, the number of non-attacking pairs of queens is used (8×7/2 = 28 for a solution).
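
A sketch of the GA pieces for 8-queens in Python. An individual is a list of 8 numbers, where the i-th number is the row of the queen in column i; the operator and function names are mine, for illustration.

import random

def fitness(ind):
    """Number of non-attacking pairs of queens (max 8*7/2 = 28)."""
    attacking = 0
    for i in range(8):
        for j in range(i + 1, 8):
            same_row = ind[i] == ind[j]
            same_diag = abs(ind[i] - ind[j]) == j - i
            if same_row or same_diag:
                attacking += 1
    return 28 - attacking

def crossover(a, b):
    """Split both parents at a random point and swap the tails."""
    c = random.randrange(1, 8)
    return a[:c] + b[c:], b[:c] + a[c:]

def mutate(ind, p=0.1):
    """With probability p, move one randomly chosen queen to a random row."""
    if random.random() < p:
        ind[random.randrange(8)] = random.randrange(1, 9)
    return ind

population = [[random.randrange(1, 9) for _ in range(8)] for _ in range(4)]
print([fitness(ind) for ind in population])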

Solving 8-queens problem using Genetic algorithms

Representing individuals

Generating an initial population

Fitness calculation

Apply a Fitness function
■24/(24+23+20+11) = 31%
■23/(24+23+20+11) = 29% etc

Selection


Stochastic Universal Sampling

Genetic algorithms
■ Fitness function: number of non-attacking pairs of queens (min = 0, max = 8×7/2 = 28)
(Figure: four 8-queens states; two pairs of states are randomly selected based on fitness, random crossover points are chosen, new states are produced by crossover, and random mutation is then applied.)

Genetic algorithms: 8-queens problem
■The initial population in (a)
■is ranked by the fitness function in (b),
■resulting in pairs for mating in (c).
■They produce offspring in (d),
■which are subject to mutation in (e).

Summary: Genetic Algorithm

Genetic algorithms
■ Crossover can have the effect of "jumping" to a completely different new part of the search space (quite non-local).

Genetic algorithms: applications
■ In practice, GAs have had a widespread impact on optimization problems, such as:
■ circuit layout
■ scheduling