3. Artificial Intelligence: Solving problems by searching.pptx

NAZMUSSAKIBMDADIL200 25 views 68 slides Feb 28, 2025

About This Presentation

artificial intelligence


Slide Content

Solving Problems by Searching
Ashim Dey, Assistant Professor, CSE, CUET

Outline
- Problem-solving agents
- Problem types
- Problem formulation
- Example problems
- Basic search algorithms

Problem
To build a system to solve a particular problem, we need to do 4 things:
1. Define the problem precisely, specifying the initial and final (goal) states.
2. Analyze the problem: a few very important features can have an immense impact on the appropriateness of the various possible techniques for solving it.
3. Isolate and represent the task knowledge that is necessary to solve the problem.
4. Choose the best problem-solving technique(s) and apply it (them) to the particular problem.

Production Systems
Since search forms the core of many intelligent processes, it is useful to structure AI programs in a way that facilitates describing and performing the search process. Production systems provide such structures. In AI, a production system is a model used to describe an intelligent system's problem-solving or decision-making process. It is a framework for creating rules and applying them to derive new information or actions.

Production Systems
A production system consists of:
- A set of rules, each consisting of a left side (a pattern) that determines the applicability of the rule, and a right side that describes the operation to be performed if the rule is applied.
- One or more knowledge bases/databases that contain whatever information is appropriate for the particular task. Some parts of a database may be permanent, while other parts may pertain only to the solution of the current problem. The information in these databases may be structured in any appropriate way.
- A control strategy that specifies the order in which the rules will be compared to the database, and a way of resolving the conflicts that arise when several rules match at once.
- A rule applier.

Problem-solving agents
An agent follows a four-phase problem-solving process:
- Goal formulation: goals organize behavior by limiting the objectives, and hence the actions to be considered.
- Problem formulation: the agent devises a description of the states and actions necessary to reach the goal: an abstract model of the relevant part of the world.
- Search: before taking any action in the real world, the agent simulates sequences of actions in its model, searching until it finds a sequence of actions that reaches the goal. Such a sequence is called a solution. The agent may have to simulate multiple sequences that do not reach the goal, but eventually it will either find a solution or find that no solution is possible.
- Execution: the agent can now execute the actions in the solution, one at a time.

Problem-solving agents
A restricted form of the general agent. Note: this is offline problem solving; the solution is executed "eyes closed". Online problem solving involves acting without complete knowledge.

Assumptions
- Static: the world does not change unless the agent changes it.
- Observable: every aspect of the world state can be seen.
- Discrete: there are distinct states, as opposed to continuously flowing time.
- Deterministic: there is no element of chance.
This is a restricted form of a general agent called offline problem solving, where the agent has complete information about the problem and can plan all actions in advance. Online problem solving involves acting without complete knowledge: the agent interacts with the environment and solves the problem incrementally.
02-Dec-24, CSE-345: Artificial Intelligence

Why Search?
To achieve goals or to maximize utility, an agent needs to predict what the results of its future actions will be. There are many possible sequences of actions, each with its own utility. The task is to find, or search for, the best one.

Example: Romania
On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.
Formulate goal: be in Bucharest.
Formulate problem: states: various cities; actions: drive between cities.
Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.


Problem types
- Deterministic, fully observable → single-state problem: the agent knows exactly which state it will be in; the solution is a sequence.
- Non-observable → sensorless problem (conformant problem): the agent may have no idea where it is; the solution is a sequence.
- Nondeterministic and/or partially observable → contingency problem: percepts provide new information about the current state; the solution is a contingent plan or a policy; search and execution are often interleaved.
- Unknown state space → exploration problem ("online").

Example: vacuum world
- Single-state, start in #5. Solution?? [Right; Suck]
- Conformant, start in {1, 2, 3, 4, 5, 6, 7, 8}; e.g., Right goes to {2, 4, 6, 8}. Solution?? [Right; Suck; Left; Suck]
- Contingency, start in #5. Murphy's Law: Suck can dirty a clean carpet. Local sensing: dirt, location only. Solution?? [Right; if dirt then Suck]

Single-state problem formulation
A problem is defined by four items:
- initial state, e.g., "at Arad"
- successor function S(x) = set of action-state pairs, e.g., S(Arad) = {⟨Arad → Zerind, Zerind⟩, …}
- goal state: can be explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x)
- path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state.
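The four items above can be sketched as a small Python class. This is an illustrative sketch, not from the slides: the class name and the small fragment of the Romania road map are assumptions.

```python
# Illustrative sketch of the four-item problem definition, using a small
# fragment of the Romania map as the state space.
ROMANIA = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
}

class RouteProblem:
    def __init__(self, graph, initial, goal):
        self.graph = graph
        self.initial = initial      # initial state, e.g., "at Arad"
        self.goal = goal

    def successors(self, state):
        # S(x) = set of action-state pairs; here the action is "drive to <city>"
        return [("drive to " + city, city) for city in self.graph.get(state, {})]

    def goal_test(self, state):
        # explicit goal test: x = "at Bucharest"
        return state == self.goal

    def step_cost(self, state, action, result):
        # c(x, a, y): the step cost, assumed >= 0; path cost is additive
        return self.graph[state][result]
```

A search algorithm then only needs `initial`, `successors`, `goal_test`, and `step_cost`; the abstraction hides everything else about the world.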

Selecting a state space (Abstraction)
The real world is absurdly complex, so the state space must be abstracted for problem solving. Abstraction is the process of simplifying a complex problem or system by reducing the number of states or variables.
- (Abstract) state = set of real states.
- (Abstract) action = complex combination of real actions; e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
- (Abstract) solution = set of real paths that are solutions in the real world.
Each abstract action should be "easier" than the original problem.

Vacuum world state space graph
The state-space graph for the two-cell vacuum world. There are 8 states (numbered 1-8) and 3 actions for each state: L = Left, R = Right, S = Suck.

Vacuum world state space graph
states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)

Example: The 8-puzzle
states??: integer locations of tiles (ignore intermediate positions)
actions??: move blank left, right, up, down (ignore unjamming etc.)
goal test??: = goal state (given)
path cost??: 1 per move
[Note: finding an optimal solution of the n-puzzle family is NP-hard]
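To make the 8-puzzle formulation concrete, here is a sketch of the successor function for the "move blank" actions. The flat-tuple encoding (9 ints read row by row, 0 = the blank) and the function name are assumptions for illustration.

```python
def blank_moves(state):
    """Successor function sketch for the 8-puzzle: state is a tuple of 9
    ints (0 = blank) read row by row; returns (action, new_state) pairs."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    for action, dr, dc in (("up", -1, 0), ("down", 1, 0),
                           ("left", 0, -1), ("right", 0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:          # stay on the 3x3 board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]            # slide the neighboring tile into the blank
            moves.append((action, tuple(s)))
    return moves
```

Note that the branching factor varies: a corner blank has 2 moves, an edge blank 3, and a center blank 4.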

Example: robotic assembly
states??: real-valued coordinates of robot joint angles and parts of the object to be assembled
actions??: continuous motions of robot joints
goal test??: complete assembly (with no robot included!)
path cost??: time to execute

Tree search algorithms
Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (i.e., expanding states).

Tree search example


Implementation: states vs. nodes
A state is a (representation of) a physical configuration.
A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth.
States do not have parents, children, depth, or path cost!
The EXPAND function creates new nodes, filling in the various fields and using the SUCCESSOR function of the problem to create the corresponding states.

Implementation: general tree search

Search Strategies
Uninformed search strategies: also known as "blind search," uninformed search strategies use no information about the likely "direction" of the goal node(s). Uninformed search methods: breadth-first, depth-first, depth-limited, uniform-cost, depth-first iterative deepening, bidirectional.
Informed search strategies: also known as "heuristic search," informed search strategies use information about the domain to (try to) head in the general direction of the goal node(s). Informed search methods: hill climbing, best-first, greedy search, beam search, A, A*.

Search strategies
A search strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions/properties:
- Completeness: does it always find a solution if one exists?
- Time complexity: number of nodes generated/expanded.
- Space complexity: maximum number of nodes in memory.
- (Cost) Optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of:
- b: maximum branching factor of the search tree (avg. no. of children)
- d: depth of the least-cost solution
- m: maximum depth of the state space (may be ∞)

Uninformed search strategies Uninformed search strategies use only the information available in the problem definition Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search

Breadth-first search
Expand the shallowest unexpanded node.
Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.

BFS: an Example
A breadth-first search (BFS) explores nodes nearest the root before exploring nodes further away. For example, after searching A, then B, then C, the search proceeds with D, E, F, G. Nodes are explored in the order A B C D E F G H I J K L M N O P Q; J will be found before N.

How to do breadth-first searching?
    put the root node on a queue;
    while (queue is not empty) {
        remove a node from the queue;
        if (node is a goal node) return success;
        put all children of node onto the queue;
    }
    return failure;
Just before starting to explore level n, the queue holds all the nodes at level n-1. In a typical tree, the number of nodes at each level increases exponentially with the depth, so the memory requirements may be infeasible.
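The queue pseudocode above can be sketched in Python. As a small extension of the slide's version (an assumption, not in the original), the queue holds whole paths rather than bare nodes, so the solution path is returned on success.

```python
from collections import deque

def bfs(start, is_goal, children):
    """Breadth-first search following the slide's queue pseudocode,
    keeping whole paths on the queue so the solution can be returned."""
    queue = deque([[start]])            # FIFO queue of paths from the root
    while queue:
        path = queue.popleft()          # remove the shallowest node's path
        node = path[-1]
        if is_goal(node):
            return path                 # success
        for child in children(node):    # put all children onto the queue
            queue.append(path + [child])
    return None                         # failure
```

On a small example tree, `bfs` finds the shallowest goal first, level by level.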

Properties of breadth-first search
Complete?? Yes (if b is finite)
Time?? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1)), i.e., exponential in d
Space?? O(b^(d+1)) (keeps every node in memory)
Optimal?? Yes (if cost = 1 per step); not optimal in general
Space is the big problem; one can easily generate nodes at 100 MB/sec, so 24 hrs = 8640 GB.

Uniform-Cost Search
Enqueue nodes by path cost. That is, let g(n) = cost of the path from the start node to the current node n, and sort nodes by increasing value of g. Called "Dijkstra's Algorithm" in the algorithms literature, similar to the "Branch and Bound" algorithm in the operations research literature, and known as uniform-cost search in the AI community.
Implementation: fringe = queue ordered by path cost.
Equivalent to breadth-first search if all step costs are equal.


Uniform-cost search
Expand the least-cost unexpanded node.
Implementation: fringe = queue ordered by path cost, lowest first (priority queue).
Equivalent to breadth-first search if all step costs are equal.
Complete?? Yes, if step cost ≥ ε
Time?? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
Space?? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
Optimal?? Yes: nodes are expanded in increasing order of g(n)
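A minimal uniform-cost search sketch using a binary heap as the priority queue ordered by g(n). The graph encoding as nested dicts is an assumption for illustration.

```python
import heapq

def uniform_cost_search(start, goal, graph):
    """Uniform-cost search: the fringe is a priority queue ordered by
    path cost g(n); nodes are expanded in increasing order of g."""
    frontier = [(0, start, [start])]            # (g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)  # least-cost node first
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in graph.get(state, {}).items():
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None
```

Unlike BFS, it prefers a cheap three-step route over an expensive direct edge, since expansion order follows g(n) rather than depth.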

Depth-first search
Expand the deepest unexpanded node.
Implementation: fringe = LIFO stack, i.e., put successors at the front.


DFS Example
A depth-first search (DFS) explores a path all the way to a leaf before backtracking and exploring another path. For example, after searching A, then B, then D, the search backtracks and tries another path from B. Nodes are explored in the order A B D E H L M N I O P C F G J K Q; N will be found before J.

How to do depth-first searching
    put the root node on a stack;
    while (stack is not empty) {
        remove a node from the stack;
        if (node is a goal node) return success;
        put all children of node onto the stack;
    }
    return failure;
At each step, the stack contains some nodes from each of a number of levels. The size of stack required depends on the branching factor b: while searching level n, the stack contains approximately b*n nodes. When this method succeeds, it doesn't give the path.
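A Python sketch of the stack version. Two small departures from the slide's pseudocode (both assumptions for illustration): the stack holds whole paths so the solution path can be returned, and children already on the current path are skipped to avoid cycles.

```python
def dfs(start, is_goal, children):
    """Depth-first search with an explicit LIFO stack. Keeping whole
    paths on the stack fixes the 'doesn't give the path' limitation."""
    stack = [[start]]
    while stack:
        path = stack.pop()                       # LIFO: deepest node first
        node = path[-1]
        if is_goal(node):
            return path
        for child in reversed(children(node)):   # reversed so the leftmost child is popped first
            if child not in path:                # avoid cycles along the current path
                stack.append(path + [child])
    return None
```

On the example tree below, DFS dives through B's subtree before ever touching C.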

Properties of depth-first search
Complete?? No: fails in infinite-depth spaces and spaces with loops. Modify to avoid repeated states along the path → complete in finite spaces.
Time?? O(b^m): terrible if m is much larger than d, but if solutions are dense, may be much faster than breadth-first.
Space?? O(bm), i.e., linear space!
Optimal?? No

BFS vs. DFS

Comparative analysis (b = 3, d = 2)
DFS (LIFO): max-list-length = 1 + d(b-1)
BFS (FIFO): max-list-length = b^d

BFS vs. DFS
Pros of BFS:
- BFS will not get trapped exploring a blind alley. This contrasts with DFS, which may follow a single unfruitful path for a very long time, perhaps forever, before the path actually terminates in a state that has no successors. This is a particular problem for DFS when a state has a successor that is also one of its ancestors, unless special care is taken to test for such a situation.
- If there is a solution, BFS is guaranteed to find it. Furthermore, if there are multiple solutions, a minimal solution (minimum # of steps) will be found. This is guaranteed by the fact that longer paths are never explored until all shorter ones have already been examined. This contrasts with DFS, which may find a long path to a solution in one part of the tree when a shorter one exists in some other, unexplored part of the tree.
Pros of DFS:
- DFS requires less memory, since only the nodes on the current path are stored. This contrasts with BFS, where all of the tree generated so far must be stored.
- By chance, DFS may find a solution without examining much of the search space at all. Contrast this with BFS, in which all parts of the tree must be examined to level n before any nodes at level n+1 can be examined. This is particularly significant if many acceptable solutions exist; DFS can stop when one of them is found.

Depth-Limited Search (DLS)
To avoid the infinite-depth problem of DFS, we can decide to search only until depth l, i.e., we don't expand beyond depth l. DLS is DFS with depth limit l: nodes at depth l have no successors.

Depth-limited searching
Depth-first searches may be performed with a depth limit:
    boolean limitedDFS(Node node, int limit, int depth) {
        if (depth > limit) return failure;
        if (node is a goal node) return success;
        for each child of node {
            if (limitedDFS(child, limit, depth + 1)) return success;
        }
        return failure;
    }
Since this method is basically DFS, if it succeeds then the path to a goal node is in the (call) stack.
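A runnable Python version of the recursive sketch above. As an assumption for illustration, it returns the path to the goal explicitly (rather than leaving it implicit in the call stack), and it tests the goal before checking the cutoff so a goal sitting exactly at the limit is still found.

```python
def depth_limited_search(node, is_goal, children, limit, depth=0):
    """Recursive depth-limited search: returns the path to a goal,
    or None ('failure') if none is found within the depth limit."""
    if is_goal(node):
        return [node]
    if depth >= limit:              # cutoff: do not expand beyond the limit
        return None
    for child in children(node):
        sub = depth_limited_search(child, is_goal, children, limit, depth + 1)
        if sub is not None:
            return [node] + sub
    return None
```

With too small a limit the goal is missed; raising the limit recovers it, which is exactly the knob iterative deepening turns automatically.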

Iterative deepening search (IDS) solves the problem of picking a good value for l by trying all values.

Iterative deepening search
Iterative deepening is DFS to a fixed depth in the tree being searched. If no solution is found up to this depth, the depth to be searched is increased and the whole 'bounded' depth-first search is begun again. It works by setting a depth of search (say, depth 1) and doing depth-first search to that depth. If a solution is found, the process stops; otherwise, increase the depth by, say, 1 and repeat until a solution is found.

Iterative deepening search, l = 0, 1, 2

Iterative deepening search, l = 3

Properties of iterative deepening search
Complete?? Yes
Time?? (d+1)b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)
Space?? O(bd)
Optimal?? Yes, if step cost = 1. Can be modified to explore a uniform-cost tree.
Number of nodes generated in a depth-limited search to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d
For b = 10, d = 5:
N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 - 111,111)/111,111 = 11%
IDS does better than it may appear because the nodes at depth d are not expanded. BFS can be modified to apply the goal test when a node is generated.

Iterative deepening search, worst case (goal at depth d): DFS: 11 nodes, BFS: 04 nodes, IDS: 04 nodes

Iterative deepening is a popular method of search. Why?
- DFS can be implemented to be much cheaper than BFS in terms of memory usage, but it is not guaranteed to find a solution even where one exists.
- On the other hand, BFS can be guaranteed to terminate if there is a winning state to be found, and will always find the 'quickest' solution (in terms of how many steps need to be taken from the root node). But it is a very expensive method in terms of memory usage.
- Iterative deepening is liked because it is an effective compromise between the two other methods of search. It is a form of DFS with a bound on how deep the search can go. Iterative deepening terminates if there is a solution. It can produce the same solution that BFS would produce, but does not require the same memory usage as BFS.

Bidirectional Search
Run two simultaneous searches: one forward from the initial state, another backward from the goal, and stop when the two searches meet. However, searching backward is difficult:
- there may be a huge number of goal states;
- at a goal state, which actions were used to reach it?
- can the actions be reversed to compute a state's predecessors?
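A sketch of bidirectional breadth-first search, sidestepping the backward-search difficulties by assuming an undirected graph (so actions are trivially reversible). The two-parent-map bookkeeping and path stitching are illustrative assumptions.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Bidirectional BFS on an undirected graph: two frontiers advance
    alternately and stop as soon as they meet at a common node."""
    if start == goal:
        return [start]
    fwd, bwd = {start: None}, {goal: None}   # parent maps for each direction
    qf, qb = deque([start]), deque([goal])

    def step(queue, this_parents, other_parents):
        node = queue.popleft()
        for nxt in neighbors(node):
            if nxt not in this_parents:
                this_parents[nxt] = node
                if nxt in other_parents:     # the two searches meet here
                    return nxt
                queue.append(nxt)
        return None

    while qf and qb:
        meet = step(qf, fwd, bwd) or step(qb, bwd, fwd)
        if meet:
            # stitch the forward half-path and the backward half-path together
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = fwd[n]
            path.reverse()
            n = bwd[meet]
            while n is not None:
                path.append(n)
                n = bwd[n]
            return path
    return None
```

Each frontier only needs to reach roughly half the solution depth, which is the source of the method's savings.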


Comparison of algorithms

Graph search

Example (BFS): on a weighted graph over nodes S, A, B, C, D, E, G (figure not reproduced), the solution path found is S-A-G, cost 18. Number of nodes expanded (including the goal node) = 7.

Example (DFS): on the same weighted graph (figure not reproduced), the solution path found is S-A-G, cost 18. Number of nodes expanded (including the goal node) = 5.

Example (Uniform-Cost Search): on the same weighted graph (figure not reproduced), the solution path found has cost 13. Number of nodes expanded (including the goal node) = 7.

Repeated States
Failure to detect repeated states can turn a linear problem into an exponential one!

Avoiding Repeated States
So far we have ignored one of the most important complications to the search process: the possibility of wasting time by expanding states that have already been encountered and expanded before on some other path. For some problems, this possibility never comes up; each state can only be reached one way, e.g., the 8-queens problem. For many problems, repeated states are unavoidable, e.g., route-finding problems. The search trees for these problems are infinite, but if we prune some of the repeated states, we can cut the search tree down to a finite size, generating only the portion of the tree that spans the state-space graph. Even when the tree is finite, avoiding repeated states can yield an exponential reduction in search cost.

Solutions to Repeated States
In increasing order of effectiveness in reducing the size of the state space, and with increasing computational cost:
1. Do not return to the state you just came from.
2. Do not create paths with cycles in them.
3. Do not generate any state that was ever created before.
The net effect depends on the frequency of "loops" in the state space.

Graph Search
Similar to tree search, but it maintains a list of already visited states (a closed list), ensuring that each state is expanded only once.
    function graph-search(problem, QUEUEING-FUNCTION)
        ;; problem describes the start state, operators, goal test, and operator costs
        ;; queueing-function is a comparator function that ranks two states
        ;; graph-search returns either a goal node or failure
        nodes = MAKE-QUEUE(MAKE-NODE(problem.INITIAL-STATE))
        closed = {}
        loop
            if EMPTY(nodes) then return "failure"
            node = REMOVE-FRONT(nodes)
            if problem.GOAL-TEST(node.STATE) succeeds then return node.SOLUTION
            if node.STATE is not in closed then
                ADD(node, closed)
                nodes = QUEUEING-FUNCTION(nodes, EXPAND(node, problem.OPERATORS))
        end
    ;; Note: the goal test is NOT done when nodes are generated
    ;; Note: closed should be implemented as a hash table for efficiency
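The pseudocode above can be sketched in Python; the `fifo` flag standing in for the QUEUEING-FUNCTION is a simplifying assumption (it only selects BFS- or DFS-style ordering, not an arbitrary comparator).

```python
from collections import deque

def graph_search(initial, is_goal, successors, fifo=True):
    """Generic graph search with a closed set, mirroring the slide's
    pseudocode: the goal test happens on removal, each state is expanded
    at most once, so searches on graphs with loops still terminate."""
    nodes = deque([(initial, [initial])])   # fringe of (state, path) pairs
    closed = set()                          # hash-based, as the slide recommends
    while nodes:
        state, path = nodes.popleft() if fifo else nodes.pop()
        if is_goal(state):
            return path
        if state not in closed:
            closed.add(state)
            for nxt in successors(state):
                nodes.append((nxt, path + [nxt]))
    return None
```

On a graph with a self-loop and a cycle, plain tree search would run forever, while the closed set lets both the FIFO and LIFO variants terminate.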

Graph Search
Breadth-first search and uniform-cost search are optimal graph-search strategies. Iterative deepening search and depth-first search can follow a non-optimal path to the goal. Iterative deepening search can be used with a modification: it must check whether a new path to a node is better than the original one; if so, IDS must revise the depths and path costs of the node's descendants.

Summary Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored Variety of uninformed search strategies Iterative deepening search uses only linear space and not much more time than other uninformed algorithms Graph search can be exponentially more efficient than tree search

Thank you!