updated UNIT 2.pptx




Slide Content

23CSE2018 – Foundations of Artificial Intelligence
Class: S.Y. (SEM-II), AIA
Unit II: Problem Solving and Search
AY 2024-2025, SEM-II
Prof. Sushma Mehetre
MIT School of Computing, Department of Computer Science & Engineering

Unit II - Syllabus: PROBLEM SOLVING AND SEARCH (09 hours)
Problem space and search; Toy Problems
Uninformed search methods: Breadth First Search, Uniform Cost Search, Depth First Search, Depth Limited Search, Iterative Deepening Search, Bi-directional Search
Heuristic search methods: Best First, Greedy, A*, AO*, Hill Climbing, Local Search and Optimization, Local Beam Search
Adversarial search: Minimax, Alpha-Beta Pruning

PROBLEM-SOLVING AGENT (GOAL-BASED)
What is the problem? What can the solution (or solutions) be?
The agent decides what to do by finding sequences of actions that lead to desirable states; a general-purpose search algorithm is used to solve the problem, and such algorithms can be analysed for their performance.
Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving (Tour Agent example).

Problem space and search
An agent with several immediate options of unknown value can decide what to do by examining the different possible sequences of actions that lead to states of known value, and then choosing the best sequence. This process of looking for such a sequence is called search.
Simple steps for achieving the goal: "Formulate, Search, Execute".

Problem space and search
Assumptions: the environment is static, observable, and deterministic, and the initial state is known.

WELL-DEFINED PROBLEMS AND SOLUTIONS
State space of the problem = initial state + successor function (the set of all reachable states).
States: all possible world states.
Initial state: the state from which the search starts.
Actions: the set of possible actions.
Transition model: performing an action results in a new state.
Goal test: determines whether a given state is a goal state.
Path cost: a function that assigns a numeric cost to each path.
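The components above can be captured in a small class. The following is a minimal sketch, not part of the slides; the RouteProblem name, the toy graph, and its costs are illustrative assumptions.

```python
# Minimal sketch of a well-defined problem (illustrative names and costs).
class RouteProblem:
    def __init__(self, graph, initial, goal):
        self.graph = graph      # state -> list of (action, next_state, step_cost)
        self.initial = initial  # initial state
        self.goal = goal        # goal state

    def actions(self, state):
        """Set of possible actions in a state."""
        return [action for action, _, _ in self.graph.get(state, [])]

    def result(self, state, action):
        """Transition model: performing an action yields a new state."""
        for a, next_state, _ in self.graph[state]:
            if a == action:
                return next_state
        raise ValueError(f"{action} is not applicable in {state}")

    def goal_test(self, state):
        """Goal test: is this state a goal state?"""
        return state == self.goal

    def step_cost(self, state, action):
        """Cost of one step; the path cost is the sum over the path."""
        for a, _, cost in self.graph[state]:
            if a == action:
                return cost
        raise ValueError(f"{action} is not applicable in {state}")

# Illustrative state space: three locations connected by roads.
roads = {
    "A": [("go-B", "B", 4), ("go-C", "C", 2)],
    "B": [("go-C", "C", 1)],
    "C": [],
}
problem = RouteProblem(roads, initial="A", goal="C")
print(problem.actions("A"), problem.goal_test("C"))  # ['go-B', 'go-C'] True
```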

TOY PROBLEMS: The vacuum world
There are two locations, A and B, and an AI-based vacuum cleaner. Each location can be either dirty or clean. The vacuum cleaner can be in only one location at a time; it can move left, move right, and suck up the dirt when it senses it. The goal of the problem is to get both locations clean.


THE 8-PUZZLE
The 8-puzzle consists of a 3 x 3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach a specified goal state, such as the one shown on the right of the figure.
States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
Initial state: any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
Successor function: generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or Down).
Goal test: checks whether the state matches the goal configuration shown in the figure.
Path cost: each step costs 1, so the path cost is the number of steps in the path.
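The successor function above maps directly onto a few lines of code. The sketch below is not from the slides; it assumes states are encoded as 9-tuples read row by row, with 0 standing for the blank.

```python
# Sketch of the 8-puzzle successor function (blank encoded as 0).
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}  # index offsets for the blank

def successors(state):
    """Yield (action, next_state) pairs for sliding a tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # Skip moves that would take the blank off the board.
        if (action == "Up" and row == 0) or (action == "Down" and row == 2):
            continue
        if (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue
        target = blank + delta
        next_state = list(state)
        next_state[blank], next_state[target] = next_state[target], next_state[blank]
        yield action, tuple(next_state)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)   # blank in the centre
for action, nxt in successors(start):
    print(action, nxt)                # four legal moves from this state
```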


THE 8-QUEENS PROBLEM
The goal is to place eight queens on a chessboard such that no queen attacks any other. There are two main kinds of formulation. An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts.

Search methods
Search algorithms are fundamental in artificial intelligence (AI) for problem solving and pathfinding. They are broadly categorized into uninformed (blind) search and informed (heuristic) search methods.

Informed vs. Uninformed search methods

Uninformed search methods: Breadth First Search, Uniform Cost Search, Depth First Search, Depth Limited Search, Iterative Deepening Search, Bi-directional Search.

Uninformed search methods
Breadth-First Search (BFS): expands the shallowest node first (level by level). Complete, and optimal for uniform-cost problems. Time and space complexity: O(b^d), i.e. exponential.
Depth-First Search (DFS): expands the deepest node first, backtracking when necessary. Not guaranteed to be optimal; can go into infinite loops. Time complexity: O(b^m); space complexity: O(bm).
Uniform Cost Search (UCS): expands the node with the lowest path cost. Optimal, but can be slow if all path costs are nearly the same. Time and space complexity: O(b^(1 + floor(C*/epsilon))), where C* is the optimal path cost and epsilon the smallest step cost.
(Here b is the branching factor, d the depth of the shallowest goal, and m the maximum depth of the search tree.)

Uninformed search methods
Depth-Limited Search (DLS): DFS with a depth limit to prevent infinite loops. Not always complete; may not find a solution if the limit is too low.
Iterative Deepening Depth-First Search (IDDFS): repeatedly applies DFS with increasing depth limits. Completeness and optimality like BFS, but with lower space complexity.

Breadth-First Search (BFS)
How BFS works:
Start from an initial node (root).
Expand all neighboring nodes before moving deeper.
Use a queue (FIFO - First In, First Out) to keep track of nodes.
Continue exploring until the goal node is found or all nodes are visited.

Breadth-First Search (BFS)
BFS traversal starting from A:
Queue: [A]
Visit A → queue: [B, C]
Visit B → queue: [C, D, E]
Visit C → queue: [D, E, F]
Visit D → queue: [E, F]
Visit E → queue: [F]
Visit F → queue empty, end traversal.
Output: A → B → C → D → E → F
Time complexity: O(V + E) (vertices + edges). Space complexity: O(V) (for storing visited nodes and the queue).

Breadth-First Search (BFS)
Properties of BFS:
Complete: always finds a solution if one exists.
Optimal: guarantees the shortest path in an unweighted graph.
High memory usage: needs to store all frontier nodes in memory.
Slow for large graphs: explores all neighbors before moving deeper.
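A minimal BFS sketch follows. It is not from the slides; the adjacency list is an assumption reconstructed from the traversal above (A-B, A-C, B-D, B-E, C-F), and the function uses a FIFO queue and a visited set exactly as described.

```python
from collections import deque

# Example graph reconstructed from the traversal A -> B -> C -> D -> E -> F.
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def bfs(graph, start, goal=None):
    """Return nodes in breadth-first order, stopping early if goal is reached."""
    visited = {start}
    queue = deque([start])          # FIFO frontier
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        if node == goal:
            break
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D', 'E', 'F']
```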

Depth-First Search (DFS)
DFS explores as deeply as possible along each branch before backtracking, making it efficient for exploring large search spaces.
How DFS works:
Start at the initial node (root).
Explore one branch as deeply as possible before backtracking.
Use a stack (LIFO - Last In, First Out) or recursion to track visited nodes.
Continue exploring until the goal node is found or all nodes are visited.

Depth-First Search (DFS)
DFS traversal starting from A (pre-order):
Visit A → push, stack: [C, B]
Visit B → push, stack: [C, E, D]
Visit D → no neighbors left; backtrack to B
Visit E → no neighbors left; backtrack to A
Visit C → push, stack: [F]
Visit F → no neighbors left.
Output (DFS pre-order): A → B → D → E → C → F

Depth-First Search (DFS)
Key properties of DFS:
Low memory usage compared to BFS (uses a stack instead of a queue).
Efficient for deep searches in large graphs.
Not always optimal (may not find the shortest path).
Can get stuck in infinite loops (if cycles are present and not handled).
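The sketch below mirrors the DFS description and the traversal above; as with the BFS sketch, the adjacency list is an assumed reconstruction of the example graph, and children are pushed in reverse order so the leftmost child is explored first.

```python
# Same illustrative graph as in the BFS sketch.
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def dfs(graph, start):
    """Return nodes in depth-first (pre-order) order using an explicit stack."""
    visited = set()
    stack = [start]                 # LIFO frontier
    order = []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push children in reverse so the leftmost child is expanded first.
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)
    return order

print(dfs(graph, "A"))  # ['A', 'B', 'D', 'E', 'C', 'F']
```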

UNIFORM-COST SEARCH
Algorithm for uniform-cost search (a sketch in code follows below):
1. Insert the root node into the priority queue.
2. Repeat while the queue is not empty:
   Remove the element with the highest priority (i.e., the lowest cumulative cost).
   If the removed node is the destination, print the total cost and stop the algorithm.
   Otherwise, enqueue all the children of the current node into the priority queue, with their cumulative cost from the root as the priority.
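As a minimal sketch of this procedure (not the slides' example graph; the weighted graph and node names below are illustrative assumptions), uniform-cost search can be written with Python's heapq as the priority queue:

```python
import heapq

# Illustrative weighted graph: node -> list of (child, step_cost).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1)],
    "C": [],
}

def uniform_cost_search(graph, start, goal):
    """Return (cost, path) of the cheapest path from start to goal, or None."""
    frontier = [(0, start, [start])]        # (cumulative cost, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # lowest cumulative cost first
        if node == goal:
            return cost, path
        for child, step in graph[node]:
            new_cost = cost + step
            # Enqueue only if this is the cheapest known way to reach the child.
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

print(uniform_cost_search(graph, "S", "C"))  # (4, ['S', 'A', 'B', 'C'])
```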

UNIFORM-COST SEARCH (figure: the example weighted graph used in the walkthrough below)

UNIFORM-COST SEARCH (worked example)
First, we have just the source node in the queue. Then, we add its children to the priority queue with their cumulative distance as the priority.

Now, A has the minimum distance (i.e., the maximum priority), so it is extracted from the list. Since A is not the destination, its children are added to the priority queue.

B has the maximum priority now, so its children are added to the queue.

Up next, G is removed and its children are added to the queue.

C and I have the same distance, so we remove them in alphabetical order.

Next, we remove I; however, I has no further children, so there is no update to the queue. After that, we remove D. D only has one child, E, with a cumulative distance of 10; however, E already exists in our queue with a smaller distance, so we do not add it again. The next minimum distance is that of E, so that is what we remove.

Now, the minimum cost is F, so it is removed and its child (J) is added. After this, H has the minimum cost, so it is removed, but it has no children to add.

Finally, we remove the destination node, check that it is our target, and stop the algorithm. The minimum distance between the source and destination nodes (i.e., 8) has been found.

DEPTH LIMITED SEARCH
Depth-limited search (DLS) is almost identical to depth-first search (DFS), but DLS can handle infinite state spaces because it bounds the depth of the search tree with a predetermined limit L. Nodes at this depth limit are treated as if they had no successors.
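A recursive sketch of depth-limited search follows; the tree-shaped graph is an illustrative assumption, and the string "cutoff" signals that the limit was reached without exhausting the space.

```python
# Illustrative graph for depth-limited search.
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def depth_limited_search(graph, node, goal, limit):
    """Return a path to goal, 'cutoff' if the depth limit was hit, or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                 # node treated as if it had no successors
    cutoff_occurred = False
    for child in graph[node]:
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None

print(depth_limited_search(graph, "A", "F", limit=1))  # 'cutoff' (limit too low)
print(depth_limited_search(graph, "A", "F", limit=2))  # ['A', 'C', 'F']
```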


ITERATIVE DEEPENING SEARCH
Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-limited search, that finds the best depth limit. It does this by gradually increasing the limit (first 0, then 1, then 2, and so on) until a goal is found. This occurs when the depth limit reaches d, the depth of the shallowest goal node.
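The sketch below repeats a depth-limited DFS with growing limits, as described above; the graph, node names, and the max_depth cap are illustrative assumptions.

```python
# Illustrative graph for iterative deepening search.
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def dls(graph, node, goal, limit):
    """Plain depth-limited DFS returning a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                     # treat this node as having no successors
    for child in graph[node]:
        path = dls(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    """Run DLS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        path = dls(graph, start, goal, limit)
        if path is not None:
            return path                 # found at the shallowest possible depth d
    return None

print(iterative_deepening_search(graph, "A", "E"))  # ['A', 'B', 'E']
```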


BIDIRECTIONAL SEARCH
The idea of bidirectional search is to reduce the search time by searching forward from the start and backward from the goal simultaneously. When the two search frontiers intersect, the algorithm can reconstruct a single path that extends from the start state through the frontier intersection to the goal.

BIDIRECTIONAL SEARCH
Step 1: Say A is the initial node, O is the goal node, and H is the intersection node.
Step 2: We search simultaneously forward from the start node and backward from the goal node.
Step 3: Whenever the forward search and the backward search intersect at one node, the search stops.
Advantages:
• One of the main advantages of bidirectional search is the speed at which we get the desired results.
• It drastically reduces the search time by running the two searches simultaneously.
• It also saves resources, as less memory is needed to store the explored states.

BIDIRECTIONAL SEARCH
Disadvantages:
• The fundamental issue with bidirectional search is that the goal state must be known in advance, which drastically reduces its use cases.
• The algorithm must be robust enough to detect the intersection at which the search should come to an end, or else there is a possibility of an infinite loop.
• It is also not always possible to search backward through all states.
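A sketch of bidirectional breadth-first search on an undirected graph follows; the graph is an illustrative assumption. Two frontiers expand alternately, the search stops when they meet, and the two half-paths are stitched together at the meeting node.

```python
from collections import deque

# Illustrative undirected graph (adjacency lists in both directions).
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def bidirectional_search(graph, start, goal):
    """Return a path from start to goal found by two meeting BFS frontiers."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}   # for path reconstruction
    fwd_queue, bwd_queue = deque([start]), deque([goal])

    def expand(queue, parent, other_parent):
        """Expand one node; return the meeting node if the frontiers intersect."""
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
                if neighbor in other_parent:
                    return neighbor
        return None

    while fwd_queue and bwd_queue:
        meet = expand(fwd_queue, fwd_parent, bwd_parent)
        if meet is None:
            meet = expand(bwd_queue, bwd_parent, fwd_parent)
        if meet is not None:
            # Stitch the forward half and the backward half at the meeting node.
            path, node = [], meet
            while node is not None:
                path.append(node)
                node = fwd_parent[node]
            path.reverse()
            node = bwd_parent[meet]
            while node is not None:
                path.append(node)
                node = bwd_parent[node]
            return path
    return None

print(bidirectional_search(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```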

Informed Search Strategies Informed search strategies employ knowledge about the problem domain to guide the search process. They utilize heuristics, which are rules of thumb that estimate the distance to the goal. These heuristics provide valuable insights, enabling the algorithm to prioritize paths that are likely to lead to the optimal solution. This strategy contrasts with uninformed search algorithms, which explore the search space blindly.

Greedy Best-First Search
The greedy best-first search algorithm always selects the path that appears best at the moment. It combines aspects of depth-first search and breadth-first search, and it uses a heuristic function to guide the search, which lets it take advantage of both algorithms. With best-first search, at each step we can choose the most promising node.

Greedy Best-First Search
In the best-first search algorithm, we expand the node that is closest to the goal node, where the closeness is estimated by the heuristic function, i.e. f(n) = h(n), where h(n) is the estimated cost from node n to the goal. Greedy best-first search is implemented with a priority queue.

Greedy BFS Algorithm
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n with the lowest value of h(n) from the OPEN list and place it in the CLOSED list.
Step 4: Expand node n and generate its successors.
Step 5: Check each successor of node n to see whether it is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.
Step 6: For each successor node, the algorithm evaluates f(n) and checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.

Greedy BFS Algorithm
Advantages:
Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
It can be more efficient than plain BFS or DFS.
Disadvantages:
It can behave like an unguided depth-first search in the worst case.
It can get stuck in a loop, as DFS can.
The algorithm is not optimal.

Example
In this example we use two lists, OPEN and CLOSED. The iterations for traversing the example are shown below. At each iteration a node is expanded using the evaluation function f(n) = h(n), whose values are given in the table.

Example: OPEN / CLOSED trace
Initially:    OPEN = S(13)                       CLOSED = (empty)
Iteration 1:  OPEN = A(12), B(4)                 CLOSED = S
Iteration 2:  OPEN = A(12), E(8), F(2)           CLOSED = S, B
Iteration 3:  OPEN = A(12), E(8), I(9), G(0)     CLOSED = S, B, F
Iteration 4:  G is the goal; the search stops    CLOSED = S, B, F, G
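The sketch below reproduces this trace. It is not from the slides: the graph and h values are an assumption reconstructed from the table (S expands to A and B, B to E and F, F to I and G), and the goal is tested when a node is removed from OPEN rather than at successor generation, which gives the same expansion order here.

```python
import heapq

# Graph and heuristic reconstructed (as an assumption) from the trace above.
graph = {
    "S": ["A", "B"],
    "A": [],
    "B": ["E", "F"],
    "E": [],
    "F": ["I", "G"],
    "I": [],
    "G": [],
}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}

def greedy_best_first(graph, h, start, goal):
    """Expand the OPEN node with the smallest h(n) until the goal is reached."""
    open_list = [(h[start], start, [start])]    # OPEN: priority queue ordered by h(n)
    closed = set()                              # CLOSED: already expanded nodes
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        closed.add(node)
        for child in graph[node]:
            already_open = any(child == n for _, n, _ in open_list)
            if child not in closed and not already_open:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'B', 'F', 'G']
```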

Example (figure): a search graph with nodes S, A, B, C, D, H, F, G, E and the values 10, 9, 7, 8, 8, 6, 6, 3 shown on the slide.

Properties
Time complexity: the worst-case time complexity of greedy best-first search is O(b^m).
Space complexity: the worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.
Complete: no; greedy best-first search is incomplete even if the given state space is finite.
Optimal: no; greedy best-first search is not optimal.

What is the A* Search Algorithm?
Moving from one place to another is a task we do almost every day. Finding the shortest path by ourselves can be difficult; today we have algorithms that find that shortest route for us, and A* is one of the most popular of them.

What is the A* Algorithm?
A* is an advanced best-first search algorithm that prefers cheaper paths over longer, costlier ones. A* is both complete and optimal:
Complete: if a path from the source to the destination exists, A* will find it.
Optimal: A* is guaranteed to find the least-cost path from the source to the destination.
So does that make A* the best algorithm? In a sense, yes; but A* is slow and needs a lot of memory because it keeps all the generated paths. This gives other, faster algorithms an edge in some settings, yet A* remains one of the best general-purpose search algorithms.
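A* expands nodes in order of f(n) = g(n) + h(n), where g(n) is the cost of the path so far and h(n) is a heuristic estimate of the remaining cost to the goal. The sketch below is not the slides' example; the weighted graph and heuristic values are illustrative assumptions (chosen to be admissible).

```python
import heapq

# Illustrative weighted graph and admissible heuristic (assumed, not from the slides).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("C", 5), ("G", 12)],
    "B": [("C", 2)],
    "C": [("G", 3)],
    "G": [],
}
h = {"S": 5, "A": 4, "B": 2, "C": 1, "G": 0}

def a_star(graph, h, start, goal):
    """Return (cost, path) of the least-cost path, expanding by f = g + h."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph[node]:
            new_g = g + step
            # Keep only the cheapest known way to reach each child.
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None

print(a_star(graph, h, "S", "G"))  # (8, ['S', 'A', 'B', 'C', 'G'])
```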

A* Example: the worked A* example is presented as a sequence of figures across the remaining slides; the only surviving text on those slides lists the nodes S, D, E, F, G.