Problem solving in Artificial Intelligence.pptx

About This Presentation

"Problem-Solving Strategies in Artificial Intelligence" delves into the core techniques and methods employed by AI systems to address complex problems. This exploration covers the two main categories of search strategies, uninformed and informed, revealing how they navigate the solution space. I...


Slide Content

Problem-Solving Strategies in Artificial Intelligence
Topics: Problem Solving Methods; Search Strategies (Uninformed, Informed); Heuristics; Local Search Algorithms and Optimization Problems
Dr. J. SENTHILKUMAR, Assistant Professor, Department of Computer Science and Engineering, KIT-KALAIGNARKARUNANIDHI INSTITUTE OF TECHNOLOGY

Problem Solving Methods: Solving a problem with AI involves defining the search space, deciding the start and goal states, and then finding a path from the start state to the goal state through the search space. The movement from the start state to the goal state is guided by a set of rules designed specifically for that particular problem.

Problem: It is the question which is to be solved. To solve a problem, it must be precisely defined: this means defining the start state, the goal state, the other valid states, and the transitions between them. Finding the Solution: After the problem and the related knowledge are represented in a suitable format, an appropriate methodology is chosen which uses that knowledge to transform the start state into the goal state. The techniques for finding the solution are called search techniques, and various search techniques have been developed for this purpose.

Representation of an AI Problem: An AI problem can be described in the following four parts:
• A lexical part, which determines which symbols are allowed in the representation of the problem. Like the ordinary meaning of a lexicon, this part abstracts the fundamental features of the problem.
• A structural part, which describes constraints on how the symbols can be arranged. This corresponds to working out the possibilities for joining symbols into higher structural units.
• A procedural part, which specifies the access procedures that make it possible to create descriptions, to modify them, and to answer questions using them.
• A semantic part, which establishes a way of associating meaning with the descriptions.

WELL-DEFINED PROBLEMS AND SOLUTIONS: A problem can be defined formally by five components:
• INITIAL STATE: the state that the agent starts in.
• ACTIONS: a description of the possible actions available to the agent.
• TRANSITION MODEL: a description of what each action does, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. We also use the term SUCCESSOR to refer to any state reachable from a given state by a single action. Together, the initial state, actions, and transition model implicitly define the STATE SPACE of the problem: the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or GRAPH in which the nodes are states and the links between nodes are actions. A PATH in the state space is a sequence of states connected by a sequence of actions.
• GOAL TEST: determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
• PATH COST: a function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
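To make the five components concrete, here is a minimal Python sketch of this formal definition as an interface; the class and method names (SearchProblem, initial_state, step_cost, and so on) are illustrative choices, not from the slides.

```python
# A minimal sketch of the five-component problem definition above.
class SearchProblem:
    """Initial state, actions, transition model, goal test, path cost."""

    def initial_state(self):
        raise NotImplementedError

    def actions(self, s):
        """Actions applicable in state s."""
        raise NotImplementedError

    def result(self, s, a):
        """Transition model RESULT(s, a): the state reached by doing a in s."""
        raise NotImplementedError

    def goal_test(self, s):
        """True if s is a goal state."""
        raise NotImplementedError

    def step_cost(self, s, a, s2):
        """Cost of taking action a in s to reach s2 (unit cost by default)."""
        return 1
```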

Formulating Problems: This formulation seems reasonable, but it is still a model, an abstract mathematical description, and not the real thing. Agents differ in how much knowledge they have about their actions and about the state they are in; this depends on how the agent is connected to its environment through its percepts and actions. We find that there are four essentially different types of problem:
• Single-state problems
• Multiple-state problems
• Contingency problems
• Exploration problems

Toy Problems: The two-square vacuum world can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
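As a hedged illustration, the vacuum world above can be written as an instance of the SearchProblem interface sketched earlier; the state encoding (agent location plus a set of dirty squares) is one reasonable choice among several.

```python
# The two-square vacuum world as a SearchProblem.
# State = (agent_location, frozenset_of_dirty_squares).
class VacuumWorld(SearchProblem):
    def initial_state(self):
        return ('A', frozenset({'A', 'B'}))   # agent at A, both squares dirty

    def actions(self, s):
        return ['Left', 'Right', 'Suck']

    def result(self, s, a):
        loc, dirt = s
        if a == 'Left':
            return ('A', dirt)                # no effect if already leftmost
        if a == 'Right':
            return ('B', dirt)                # no effect if already rightmost
        return (loc, dirt - {loc})            # Suck removes dirt here, if any

    def goal_test(self, s):
        return not s[1]                       # goal: no dirty squares remain
```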

The 8-Puzzle:
• States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the resulting state; for example, if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration shown in Figure 3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
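A small sketch of the 8-puzzle actions and transition model described above, assuming a state is encoded as a 9-tuple read row by row with 0 for the blank; the encoding and function names are illustrative.

```python
# Offsets of the blank within the flattened 3x3 board.
MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}

def puzzle_actions(state):
    """Blank moves that stay on the 3x3 board."""
    b = state.index(0)              # position of the blank
    acts = []
    if b >= 3:     acts.append('Up')
    if b < 6:      acts.append('Down')
    if b % 3 != 0: acts.append('Left')
    if b % 3 != 2: acts.append('Right')
    return acts

def puzzle_result(state, action):
    """Swap the blank with the tile it moves onto."""
    b = state.index(0)
    t = b + MOVES[action]
    s = list(state)
    s[b], s[t] = s[t], s[b]
    return tuple(s)
```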

Some more examples of AI problems:
• Tic-Tac-Toe
• 8-Queens Problem
• Chess Problem
• Tower of Hanoi
• Traveling Salesperson Problem
• Monkey and Banana Problem
• Cryptarithmetic Problem
• Block World Problem

Search: The process of finding a path from the start state to the desired goal state through the space of possible states. Much of the work in the field of search is about finding the right search strategy for a particular problem. There are two kinds of search, distinguished by whether they use information about the goal: uninformed search algorithms are given no information about the problem other than its definition; although some of these algorithms can solve any solvable problem, none of them can do so efficiently. Informed search algorithms, on the other hand, can do quite well given some guidance on where to look for solutions.

Search algorithms form the core of such Artificial Intelligence programs. And while we may be inclined to think that they have limited applicability only in areas of gaming and puzzle-solving, such algorithms are in fact used in many more AI areas, such as route and cost optimization, action planning, knowledge mining, robotics, autonomous driving, computational biology, software and hardware verification, theorem proving, etc. In a way, many AI problems can be modelled as a search problem where the task is to reach the goal from the initial state via state transformation rules. The search space is thus defined as a graph (or a tree), and the aim is to reach the goal from the initial state via the shortest path, in terms of cost, length, a combination of both, etc.

All search methods can be broadly classified into two categories:
• Uninformed (or exhaustive, or blind) methods, where the search is carried out without any additional information beyond what is provided in the problem statement. Examples include Breadth First Search and Depth First Search.
• Informed (or heuristic) methods, where the search is carried out using additional information to determine the next step towards finding the solution. Best First Search is an example of such an algorithm.
Informed search methods are more efficient, lower in cost, and higher in performance than uninformed search methods.

UNINFORMED SEARCH STRATEGIES: This is also called blind search. The term means that the strategies have no additional information about states beyond that provided in the problem definition. All they can do is generate successors and distinguish a goal state from a non-goal state. All search strategies are distinguished by the order in which nodes are expanded. Strategies that know whether one non-goal state is "more promising" than another are called informed search or heuristic search strategies.
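As one concrete example of an uninformed strategy, here is a minimal breadth-first search sketch over the SearchProblem interface assumed earlier; it only generates successors and tests for the goal, exactly as the paragraph above describes.

```python
from collections import deque

def breadth_first_search(problem):
    """Return a list of actions to a goal, or None if no solution exists."""
    start = problem.initial_state()
    if problem.goal_test(start):
        return []
    frontier = deque([(start, [])])   # FIFO queue of (state, actions so far)
    explored = {start}
    while frontier:
        state, path = frontier.popleft()
        for a in problem.actions(state):
            child = problem.result(state, a)
            if child in explored:
                continue              # skip states we have already generated
            if problem.goal_test(child):
                return path + [a]
            explored.add(child)
            frontier.append((child, path + [a]))
    return None
```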

Informed (Heuristic) Search Strategies: An informed search strategy is one that uses problem-specific knowledge beyond the definition of the problem itself. It can find solutions more efficiently than an uninformed strategy. Examples:
• Best-first search
• Greedy best-first search
• A* search
• AO* search
Best-First Search: It always selects the path which appears best at that moment. It is a combination of DFS and BFS. It uses a heuristic function h(n) satisfying h(n) <= h*(n), where h(n) is the estimated (heuristic) cost and h*(n) is the actual cheapest cost to the goal. Best-first search is implemented with a priority queue: the node with the lowest evaluation is selected for expansion, because the evaluation measures the distance to the goal. A sketch follows below.
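A hedged sketch of this priority-queue scheme: a generic best-first search that expands the node with the lowest value of a supplied evaluation function f. Greedy best-first and A* below are obtained by plugging in different choices of f; the names and the f(state, g) signature are illustrative.

```python
import heapq
import itertools

def best_first_search(problem, f):
    """Expand lowest-f node first; f(state, g) sees the path cost so far."""
    counter = itertools.count()       # tie-breaker so states are never compared
    start = problem.initial_state()
    frontier = [(f(start, 0), next(counter), start, [], 0)]
    best_g = {start: 0}               # cheapest known path cost per state
    while frontier:
        _, _, state, path, g = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path               # list of actions to the goal
        for a in problem.actions(state):
            child = problem.result(state, a)
            g2 = g + problem.step_cost(state, a, child)
            if child not in best_g or g2 < best_g[child]:
                best_g[child] = g2
                heapq.heappush(frontier,
                               (f(child, g2), next(counter), child, path + [a], g2))
    return None                       # goal not reachable
```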

Advantages: Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms. It is more efficient than the BFS and DFS algorithms.
Disadvantages: It can behave like an unguided depth-first search in the worst case. It can get stuck in a loop, as DFS can. The algorithm is not optimal.

Heuristic Functions: A heuristic function, or simply a heuristic, is a function that ranks alternatives in various search algorithms at each branching step, based on the available information, in order to decide which branch to follow during a search. The key component of the best-first search algorithm is a heuristic function, denoted by h(n): h(n) = estimated cost of the cheapest path from node n to a goal node. Heuristic functions are the most common form in which additional knowledge is imparted to the search algorithm.

Greedy Best-First Search: Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. It evaluates nodes using just the heuristic function: f(n) = h(n). Using the straight-line distance heuristic h_SLD, the goal state can be reached faster.
Properties of greedy search:
• Complete? No; it can get stuck in loops. It is complete in finite spaces with repeated-state checking.
• Time? O(b^m), but a good heuristic can give dramatic improvement.
• Space? O(b^m); it keeps all nodes in memory.
• Optimal? No.
Greedy best-first search is not optimal, and it is incomplete. The worst-case time and space complexity is O(b^m), where m is the maximum depth of the search space.
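Expressed against the skeleton sketched above, greedy best-first search is simply the evaluation f(n) = h(n); the heuristic h (for example, straight-line distance) is assumed to be supplied by the caller.

```python
def greedy_best_first(problem, h):
    # Ignore the path cost g; rank nodes by the heuristic alone.
    return best_first_search(problem, lambda s, g: h(s))
```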

A* Search: The A* search algorithm finds the shortest path through the search space using a heuristic function. It uses h(n) together with g(n), the cost to reach node n from the start state. The algorithm expands less of the search tree and provides the optimal result faster. It is similar to Uniform Cost Search (UCS) except that it uses g(n) + h(n) instead of g(n) alone. A* uses both the search heuristic and the cost to reach the node, combining them as f(n) = g(n) + h(n) (the "fitness number"), where:
• f(n) = estimated cost of the cheapest solution through n
• g(n) = cost to reach node n from the start state
• h(n) = estimated cost to reach the goal node from node n
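Likewise, A* is the same best-first skeleton with f(n) = g(n) + h(n): the cost already paid plus the estimated cost still to come.

```python
def a_star_search(problem, h):
    # f(n) = g(n) + h(n): path cost so far plus estimated cost to the goal.
    return best_first_search(problem, lambda s, g: g + h(s))
```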

Advantages: A* performs better than other search algorithms. It is optimal and complete. It can solve very complex problems.
Disadvantages: It does not always produce the shortest path if the heuristic is not admissible. It is not practical for various large-scale problems, because it keeps all generated nodes in memory.

AO* Algorithm: AO* is the best algorithm for solving problems that can be represented as AND-OR graphs. The problem is divided into a set of subproblems, where each subproblem can be solved separately.

State Space Landscape: A landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function). If elevation corresponds to cost, then the aim is to find the lowest valley, a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak, a global maximum. Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.

Simple Hill Climbing: Algorithm:
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in the current state:
   a. Select an operator that has not yet been applied to the current state and apply it to produce a new state.
   b. Evaluate the new state. If it is a goal state, then return it and quit. If it is not a goal state but it is better than the current state, then make it the current state. If it is not better than the current state, then continue in the loop.
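A minimal sketch of this loop in Python: take the first successor that improves on the current state, and stop when none does. The successors and value callables are assumed helpers, not named in the slides.

```python
def simple_hill_climbing(state, successors, value, goal_test):
    """Move to the first successor better than the current state."""
    while True:
        if goal_test(state):
            return state
        for s in successors(state):
            if value(s) > value(state):   # first improving move wins
                state = s
                break
        else:
            return state                  # no operator improves: local maximum
```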

Steepest-Ascent Hill Climbing: A variation of simple hill climbing that considers all the moves from the current state and selects the best one as the next state is called steepest-ascent hill climbing. Algorithm:
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state:
   a. Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
   b. For each operator that applies to the current state: apply the operator and generate a new state; evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC: if it is better, set SUCC to this state; otherwise leave SUCC alone.
   c. If SUCC is better than the current state, then set the current state to SUCC.
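The steepest-ascent variant differs only in examining all successors before moving; a hedged sketch under the same assumed helpers:

```python
def steepest_ascent_hill_climbing(state, successors, value, goal_test):
    """Move to the best successor, but only if it beats the current state."""
    while not goal_test(state):
        best = max(successors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state                  # no successor is better: stop
        state = best
    return state
```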

LOCAL SEARCH IN CONTINUOUS SPACES:
• We have considered algorithms that work only in discrete environments, but real-world environments are continuous.
• Local search amounts to maximizing a continuous objective function in a multi-dimensional vector space. This is hard to do in general.
• One immediate fallback: discretize the space near each state and apply a discrete local search strategy (e.g., stochastic hill climbing, simulated annealing).
• The problem often resists a closed-form solution: fake up an empirical gradient, which amounts to greedy hill climbing in a discretized state space.
• The Newton-Raphson method can be employed to find maxima.
• Continuous problems have the same difficulties: plateaus, ridges, local maxima, etc.
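As a sketch of the "empirical gradient" idea in the list above, the function below estimates the gradient by central finite differences and takes a small uphill step; the step sizes and iteration count are assumed tuning parameters, not values from the slides.

```python
def empirical_gradient_ascent(f, x, step=0.01, eps=1e-4, iters=1000):
    """Greedy uphill steps along a finite-difference estimate of grad f."""
    x = list(x)
    for _ in range(iters):
        grad = []
        for i in range(len(x)):
            x[i] += eps
            up = f(x)                     # probe f slightly above x[i]
            x[i] -= 2 * eps
            down = f(x)                   # and slightly below
            x[i] += eps                   # restore x[i]
            grad.append((up - down) / (2 * eps))
        x = [xi + step * gi for xi, gi in zip(x, grad)]
    return x
```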

Online Search Agents and Unknown Environments: Online search problems:
• Offline search (all algorithms so far): compute a complete solution, ignoring the environment, then carry out the action sequence.
• Online search: interleave computation and action: compute, act, observe, compute, ...
• Online search is good for dynamic, semi-dynamic, and stochastic domains, and whenever offline search would yield exponentially many contingencies.
• Online search is necessary for exploration problems, where states and actions are unknown to the agent and the agent uses actions as experiments to determine what to do.

Examples: a robot exploring an unknown building; a classical hero escaping a labyrinth.
• Assume the agent knows: the actions available in state s; the step-cost function c(s, a, s′); whether state s is a goal state; when it has visited a state s previously; an admissible heuristic function h(s).
• Note that the agent doesn't know the outcome state s′ for a given action a until it tries the action (and it doesn't know all the actions available from a state s in advance).
• The competitive ratio compares the actual cost with the cost the agent would incur if it knew the search space.
• No agent can avoid dead ends in all state spaces. Robotics examples: staircase, ramp, cliff, terrain.
• Assume the state space is safely explorable: some goal state is always reachable.

Online Search Agents:
• Interleaving planning and acting hamstrings offline search: A* expands arbitrary nodes without waiting for the outcome of an action, but an online algorithm can expand only the node it physically occupies. It is best to explore nodes in physically local order.
• This suggests using depth-first search: the next node is always a child of the current one. When all actions have been tried, the agent can't just drop the state; it must physically backtrack.
• Online depth-first search may have an arbitrarily bad competitive ratio (wandering past the goal). It is okay for exploration, but bad for minimizing path cost.
• Online iterative-deepening search: the competitive ratio stays small for a state space that is a uniform tree.

Online Local Search:
• Hill-climbing search also has physical locality in its node expansions; it is, in fact, already an online search algorithm. Local maxima are problematic: we can't randomly transport the agent to a new state in an effort to escape a local maximum.
• Random walk as an alternative: select an action at random from the current state. This will eventually find a goal node in a finite space, but it can be very slow, especially if "backward" steps are as common as "forward" ones.
• Hill climbing with memory instead of randomness: store a "current best estimate" of the cost to the goal at each visited state; the starting estimate is just h(s). Augment the estimate based on experience in the state space. This tends to "flatten out" local minima, allowing progress. Employ optimism under uncertainty: untried actions are assumed to have the least possible cost, encouraging exploration of untried paths. A sketch follows below.
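A hedged sketch of "hill climbing with memory" in the spirit of LRTA*: a table H of cost-to-goal estimates is seeded optimistically from h(s) and raised as experience shows a state is worse than hoped; all names here are illustrative, and actions, result, cost, and h are assumed callables.

```python
def lrta_step(s, actions, result, cost, h, H):
    """Choose one action from state s and update the estimate table H."""
    if s not in H:
        H[s] = h(s)                       # optimism: start from the heuristic
    def score(a):
        s2 = result(s, a)
        return cost(s, a, s2) + H.get(s2, h(s2))
    a = min(actions(s), key=score)        # greedy w.r.t. current estimates
    H[s] = max(H[s], score(a))            # raise estimate: flattens local minima
    return a
```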

Learning in Online Search:
• Rampant ignorance is a ripe opportunity for learning. The agent learns a "map" of the environment: the outcome of each action in each state.
• Local search agents improve the accuracy of their evaluation function, updating the estimate of the value at each visited state.
• We would like to infer a higher-level domain model, for example: "Up" in maze search increases the y-coordinate.
• This requires a formal way to represent and manipulate such general rules (so far, the rules have been hidden within the successor function), and algorithms that can construct general rules based on observations of the effects of actions.