AI-UNIT-2 PPT FINAL Complete unit 2.pptx

Slide Content

UNIT 2 UNIT - II Problem Solving by Search-II and Propositional Logic Adversarial Search: Games, Optimal Decisions in Games, Alpha–Beta Pruning, Imperfect Real-Time Decisions. Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems, Constraint Propagation, Backtracking Search for CSPs, Local Search for CSPs, The Structure of Problems. Propositional Logic: Knowledge-Based Agents, The Wumpus World, Logic, Propositional Logic, Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn clauses and definite clauses, Forward and backward chaining, Effective Propositional Model Checking, Agents Based on Propositional Logic.

Adversarial Search: Games, Optimal Decisions in Games, Alpha–Beta Pruning, Imperfect Real-Time Decisions.

Adversarial Search in Artificial Intelligence (AI). Adversarial search is a game-playing technique in which the agents operate in a competitive environment. The agents (a multi-agent setting) are given conflicting goals: they compete with one another and try to defeat each other in order to win the game. Such conflicting goals give rise to adversarial search. Here, game playing refers to games in which human intelligence and logic are the deciding factors, excluding factors such as luck. Tic-tac-toe, chess, checkers, etc., are games where no luck is involved; only reasoning matters. Mathematically, this search is based on the concept of Game Theory. According to game theory, a game is played between two players; for the game to end, one player must win and the other automatically loses.

Techniques required to get the best optimal solution. There is always a need to choose algorithms that provide the best solution in limited time, so we use the following techniques: Pruning: a technique that ignores the portions of a search tree that make no difference to its final result. Heuristic Evaluation Function: it approximates the value of a state at each level of the search tree, before the goal node is reached.

Elements of Game Playing search. To play a game, we use a game tree to enumerate all the possible choices and pick the best one. A game is defined by the following elements: S0: the initial state from which the game begins. PLAYER(s): defines which player has the move in state s. ACTIONS(s): defines the set of legal moves in state s. RESULT(s, a): the transition model, which defines the state resulting from a move. TERMINAL-TEST(s): returns true when the game has ended. UTILITY(s, p): defines the final numeric value for player p when the game ends in state s; this function is also known as the objective function or payoff function. Typical payoffs are: (-1) if the PLAYER loses, (+1) if the PLAYER wins, and (0) if the game is a draw between the PLAYERS.

For example, in chess or tic-tac-toe there are three possible outcomes: win, lose, or draw, with values +1, -1, or 0. Let us understand how these elements work with the help of a game tree designed for tic-tac-toe. Here, the nodes represent game states and the edges represent the moves taken by the players. (Figure: example boards for the 8-puzzle, chess, and tic-tac-toe.)

A game tree for tic-tac-toe. INITIAL STATE (S0): the top node in the game tree represents the initial state and shows all the possible opening choices. PLAYER(s): there are two players, MAX and MIN; MAX begins the game by picking the best move and placing X in an empty square. ACTIONS(s): both players make moves into the empty squares, turn by turn. RESULT(s, a): the moves made by MIN and MAX decide the outcome of the game. TERMINAL-TEST(s): when all the squares are filled (or one player has won), the game is in a terminal state. UTILITY: at the end we know who wins, MAX or MIN, and the corresponding payoff is assigned.

Types of algorithms in Adversarial search. In a normal search, we follow a sequence of actions to reach the goal or to finish the game optimally. In an adversarial search, however, the outcome also depends on the opponent's moves. The solution for the goal state should still be optimal, because each player tries to win the game in the shortest way and under limited time. The main types of adversarial search algorithms are: the Minimax algorithm and Alpha-beta pruning.

Minimax Strategy. In artificial intelligence, minimax is a decision-making strategy from game theory which is used to minimize the chance of losing and maximize the chance of winning. This strategy is also known as 'Minmax', 'MM', or the 'saddle point' strategy. Basically, it is a two-player game strategy where if one player wins, the other loses. It models games we play in day-to-day life: if two people play chess, the result will favor one player and disfavor the other, and the one who plays with the best effort and cleverness will win. We can easily understand this strategy via a game tree, where the nodes represent the states of the game and the edges represent the moves made by the players. The two players are: MIN, who tries to decrease MAX's chances of winning the game, and MAX, who tries to increase his own chances of winning.

Minimax Strategy (continued). The two players play alternately, turn by turn, and follow the above strategy: if one wins, the other loses. Both players regard each other as competitors and try their best to defeat one another. In the minimax strategy, the utility values are produced by a heuristic (evaluation) function at the leaves and propagated from the terminal nodes back up to the root node. The algorithm follows a backtracking technique, backing values up the tree to find the best choice. MAX chooses the path that increases its utility value, while MIN chooses the path that minimizes MAX's utility value.

MINIMAX Algorithm. The minimax algorithm is a recursive (backtracking) algorithm used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent also plays optimally. The Min-Max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. In this algorithm, two players play the game: one is called MAX and the other is called MIN. The players are opponents of each other; MAX selects the maximized value and MIN selects the minimized value. The minimax algorithm performs a depth-first search to explore the complete game tree: it proceeds all the way down to the terminal nodes of the tree and then backs the values up the tree as the recursion unwinds.
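
To make the recursion concrete, here is a minimal minimax sketch. It assumes, purely for illustration, that the game tree is given as nested Python lists whose leaves are utility values for MAX; the tree encoding and function name are not from the slides.

```python
# Minimal minimax sketch: a game tree is given as nested lists whose
# leaves are utility values for MAX. (Illustrative assumption: the whole
# tree is available in memory; real game programs generate moves on the fly.)

def minimax(node, maximizing=True):
    if not isinstance(node, list):        # terminal node: return its utility
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

if __name__ == "__main__":
    # MAX to move at the root; MIN at the next level; leaves are utilities.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(minimax(tree))   # MAX picks the branch whose MIN-value is largest -> 3
```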

MINIMAX Algorithm solved example

PROPERTIES OF MINI-MAX ALGORITHM. Complete: the Min-Max algorithm is complete; it will definitely find a solution (if one exists) in a finite search tree. Optimal: the Min-Max algorithm is optimal if both opponents play optimally. Time complexity: since it performs DFS on the game tree, the time complexity of the Min-Max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree. Space complexity: the space complexity of the minimax algorithm is similar to DFS, which is O(bm). LIMITATION OF THE MINIMAX ALGORITHM. The main drawback of the minimax algorithm is that it becomes very slow for complex games such as chess or Go. Such games have a huge branching factor, and the player has many choices to consider. This limitation of the minimax algorithm can be reduced by alpha-beta pruning, which is discussed in the next topic. MINIMAX Algorithm example 2

Alpha-beta Pruning in Artificial Intelligence. Alpha-beta pruning is an advanced version of the MINIMAX algorithm. The drawback of the minimax strategy is that it explores every node in the tree to find the best path, which increases its time complexity. Since performance is the first consideration for any optimal algorithm, alpha-beta pruning reduces this drawback by exploring fewer nodes of the search tree. It cuts off the search so that fewer nodes are examined: it makes the same moves as the minimax algorithm, but prunes the unwanted branches using the pruning technique (discussed under adversarial search). Alpha-beta pruning works with two threshold values, α (alpha) and β (beta). α: the best (highest) value found so far for the MAX player; it is a lower bound, initialized to negative infinity. β: the best (lowest) value found so far for the MIN player; it is an upper bound, initialized to positive infinity.

Alpha-beta pruning algorithm: Alpha-beta pruning is a modified version of the minimax algorithm; it is an optimization technique for minimax. As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. The technique by which we can compute the correct minimax decision without checking every node of the game tree is called pruning. Because it involves the two threshold parameters alpha and beta for future expansion, it is called alpha-beta pruning (also called the Alpha-Beta algorithm). Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only leaves but entire subtrees.

The two parameters can be defined as follows. Alpha: the best (highest-value) choice we have found so far at any point along the path of the Maximizer; its initial value is −∞. Beta: the best (lowest-value) choice we have found so far at any point along the path of the Minimizer; its initial value is +∞. Alpha-beta pruning returns the same move as the standard minimax algorithm, but it removes the nodes that do not affect the final decision yet make the algorithm slow; by pruning these nodes, it makes the algorithm faster. Condition for alpha-beta pruning: a node can be pruned when α ≥ β. Key points about alpha-beta pruning: the Max player only updates the value of alpha, and the Min player only updates the value of beta. While backtracking up the tree, the node values (not the alpha and beta values) are passed to the upper nodes; the alpha and beta values are only passed down to the child nodes.
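
The α/β updates and the pruning condition α ≥ β can be illustrated with a short sketch over the same nested-list tree encoding assumed above; this is an illustrative sketch, not the slides' own implementation.

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    # Terminal node: return its utility value.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)          # MAX only updates alpha
            if alpha >= beta:                  # pruning condition
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)            # MIN only updates beta
            if alpha >= beta:
                break
        return value

if __name__ == "__main__":
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(alphabeta(tree))   # same answer as plain minimax (3), with fewer node visits
```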

Alpha-beta pruning algorithm solved example

Alpha-beta pruning algorithm solved example 2. Time complexity improves from O(b^d) to O(b^(d/2)) in the best case, where b = branching factor and d = depth of the tree. Advantages: Faster computation: alpha-beta pruning skips unnecessary branches in the game tree, making decision-making quicker than the standard minimax algorithm. Deeper search: it allows the algorithm to explore deeper into the game tree within the same time, leading to better decision-making. Limitations: Depends on move ordering: the algorithm works best if strong moves are evaluated first; poor move ordering means fewer branches are pruned, reducing efficiency. Not suitable for all games: alpha-beta pruning is not helpful in games that involve chance (like dice games) or incomplete information. Move ordering is challenging: getting the right move order to maximize pruning is difficult, and without it the algorithm's performance is limited.

Imperfect real-time decisions. Because moves must be made in a reasonable amount of time, it is usually not feasible to consider the whole game tree (even with alpha-beta), so programs should cut the search off earlier and apply a heuristic evaluation function to states in the search, effectively turning nonterminal nodes into terminal leaves. That is, we alter minimax or alpha-beta in two ways: replace the utility function by a heuristic evaluation function EVAL, which estimates the position's utility; and replace the terminal test by a cutoff test that decides when to apply EVAL.

Evaluation function An evaluation function returns an estimate of the expected utility of the game from a given position. How do we design good evaluation functions? The evaluation function should order the terminal states in the same way as the true utility function. The computation must not take too long. For nonterminal states, the evaluation function should be strongly correlated with the actual chances of winning.  

Two ways to design an evaluation function. An evaluation function is a way to assign a number (value) to the possible outcomes of an event or process, so you can compare outcomes and decide which is better or more likely. (a) Expected value. The expected value is the average outcome you would expect from an experiment if you repeated it many times. Formula: EV = P(X) × n, where EV = expected value (average outcome), P(X) = probability of the event happening (for example, getting heads), and n = number of times the experiment is repeated. Example: how many heads would you expect if you flipped a coin twice? EV = 0.5 × 2 = 1 head.

(b) Weighted linear function. A weighted linear function combines several factors (features) into a single score by multiplying each feature by a weight (which shows how important that feature is) and then adding the results together: Eval(s) = w1·f1(s) + w2·f2(s) + ... + wn·fn(s).
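
A minimal sketch of such a weighted linear evaluation, assuming hypothetical chess-like features and weights chosen only for this illustration:

```python
# Illustrative weighted linear evaluation for a chess-like position.
# The features and weights below are assumptions chosen for the example,
# not values taken from the slides.

def weighted_linear_eval(features, weights):
    """Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s)."""
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical features: material difference, mobility difference, king safety.
features = [2, 5, -1]          # f1(s), f2(s), f3(s) for some position s
weights  = [9.0, 0.1, 0.5]     # relative importance of each feature
print(weighted_linear_eval(features, weights))   # 9*2 + 0.1*5 + 0.5*(-1) = 18.0
```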

Cutting off search. When searching in games (like chess), the search tree can get very large and deep, which makes it very slow to explore fully. So we cut off the search after a certain depth to keep the computation manageable. 1. Replace TERMINAL-TEST with a cutoff test: normally the search stops when a terminal state (end of the game) is reached; instead, we now stop when a certain cutoff condition is met (usually based on search depth): if CUTOFF-TEST(state, depth) then return EVAL(state). 2. Keep track of the depth during recursive calls: we add bookkeeping so the algorithm knows how deep it is in the search tree, incrementing the depth by 1 every time we go deeper in the recursion. Two approaches to the cutoff test: 1. Fixed depth limit: set a fixed depth d, so CUTOFF-TEST(state, depth) returns true if depth > d; this means the search stops at depth d. 2. Iterative deepening: instead of one fixed depth, search with increasing depth limits: first up to depth 1, then up to depth 2, then 3, and so on. This approach is more robust because it provides better anytime behavior and often improves move ordering for alpha-beta pruning.
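
The sketch below combines the two changes: a CUTOFF-TEST based on a fixed depth limit and an EVAL applied at the cutoff. The toy "countdown" game and its heuristic are assumptions made only so the example runs end to end.

```python
# Sketch of "cutting off search": minimax with CUTOFF-TEST (a fixed depth
# limit d) and EVAL at the cutoff. The toy game below is an illustrative
# assumption, not an example taken from the slides.

class CountdownGame:
    """Toy game: the state is an integer; each move subtracts 1 or 2; 0 is terminal."""
    def actions(self, s):   return [1, 2]
    def result(self, s, a): return max(0, s - a)
    def terminal(self, s):  return s == 0
    def eval(self, s):      return -s   # heuristic: closer to 0 looks better for MAX

def h_minimax(state, depth, game, d=3, maximizing=True):
    if game.terminal(state) or depth > d:          # CUTOFF-TEST(state, depth)
        return game.eval(state)                    # EVAL(state) replaces true utility
    values = [h_minimax(game.result(state, a), depth + 1, game, d, not maximizing)
              for a in game.actions(state)]
    return max(values) if maximizing else min(values)

print(h_minimax(10, depth=0, game=CountdownGame(), d=3))
```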

Constraint Satisfaction Problems in Artificial Intelligence. We have seen techniques such as local search and adversarial search for solving different problems. The objective of every problem-solving technique is the same: to find a solution that reaches the goal. In adversarial search and local search, however, there were no explicit constraints on the agents while solving the problems and reaching their solutions. In this section we discuss another problem-solving technique known as the constraint satisfaction technique. As the name suggests, constraint satisfaction means solving a problem under certain constraints or rules: a problem is solved when its variable values satisfy all the constraints of the problem. This type of technique leads to a deeper understanding of the problem structure as well as its complexity. Constraint satisfaction depends on three components, namely:

X: a set of variables. D: a set of domains, one specific domain for each variable. C: a set of constraints over the variables. Each constraint consists of a pair {scope, rel}: the scope is a tuple of the variables that participate in the constraint, and rel is a relation listing the combinations of values those variables may take to satisfy the constraint. The requirements for solving a constraint satisfaction problem (CSP) are: a state space and the notion of a solution. A state in the state space is defined by an assignment of values to some or all of the variables, such as {X1 = v1, X2 = v2, and so on}.
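
As an illustration of the triple (X, D, C), here is a tiny map-coloring CSP written out in Python; the region names and the "values must differ" relation are assumptions chosen for this example.

```python
# A tiny CSP written out as the triple (X, D, C) described above, using
# map coloring of three regions as an illustration (names are assumptions).

variables = ["WA", "NT", "SA"]                               # X: set of variables
domains = {v: {"red", "green", "blue"} for v in variables}   # D: one domain per variable
# C: each constraint is a pair (scope, rel); here rel is "the two values differ".
constraints = [(("WA", "NT"), lambda a, b: a != b),
               (("WA", "SA"), lambda a, b: a != b),
               (("NT", "SA"), lambda a, b: a != b)]

def consistent(assignment):
    """A (partial) assignment is consistent if it violates no constraint."""
    for (x, y), rel in constraints:
        if x in assignment and y in assignment and not rel(assignment[x], assignment[y]):
            return False
    return True

print(consistent({"WA": "red", "NT": "green"}))   # True  (partial, legal)
print(consistent({"WA": "red", "NT": "red"}))     # False (violates WA != NT)
```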

An assignment of values to a variable can be done in three ways: Consistent or Legal Assignment: An assignment which does not violate any constraint or rule is called Consistent or legal assignment. Complete Assignment: An assignment where every variable is assigned with a value, and the solution to the CSP remains consistent. Such assignment is known as Complete assignment. Partial Assignment: An assignment which assigns values to some of the variables only. Such types of assignments are called Partial assignments.

1. Consistent or Legal Assignment. In the context of Constraint Satisfaction Problems (CSPs), a consistent or legal assignment is one in which all constraints are satisfied. Let's look at a classic CSP example to illustrate this. Example: the 4-Queens problem. Problem statement: place 4 queens on a 4x4 chessboard such that no two queens can attack each other; that is, no two queens may share a row, column, or diagonal. Constraints: no two queens can be in the same row; no two queens can be in the same column; no two queens can be on the same diagonal.

2. Complete Assignment. In the context of Constraint Satisfaction Problems (CSPs), a complete assignment is one in which every variable has been assigned a value and the assignment satisfies all the constraints of the problem. Example: Sudoku. Problem statement: Sudoku is a popular puzzle in which the goal is to fill a 9x9 grid with digits from 1 to 9 so that each digit appears exactly once in each row, column, and 3x3 subgrid (also known as a "region"). Constraints: each digit from 1 to 9 must appear exactly once in each row; each digit from 1 to 9 must appear exactly once in each column; each digit from 1 to 9 must appear exactly once in each 3x3 subgrid.

3. Partial Assignment Problem Statement: Color the nodes of a graph using a limited number of colors such that no two adjacent nodes share the same color.

Types of Domains in CSP. Two kinds of domains are used by the variables. Discrete (finite) domain: a finite set of values for a variable. Example: chess or tic-tac-toe, where each board position is one of finitely many discrete states, or Boolean variables with states such as "A is true, B is false", "A is false, B is true", etc.; many different states can be formed by combining variable values. Continuous (infinite) domain: a domain with infinitely many values, typically a continuous range, for a specific variable. Examples: a temperature between 0°C and 100°C, the position of a robot arm between 0 and 180 degrees, or a speed from 0 to 200 km/h.

Constraint Types in CSP. With respect to the number of variables involved, there are the following basic types of constraints. Unary constraints: the simplest type, restricting the value of a single variable. Binary constraints: constraints that relate two variables, for example X1 ≠ X2. Global constraints: constraints involving an arbitrary number of variables; some special solution algorithms are used for them. Other kinds of constraints include: Linear constraints: commonly used in linear programming, where each variable appears only in linear form. Non-linear constraints: used in nonlinear programming, where variables may appear in non-linear form. Note: a special kind of constraint that arises in real-world problems is the preference constraint.

2.7: CSP Problems. Constraint satisfaction covers problems that contain constraints that must hold while solving the problem. CSP examples include the following. Graph coloring: the constraint is that no two adjacent nodes may have the same color. Sudoku: the constraint is that no digit from 1 to 9 may be repeated in the same row, column, or 3x3 subgrid.

n-queens problem: the constraint is that no two queens may attack each other, i.e., no two queens may share the same row, column, or diagonal. Crossword: the constraint is that the words must be formed correctly and must be meaningful.

Cryptarithmetic problem: the most important constraint here is that each letter must be assigned a unique digit; no two letters may be given the same digit, and every occurrence of a letter stands for the same digit.

2.8: Constraint Propagation Constraint Propagation is a technique used in Constraint Satisfaction Problems (CSPs) to reduce the search space by systematically enforcing constraints locally and propagating their effects to narrow down the possible values of variables. This technique aims to maintain consistency among variables and their domains by eliminating values that cannot lead to a valid solution. The core idea is to iteratively apply constraints to variables, removing inconsistent values from their domains. When a value is removed from a variable's domain, it can trigger further reductions in the domains of other variables connected by constraints, leading to a ripple effect of domain pruning. This process continues until no more values can be removed, or a contradiction is detected (indicating no solution exists).

Common types of consistency enforced through constraint propagation include: Node consistency: a single variable is node consistent if every value in its domain satisfies the variable's unary constraints. Arc consistency: a variable is arc consistent with respect to another variable if, for every value in its domain, there is some value in the other variable's domain that satisfies the binary constraint between them. Path consistency: a pair of variables is path consistent with respect to a third variable if every consistent assignment to the pair can be extended to the third variable so that all binary constraints are satisfied; it can be seen as a generalization of arc consistency. k-consistency: defines stronger and stronger forms of propagation (a CSP is k-consistent if any consistent assignment to k−1 variables can be extended to any k-th variable).
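
Arc consistency is usually enforced with the AC-3 procedure; the sketch below is a minimal version, with small made-up domains and a "not equal" constraint assumed purely for illustration.

```python
from collections import deque

# Minimal AC-3 sketch to illustrate constraint propagation via arc
# consistency. Domains and the "not equal" relation are assumptions
# chosen for a small, self-contained example.

def revise(domains, xi, xj, rel):
    """Remove values of xi that have no supporting value in xj's domain."""
    revised = False
    for v in set(domains[xi]):
        if not any(rel(v, w) for w in domains[xj]):
            domains[xi].discard(v)
            revised = True
    return revised

def ac3(domains, arcs, rel):
    queue = deque(arcs)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, rel):
            if not domains[xi]:
                return False                     # a domain became empty: no solution
            # Re-check every arc pointing at xi (the ripple effect).
            queue.extend((xk, xi) for (xk, xl) in arcs if xl == xi and xk != xj)
    return True

domains = {"X": {1, 2, 3}, "Y": {3}}
arcs = [("X", "Y"), ("Y", "X")]
print(ac3(domains, arcs, lambda a, b: a != b), domains)   # X loses the value 3
```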

2.9:Backtracking Search Backtracking Search is a depth-first search algorithm that incrementally builds candidates for the solutions, abandoning a candidate (backtracks) as soon as it determines that the candidate cannot possibly be completed to a valid solution. Steps in Backtracking Initialization : Start with an empty assignment. Selection : Choose an unassigned variable. Assignment : Assign a value to the chosen variable. Consistency Check : Check if the current assignment is consistent with the constraints. Recursion : If the assignment is consistent, recursively try to assign values to the remaining variables. Backtrack : If the assignment is not consistent, or if further assignments do not lead to a solution, undo the last assignment (backtrack) and try the next possible value .
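
A minimal sketch of these steps, applied to a small made-up graph-coloring CSP (the variable names and the pairwise "different colors" constraint are assumptions for this demo):

```python
# Minimal backtracking search following the steps above: select a variable,
# try a value, check consistency, recurse, and undo the assignment on failure.

def backtracking_search(variables, domains, consistent, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):                     # complete assignment
        return assignment
    var = next(v for v in variables if v not in assignment)   # selection
    for value in domains[var]:                                # try each value
        assignment[var] = value                               # assignment
        if consistent(assignment):                            # consistency check
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                                   # backtrack
    return None                                               # no value worked

# Example: 3-node graph coloring where every pair of nodes must differ.
variables = ["N1", "N2", "N3"]
domains = {v: ["Red", "Green", "Blue"] for v in variables}
def consistent(a):
    pairs = [("N1", "N2"), ("N1", "N3"), ("N2", "N3")]
    return all(a[x] != a[y] for x, y in pairs if x in a and y in a)

print(backtracking_search(variables, domains, consistent))
```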

Backtracking Search for CSP

Example: color four nodes with {Red, Green, Blue}, where every pair of nodes is connected (a complete graph on 4 nodes). Step 1 → Assign Node 1: choose Red; no conflict yet. Step 2 → Assign Node 2: Node 2 is adjacent to Node 1, so it must differ from Red; possible colors = {Green, Blue}; suppose we pick Green. Step 3 → Assign Node 3: Node 3 is adjacent to Nodes 1 and 2, so it must differ from Red and Green; only Blue remains, so we pick Blue. Step 4 → Assign Node 4: Node 4 is adjacent to Nodes 1, 2, and 3, so it must differ from Red, Green, and Blue; no valid color is left → failure. Backtracking: go back to Node 3 and try a different color, but Blue was its only legal value, so backtrack further to Node 2 and assign Blue instead; then Node 3 must differ from Red and Blue, leaving only Green, and Node 4 is again blocked by {Red, Blue, Green}. Backtracking continues, but whatever three distinct colors Nodes 1-3 receive, Node 4 is adjacent to all of them and has no color left. Conclusion: this graph is a complete graph on 4 nodes, so it needs 4 different colors; with only 3 colors {Red, Green, Blue}, backtracking exhausts all assignments and correctly reports that no solution exists.

2.10: Local search algorithms. Local search algorithms are important in artificial intelligence because they can quickly find good answers, especially when finding the perfect solution would take too long or too much effort. They are useful for big or complex problems where checking every possible option isn't practical. Local search focuses only on the current solution and the ones directly related to it, rather than looking everywhere, which makes it ideal for real-world tasks like puzzles, timetabling or route finding. Working of local search algorithms: Step 1, pick a starting point: start with a possible solution, often random but sometimes based on a rule. Step 2, find the neighbors: neighbors are similar solutions obtained by making small, simple changes to the current one; for example, in a puzzle, swapping two pieces creates a neighbor. Step 3, compare: look at all neighbors to see if any are better. Step 4, move: if a better neighbor exists, move to it, making it the new "current" solution. Step 5, repeat: keep searching from the new point, following the same steps. Step 6, stop: when none of the neighbors are better, or after enough tries. Types of local search algorithms: 1. Hill-climbing search, 2. Simulated annealing, 3. Genetic algorithms, 4. Tabu search.
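
For CSPs, a standard local search of this kind is the min-conflicts heuristic; the sketch below is a minimal illustration on a small assumed graph (the graph, colors, and iteration limit are assumptions, not material from the slides).

```python
import random

# Minimal min-conflicts style local search for a CSP (graph coloring).
# Step numbers refer to the working steps described above.

def conflicts(assignment, var, value, neighbors):
    return sum(1 for n in neighbors[var] if assignment.get(n) == value)

def min_conflicts(variables, colors, neighbors, max_steps=1000):
    # Step 1: start from a random complete assignment.
    assignment = {v: random.choice(colors) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(assignment, v, assignment[v], neighbors) > 0]
        if not conflicted:                      # Step 6: stop when no conflicts remain
            return assignment
        var = random.choice(conflicted)         # Steps 2-3: look at a conflicted variable
        # Step 4: move to the neighboring assignment with the fewest conflicts.
        assignment[var] = min(colors, key=lambda c: conflicts(assignment, var, c, neighbors))
    return None                                 # gave up after max_steps

neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}   # a triangle
print(min_conflicts(list(neighbors), ["Red", "Green", "Blue"], neighbors))
```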

Applications of Local Search Scheduling:  Creating timetables for schools, jobs or exams to avoid conflicts and balance workloads. Routing:  Finding efficient paths for deliveries or travel, like in the Traveling Salesperson Problem. Resource allocation:  Assigning limited resources (like machines, rooms or staff) without clashes. Games and AI:  Making fast decisions or moves in complex games. Tuning models:  Adjusting settings in machine learning to get better results.

2.11:Knowledge-Based Agent in Artificial intelligence In Artificial Intelligence, Knowledge-Based Agents(KBA) use stored information and reasoning techniques to make intelligent decisions and solve problems. They are designed to do complex tasks effectively by combining logic and organized information. Knowledge based agents depend on a  knowledge base  which stores facts, rules, and information about the world. They use an inference system for reasoning and draw conclusions from the knowledge base. KBAs can adapt to new situations by adding or updating their knowledge. KBAs are commonly used in problem-solving, decision-making, planning tasks and in domains like healthcare, education, and legal systems. KBAs improve the scalability and accuracy of decision-making by handling large amounts of  structured data .

A knowledge-based agent must be able to do the following: represent states, actions, and other aspects of the world; incorporate new percepts; update its internal representation of the world; infer unseen properties of the world from that representation; and deduce the best course of action.

The diagram above depicts a generic architecture for a knowledge-based agent. By observing the environment, the knowledge-based agent (KBA) receives input from it. The input is taken by the agent's inference engine, which communicates with the knowledge base (KB) to make decisions based on the knowledge stored in the KB. The KBA's learning component keeps the KB up to date by learning new information.

Knowledge base is the critical component of the Knowledge-Based Agent, which includes all the information, rules, and facts necessary for reasoning and decision. Facts  are information about the world. For example, "The Earth orbits the Sun once every 365.25 days". Rules  are logical statements that relate facts. For example, "If the temperature drops below 0C, water will freeze into ice". Updates : The knowledge base can be created or updated during the process in which an agent learns or introduces new knowledge so that over a period, its decision-making skill is increased . Operations Performed by KBA TELL:  The agent updates its knowledge base with new information from the environment. For example, if a robot sees a door open, it tells the knowledge base, "The door is open." ASK:  The agent asks the knowledge base what decision it should take. For example, it may ask, "Should I close the door or move forward?". PERFORM:  The agent will follow the guidance of the knowledge base, such as closing the door or moving forward. It performs the selected option.

  Why use a knowledge base? For an agent to learn from experiences and take action based on the knowledge, a knowledge base is required. Inference system Inference is the process of creating new sentences from existing ones. We can add a new sentence to the knowledge base using the inference mechanism. A proposition about the world is a sentence. The inference system uses logical rules to deduce new information from the KB. The inference system generates new facts for an agent to update the knowledge base. An inference system is based on two rules, which are as follows: Forward chaining Backward chaining

2.12:The Wumpus World in Artificial intelligence(Propositional Logic) Wumpus World is a 4x4 grid consisting of  16 rooms . Agent starts at Room[1,1] facing right and its goal is to retrieve treasure while avoiding hazards such as pits and the Wumpus. Agent must navigate through grid using its limited sensory input to make decisions that will keep it safe and allow it to successfully collect the treasure and exit the cave. Key Elements: Pits : If the agent steps into a pit it falls and dies. A  breeze  in adjacent rooms suggests nearby pits. Wumpus : A creature that kills agent if it enters its room. Rooms next to the Wumpus have a  stench . Agent can use an  arrow  to kill the Wumpus. Treasure : Agent’s main objective is to  collect the treasure  (gold) which is located in one room. Breeze : Indicates a pit is nearby. Stench : Indicates the Wumpus is nearby.

PEAS Description PEAS stands for  Performance Measures ,  Environment ,  Actuators  and  Sensors  which describe agent’s capabilities and environment. 1. Performance measures:  Rewards or Punishments Agent gets gold and return back safe =  +1000 points Agent dies (pit or Wumpus)=  -1000 points Each move of the agent =  -1 point Agent uses the arrow =  -10 points 2. Environment:  A setting where everything will take place. A cave with  16(4x4)  rooms. Rooms adjacent (not diagonally) to the Wumpus are stinking. Rooms adjacent (not diagonally) to the pit are breezy. Room with gold glitters. Agent's initial position - Room[1, 1] and facing right side. Location of Wumpus, gold and 3   pits can be anywhere except in  Room[1, 1] . 3. Actuators:  Devices that allow agent to perform following actions in the environment. Move forward: Move to next room. Turn right/left: Rotate agent 90 degrees. Shoot: Kill Wumpus with arrow. Grab: Take treasure. Release: Drop treasure 4. Sensors:  Devices help the agent in sensing following from the environment. Breeze: Detected near a pit. Stench: Detected near the Wumpus. Glitter: Detected when treasure is in the room. Scream: Triggered when Wumpus is killed. Bump: Occurs when hitting a wall.

Agent's First step: At first, the agent is in the first room, or square [1,1], and we all know that this room is safe for the agent, thus we will add the sign OK to the above diagram (a) to represent that room is safe. The agent is represented by the letter A, the breeze by the letter B, the glitter or gold by the letter G, the visited room by the letter V, the pits by the letter P, and the Wumpus by the letter W. Agent does not detect any wind or Stench in Room [1,1], indicating that the nearby squares are similarly in good condition.

Agent's second Step: Now that the agent must go forward, it will either go to [1, 2] or [2, 1]. Let's say agent enters room [2, 1], where he detects a breeze, indicating Pit is present. Because the pit might be in [3, 1] or [2, 2], we'll add the sign P? to indicate that this is a Pit chamber. Now the agent will pause and consider his options before doing any potentially destructive actions. The agent will return to room [1, 1]. The agent visits the rooms [1,1] and [2,1], thus we'll use the symbol V to symbolize the squares he's been to. Agent's third step: The agent will now proceed to the room [1,2], which is fine. Agent detects a stink in the room [1,2], indicating the presence of a Wumpus nearby. However, according to the rules of the game, Wumpus cannot be in the room [1,1], and he also cannot be in [2,2]. (Agent had not detected any stench when he was at [2,1]). As a result, the agent infers that Wumpus is in the room [1,3], and there is no breeze at the moment, implying that there is no Pit and no Wumpus in [2,2]. So that's safe, and we'll designate it as OK, and the agent will advance [2,2]

Agent's fourth step: Because there is no odor and no breeze in room [2,2], let's assume the agent decides to move to room [2,3]. Agent detects glitter in room [2,3], thus it should collect the gold and ascend out of the cave.

2.13: Propositional logic in Artificial Intelligence. Propositional logic is used for solving complex problems using simple statements. These statements can be either true or false, but not both at the same time. Such propositions form the basis of knowledge representation, reasoning and decision-making in AI systems. Propositional operators such as conjunction (∧), disjunction (∨), negation (¬), implication (→) and biconditional (↔) combine propositions to represent logical relations. Logical connectives: AND (∧): true if both propositions are true; example: "It is sunny ∧ it is warm" is true only if both "It is sunny" and "It is warm" are true. OR (∨): true if at least one of the propositions is true; example: "It is sunny ∨ it is raining" is true if either "It is sunny" or "It is raining" is true. NOT (¬): reverses the truth value of a proposition; example: "¬It is raining" is true if "It is raining" is false. IMPLIES (→): false only when the first proposition is true and the second is false; example: "If it rains then the ground is wet" (It rains → The ground is wet) is true unless it rains and the ground is not wet. IF AND ONLY IF (↔): true if both propositions have the same truth value; example: "It is raining ↔ The ground is wet" is true if "It is raining" and "The ground is wet" are both true or both false.

Propositional Logic Connectives: Truth Table
P   Q   ¬P   P ∧ Q   P ∨ Q   P → Q   P ↔ Q
T   T   F    T       T       T       T
T   F   F    F       T       F       F
F   T   T    F       T       T       F
F   F   T    F       F       T       T

Truth Table with Three Propositions

Syntax of propositional logic: The syntax of propositional logic defines the allowable sentences for the knowledge representation. There are two types of Propositions: Atomic Propositions Compound propositions Atomic Proposition:  Atomic propositions are simple propositions. It consists of a single proposition symbol. These are the sentences which must be either true or false. Example 1: 2+2 is 4; it is an atomic proposition as it is a fact. "The Sun is cold" is also a proposition as it is a false fact. Compound proposition:  Compound propositions are constructed by combining simpler or atomic propositions, using parenthesis and logical connectives. Example 2: "It is raining today, and the street is wet." "Ankit is a doctor, and his clinic is in Mumbai."

Precedence of connectives: propositional connectives (logical operators), like arithmetic operators, have a precedence order, and this order should be followed when evaluating a propositional expression. From highest to lowest precedence: parentheses, then ¬ (negation), ∧ (conjunction), ∨ (disjunction), → (implication), and ↔ (biconditional).

Properties of Operators: Commutativity: P ∧ Q= Q ∧ P, or P ∨ Q = Q ∨ P. Associativity: (P ∧ Q) ∧ R = P ∧ (Q ∧ R), (P ∨ Q) ∨ R= P ∨ (Q ∨ R). Identity element: P ∧ True = P, P ∨ True= True. Distributive: P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R). P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R). DE Morgan's Law: ¬(P ∧ Q) = (¬P) ∨ (¬Q), ¬(P ∨ Q) = (¬ P) ∧ (¬Q). Double-negation elimination: ¬(¬P) = P.

2.14: Propositional Theorem Proving. What is theorem proving? Theorem proving is checking whether a conclusion follows from given facts and rules. In propositional logic the facts are TRUE/FALSE statements, and theorem proving is used in AI for reasoning and decision making. Methods of propositional theorem proving: 1. Truth table method, 2. Inference rules, 3. Resolution method, 4. Proof by refutation.

(a) Truth table method: list all possible truth values of the propositions and check whether the conclusion is true in every case where the premises are true. Example: premises P⇒Q and P, conclusion Q; by the truth table, whenever the premises are true the conclusion is true, so the argument is valid. (b) Inference rules: use standard reasoning rules to derive new facts. Modus Ponens: from P⇒Q and P, infer Q. Modus Tollens: from P⇒Q and ¬Q, infer ¬P. And-Elimination: from (P∧Q), infer P. Or-Introduction: from P, infer (P∨Q). These are used in forward or backward chaining. (c) Resolution method (the most important in AI): convert all sentences into Conjunctive Normal Form (CNF) and apply the resolution rule to derive a contradiction; if a contradiction is found, the theorem is proved. Resolution rule: (A∨P), (¬P∨B) ⊢ (A∨B). Example: from P∨Q and ¬P∨R, the resolvent is Q∨R. Resolution is used in automated theorem proving and logic programming. (d) Proof by refutation (indirect proof): to prove a statement X, add ¬X to the knowledge base and use resolution; if a contradiction occurs, X is true.
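
A tiny sketch of the truth-table method applied to the Modus Ponens example above; the helper function and the enumeration loop are illustrative assumptions, not a general prover.

```python
from itertools import product

# Truth-table check of the argument "P => Q, P, therefore Q".
# Enumerate every assignment; the argument is valid if Q is true whenever
# both premises are true.

def implies(a, b):
    return (not a) or b

valid = True
for P, Q in product([True, False], repeat=2):
    premises_hold = implies(P, Q) and P
    if premises_hold and not Q:
        valid = False
print("Argument valid:", valid)   # True: Modus Ponens is a valid inference
```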

Inference and Proofs in Propositional Logic. What is inference? Inference is the process of deriving new facts (conclusions) from known facts (premises) using logical rules. Example: premise P⇒Q (if it rains, the ground is wet); premise P (it is raining); conclusion Q (the ground is wet). Here we inferred Q from the given knowledge. Types of inference: (a) Deductive inference: from general rules to a specific conclusion; always valid if the rules are true. Example: all humans are mortal, Socrates is human, therefore Socrates is mortal. (b) Inductive inference: from specific examples to a general rule; not guaranteed to be true (an approximation). Example: the sun rose yesterday and today, so it always rises. (c) Abductive inference: from effects, guess the cause. Example: the ground is wet, so maybe it rained. What is a proof? A proof is a sequence of logical steps showing that a conclusion follows from the premises. In propositional logic, proofs can be: truth table proofs (check all possible truth values), inference rule proofs (use rules like Modus Ponens), or resolution proofs (a mechanical method using CNF and contradiction).

Proof by Resolution (Propositional Logic) What is Resolution? A rule of inference used for theorem proving in propositional and predicate logic. Works by refutation: prove a statement by showing its negation leads to contradiction . Very useful in AI (automated reasoning, expert systems, Wumpus World). Resolution Rule From two clauses: (A∨P),(¬P∨B)⊢(A∨B) Here, P and ¬P cancel each other out. The result (A∨B) is called the resolvent .

Steps of Proof by Resolution: 1. Express the knowledge in propositional logic. 2. Convert all statements to CNF (Conjunctive Normal Form); CNF = an AND of ORs, e.g., (P∨Q) ∧ (¬R∨S). 3. Negate the goal (what you want to prove) and add it to the KB. 4. Apply the resolution rule repeatedly. 5. If you get an empty clause [ ], a contradiction has been found and the goal is proved. Example problem: prove R from (P⇒Q), (Q⇒R), P. Step 1: convert to CNF: P⇒Q ≡ ¬P∨Q; Q⇒R ≡ ¬Q∨R; P; negation of the conclusion: ¬R. So KB = {¬P∨Q, ¬Q∨R, P, ¬R}. Step 2: apply resolution: from (¬P∨Q) and P, derive Q; from (¬Q∨R) and ¬R, derive ¬Q; from Q and ¬Q, derive the empty clause [ ]. A contradiction is reached, therefore R is proved. Proof by resolution = convert to CNF + add the negated goal + resolve until contradiction.
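
The same refutation can be automated. The sketch below is a minimal resolution loop; the clause encoding (strings, with "-" marking negation) is an assumption made only for this illustration.

```python
from itertools import combinations

# Tiny propositional resolution sketch. Clauses are frozensets of literals;
# a literal is a string, with "-" marking negation ("-P" means not P).

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("-") else "-" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def resolution_refutation(clauses):
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:                 # empty clause: contradiction found
                    return True
                new.add(frozenset(r))
        if new.issubset(clauses):         # nothing new: goal not provable
            return False
        clauses |= new

# KB from the worked example, with the negated goal (not R) already added:
kb = [{"-P", "Q"}, {"-Q", "R"}, {"P"}, {"-R"}]
print(resolution_refutation(kb))          # True: R follows from the KB
```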

Inference Rules in Propositional Logic

2.15: Inference: Proving a Theorem. Knowledge base for the Wumpus world. Atomic proposition variables for the Wumpus world:

Let Pi,j be true if there is a pit in room [i, j]. Let Bi,j be true if the agent perceives a breeze in [i, j]. Let Wi,j be true if there is a Wumpus in square [i, j] (dead or alive). Let Si,j be true if the agent perceives a stench in square [i, j]. Let Vi,j be true if square [i, j] has been visited. Let Gi,j be true if there is gold (and glitter) in square [i, j]. Let OKi,j be true if room [i, j] is safe. Some propositional rules for the Wumpus world (those used in the proof below): R1: ¬S11 → (¬W11 ∧ ¬W12 ∧ ¬W21); R2: ¬S21 → (¬W21 ∧ ¬W22 ∧ ¬W31); R4: S12 → (W13 ∨ W12 ∨ W22 ∨ W11).

Prove that the Wumpus is in room [1, 3]. We can prove this using the propositional rules derived for the Wumpus world together with the inference rules. Apply Modus Ponens with ¬S11 and R1: we first apply Modus Ponens to R1, which is ¬S11 → (¬W11 ∧ ¬W12 ∧ ¬W21), and ¬S11, which gives the output ¬W11 ∧ ¬W12 ∧ ¬W21. Apply the And-Elimination rule: applying And-Elimination to ¬W11 ∧ ¬W12 ∧ ¬W21, we get three statements: ¬W11, ¬W12, and ¬W21.

Apply Modus Ponens to ¬S21 and R2: now we apply Modus Ponens to ¬S21 and R2, which is ¬S21 → (¬W21 ∧ ¬W22 ∧ ¬W31), giving the output ¬W21 ∧ ¬W22 ∧ ¬W31. Apply the And-Elimination rule: applying And-Elimination to ¬W21 ∧ ¬W22 ∧ ¬W31, we get three statements: ¬W21, ¬W22, and ¬W31.

Apply Modus Ponens to S12 and R4: applying Modus Ponens to S12 and R4, which is S12 → (W13 ∨ W12 ∨ W22 ∨ W11), we get the output W13 ∨ W12 ∨ W22 ∨ W11. Apply unit resolution on W13 ∨ W12 ∨ W22 ∨ W11 and ¬W11: after applying the unit resolution rule to W13 ∨ W12 ∨ W22 ∨ W11 and ¬W11, we get W13 ∨ W12 ∨ W22.

Apply unit resolution on W13 ∨ W12 ∨ W22 and ¬W22: after applying unit resolution to W13 ∨ W12 ∨ W22 and ¬W22, we get W13 ∨ W12. Apply unit resolution on W13 ∨ W12 and ¬W12: after applying unit resolution to W13 ∨ W12 and ¬W12, we get W13 as the output; hence it is proved that the Wumpus is in room [1, 3].

2.16: Forward Chaining and Backward Chaining in AI. In artificial intelligence, forward and backward chaining are important topics; before understanding them, let us first see where these two terms come from. Inference engine: the inference engine is the component of an intelligent system in artificial intelligence that applies logical rules to the knowledge base to infer new information from known facts. The first inference engines were part of expert systems. An inference engine commonly proceeds in two modes: forward chaining and backward chaining.

Forward Chaining and backward chaining In AI Both are reasoning methods used with rules (if–then statements) in knowledge-based systems (like Expert Systems). Forward Chaining (Data-driven reasoning): Starts with facts (known data). Applies rules to move forward and find new facts. Continues until the goal is reached (or no new facts can be inferred). Steps: Start with given facts. Match facts with IF part of rules. Apply rules → generate new facts. Repeat until goal is found. Example: Rules: If it rains → ground is wet. If ground is wet → plants grow. Fact: It rains. Forward chaining: It rains → ground is wet. Ground is wet → plants grow. Goal achieved (plants grow).
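
A minimal sketch of forward chaining on the rain example above; the rule representation (a list of premises plus a conclusion) is an assumption chosen for this illustration.

```python
# Forward chaining sketch over simple if-then rules, using the rain example.
# A rule is (list_of_premises, conclusion).

def forward_chaining(rules, facts, goal):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # If every premise is already known and the conclusion is new, add it.
            if all(p in facts for p in premises) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
                if conclusion == goal:
                    return True
    return goal in facts

rules = [(["it rains"], "ground is wet"),
         (["ground is wet"], "plants grow")]
print(forward_chaining(rules, ["it rains"], "plants grow"))   # True
```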

2. Backward Chaining (Goal-driven reasoning) Starts with a goal (hypothesis). Works backward to check if facts support it. Tries to prove the goal by breaking it into sub-goals. Steps: Start with goal. Look for rules whose THEN part matches goal. Check if IF part is true (or prove it as a sub-goal). Continue until facts match. Example: Goal: Do plants grow? Rule: If ground is wet → plants grow. Need to prove: ground is wet. Rule: If it rains → ground is wet. Fact: It rains. Goal proved (plants grow)
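
A matching backward-chaining sketch for the same toy rules (again, the representation is an illustrative assumption; no loop detection is included, which is fine for this acyclic rule set):

```python
# Backward chaining sketch: try to prove the goal by proving the premises
# of some rule whose THEN part matches it.

def backward_chaining(rules, facts, goal):
    if goal in facts:                      # the goal is a known fact
        return True
    for premises, conclusion in rules:
        if conclusion == goal:             # a rule whose THEN part matches the goal
            # Prove every premise as a sub-goal.
            if all(backward_chaining(rules, facts, p) for p in premises):
                return True
    return False

rules = [(["it rains"], "ground is wet"),
         (["ground is wet"], "plants grow")]
print(backward_chaining(rules, ["it rains"], "plants grow"))   # True
```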

2.17:Horn Clauses and Definite Clauses in AI Horn Clauses:  Horn clauses are a type of logical formula used in knowledge representation and reasoning. They are named after the logician Alfred Horn. A Horn clause is a disjunction of literals with at most one positive literal. In other words, it can have multiple negative literals (negated atoms) and at most one positive literal (non-negated atom). Horn clauses are widely used in logic programming and are fundamental in the implementation of Prolog. Definite Clauses:  Definite clauses are a specific type of Horn clause that has exactly one positive literal. This means that a definite clause is a disjunction of literals with exactly one positive literal. Definite clauses are often used in the context of logic programming and are a key component of Prolog programs. They are also known as positive clauses. NOTE: all definite clauses are Horn clauses, but not all Horn clauses are definite clauses.

Horn clauses and definite clauses

1.CNF Sentence (Conjunctive Normal Form Sentence): Definition : A CNF sentence is a logical formula that is conjunctive in nature. This means it is a conjunction (AND) of several clauses . Each clause is a disjunction (OR) of literals . Grammar Rule : CNFSentence → Clause₁ ∧ Clause₂ ∧ ... ∧ Clauseₙ This means that a CNF sentence is formed by combining multiple clauses using the logical AND operator (∧). Each clause, in turn, will be made of literals connected by the OR operator (∨). 2. Clause: Definition : A clause is a disjunction (OR) of one or more literals . The disjunction means that at least one of the literals in the clause must be true for the clause to be true. Grammar Rule: Clause → Literal₁ ∨ Literal₂ ∨ ... ∨ Literalₘ This means that a clause is a sequence of literals connected by the logical OR (∨). If any one of the literals is true, the entire clause becomes true. 3. Literal: Definition : A literal is either a symbol or the negation of a symbol. A symbol is essentially a basic unit or a variable, while the negation (¬) signifies the logical NOT operation. Grammar Rule : Literal → Symbol | ¬Symbol So, a literal can be either a symbol like P, Q, R , or it can be the negation of a symbol like ¬P, ¬Q , etc.

4. Symbol: Definition: a symbol represents a propositional variable or an atomic proposition in the logical formula; these are the building blocks of logic expressions. Grammar rule: Symbol → P | Q | R | ... The symbols P, Q, R, etc., represent basic true/false propositions; they are the fundamental entities of propositional logic. 5. Horn Clause Form: Definition: a Horn clause is a special type of clause with a specific structure: it has at most one positive literal (a symbol without negation). This type of clause is very important in logic programming and knowledge representation. Grammar rule: HornClauseForm → DefiniteClauseForm | GoalClauseForm. So a Horn clause is either a definite clause or a goal clause. 6. Definite Clause Form: Definition: a definite clause is a Horn clause with exactly one positive literal and zero or more negative literals; written as an implication, it says that if all the premise symbols are true, then the conclusion symbol must be true. Grammar rule: DefiniteClauseForm → (Symbol₁ ∧ Symbol₂ ∧ ... ∧ Symbolᵢ) ⇒ Symbol

This means that a definite clause has the form: if Symbol₁ ∧ Symbol₂ ∧ ... ∧ Symbolᵢ are all true (these premises appear as negative literals in clause form), then Symbol (the single positive literal) must also be true. Example: a definite clause might look like (A ∧ B) ⇒ C, which in clause form is ¬A ∨ ¬B ∨ C; it reads as "if A and B are both true, then C must be true." This form is a Horn clause, and it is used to express logical implications. 7. Goal Clause Form: Definition: a goal clause is another special type of Horn clause; instead of concluding a positive literal like the definite clause, it concludes False. Grammar rule: GoalClauseForm → (Symbol₁ ∧ Symbol₂ ∧ ... ∧ Symbolᵢ) ⇒ False. This is used to represent a contradiction or an undesirable outcome: it means that if certain conditions (literals) are all true, this leads to False, a logical contradiction. Example: a goal clause might look like (A ∧ B) ⇒ False, which reads as "if A and B are both true, then we reach False, i.e., an inconsistent situation."

2.18:Effective propositional model checking Effective Propositional Model Checking in AI Propositional model checking is a technique used in artificial intelligence (AI) to verify whether a given model satisfies a certain property. Here's a detailed explanation of effective propositional model checking: Propositional Logic : Propositional model checking operates on propositional logic, which deals with propositions or statements that can be either true or false. Model Checking : Model checking involves verifying whether a given model (usually represented as a transition system) satisfies a particular property specified in temporal logic. Effective Propositional Model Checking : Effective propositional model checking refers to the efficient and practical application of propositional model checking techniques. It involves optimizing the model checking process to handle large-scale systems and complex properties effectively.

Techniques : Effective propositional model checking often involves the use of data structures like Binary Decision Diagrams (BDDs) to represent and manipulate the state space efficiently. Symbolic model checking techniques, which operate on sets of states symbolically rather than explicitly enumerating each state, are also commonly used for efficiency. Applications : Effective propositional model checking is widely used in the verification of hardware and software systems, concurrent systems, communication protocols, and more. It helps in identifying design errors, ensuring system correctness, and validating critical properties. NOTE: effective propositional model checking in AI involves the efficient application of propositional model checking techniques to verify the satisfaction of properties in complex systems, contributing to the reliability and correctness of AI systems.

Effective Algorithms. Davis-Putnam-Logemann-Loveland (DPLL) algorithm: a backtracking-based algorithm for SAT (Boolean satisfiability) solving that incorporates improvements such as early termination (detecting whether a sentence must be true or false even with a partial assignment) and the pure symbol heuristic. Local search methods: an alternative to backtracking, these algorithms use a "hill-climbing" approach to find a model satisfying the given formula.

1. A complete backtracking algorithm: the Davis-Putnam-Logemann-Loveland (DPLL) algorithm:
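
A minimal DPLL-style sketch is shown below; it keeps only unit propagation and plain branching, so it is an illustrative simplification of the full algorithm (the pure-symbol heuristic is omitted, and the clause encoding and example sentence are assumptions).

```python
# Minimal DPLL sketch. Clauses are lists of literals; "-P" means not P.

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(lit.lstrip("-")) == (not lit.startswith("-")) for lit in clause):
            continue                                  # clause already satisfied
        remaining = [lit for lit in clause if lit.lstrip("-") not in assignment]
        if not remaining:
            return None                               # clause falsified: backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment                             # every clause satisfied
    # Early termination via unit clauses, otherwise branch on some symbol.
    unit = next((c[0] for c in simplified if len(c) == 1), None)
    lit = unit if unit is not None else simplified[0][0]
    symbol, value = lit.lstrip("-"), not lit.startswith("-")
    for v in ([value] if unit is not None else [value, not value]):
        result = dpll(simplified, {**assignment, symbol: v})
        if result is not None:
            return result
    return None

# (P or Q) and (not P or R) and (not R)
print(dpll([["P", "Q"], ["-P", "R"], ["-R"]]))
```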

Local search algorithms. Local search algorithms can be applied directly to the SAT problem, provided that we choose the right evaluation function (for instance, one that counts the number of unsatisfied clauses). These algorithms take steps in the space of complete assignments, flipping the truth value of one symbol at a time. The space usually contains many local minima, and various forms of randomness are required to escape them. Local search methods such as WALKSAT can be used to find solutions; such algorithms are sound but not complete. WALKSAT is one of the simplest and most effective of these algorithms.

On every iteration, the algorithm picks an unsatisfied clause and chooses randomly between two ways of picking a symbol to flip: either (a) a "min-conflicts" step that minimizes the number of unsatisfied clauses in the new state, or (b) a "random walk" step that picks the symbol randomly. When the algorithm returns a model, the input sentence is indeed satisfiable; when the algorithm returns failure, there are two possible causes: either (a) the sentence is unsatisfiable, or (b) we need to give the algorithm more time. If we set max_flips = ∞ and p > 0, the algorithm will either eventually return a model, if one exists, or never terminate if the sentence is unsatisfiable. Thus WALKSAT is useful when we expect a solution to exist, but it cannot always detect unsatisfiability.
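
A minimal WALKSAT sketch following this description; the clause encoding, the default probability p, and the flip limit are assumptions made for this illustration.

```python
import random

# Minimal WALKSAT sketch: on each flip, pick an unsatisfied clause, then
# either flip a random symbol from it (probability p) or flip the symbol
# that minimizes the number of unsatisfied clauses. "-P" means not P.

def satisfied(clause, model):
    return any(model[lit.lstrip("-")] == (not lit.startswith("-")) for lit in clause)

def walksat(clauses, p=0.5, max_flips=10000):
    symbols = {lit.lstrip("-") for clause in clauses for lit in clause}
    model = {s: random.choice([True, False]) for s in symbols}
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not satisfied(c, model)]
        if not unsatisfied:
            return model                               # a satisfying model was found
        clause = random.choice(unsatisfied)
        if random.random() < p:                        # random-walk step
            symbol = random.choice(clause).lstrip("-")
        else:                                          # min-conflicts step
            def cost(sym):
                model[sym] = not model[sym]
                bad = sum(not satisfied(c, model) for c in clauses)
                model[sym] = not model[sym]
                return bad
            symbol = min((lit.lstrip("-") for lit in clause), key=cost)
        model[symbol] = not model[symbol]
    return None                                        # failure: ran out of flips

print(walksat([["P", "Q"], ["-P", "R"], ["-R"]]))
```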

2.19:Agent based on propositional logic An AI agent based on propositional logic operates using a knowledge base (KB) of true/false statements about its environment, which it uses to make inferences and decisions. It takes new perceptions, updates its KB with the new information and facts, and uses inference rules like Modus Ponens to deduce new facts, allowing it to make intelligent, logical decisions and plans to solve problems. This is exemplified by agents in complex environments like the  Wumpus World , where the agent uses sensory information to infer the location of hazards and the goal . How it Works 1. Knowledge Base (KB): The agent starts with an empty KB, but as it perceives its environment, it adds facts about the world to this KB.  2. Percepts & Knowledge: The agent receives percepts (sensory inputs) from its environment and adds these to its knowledge base as logical sentences.  3. Logical Inference: Using  logical inference rules , the agent derives new, unstated facts from the existing knowledge in the KB. For example, if the KB contains axioms like "If there's a stench, there's a Wumpus " and the agent perceives a stench, it can infer the Wumpus is nearby.  4. Action & Planning: Based on the deduced facts, the agent makes intelligent decisions and creates plans of action to achieve its goals. 

Key Components Propositions: Basic statements that are either true or false.  Logical Operators: Symbols like AND (∧), OR (∨), NOT (¬), and IMPLIES (→) are used to combine propositions into more complex statements.  Axioms : Statements in the KB that are taken as given, such as the rules of the environment.  Inference Rules: Logical procedures, like  Modus Ponens , used to derive new conclusions from existing facts.  Example: Wumpus World Goal:  An agent in the Wumpus World needs to find the gold.  Percepts:  The agent perceives "stench," "breeze," and "glitter".  Inference:  By applying propositional logic and knowing the rules (e.g., a breeze indicates a pit), the agent can infer the location of the Wumpus and pits.  Decision:  This logical deduction allows the agent to navigate safely and retrieve the gold. 

A Simple Knowledge-Based Agent in Propositional Logic. We want to represent what we know about the Wumpus world in a simple KB using propositional logic. Proposition symbols: let Pij be TRUE if there is a pit in location [i, j]; let Bij be TRUE if there is a breeze in location [i, j].

The KB consists of: FACT: there is no pit in [1,1]: R1: ¬P11. RULES: a square is breezy if and only if there is a pit in a neighboring square: R2: B11 ⇔ (P12 ∨ P21), R3: B21 ⇔ (P11 ∨ P22 ∨ P31). FACTS from the sensors (percepts in the visited squares): R4: ¬B11, R5: B21. The KB can be represented as the conjunction of all these sentences: R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5.

Propositional Logic: logical reasoning. The previous approach is known as model checking, also known as inference with truth tables. Its time complexity is O(2^n), where n is the number of proposition symbols, since all 2^n models must be enumerated.
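
A sketch of this truth-table model checking applied to the simple KB R1-R5 above, checking whether the KB entails "there is no pit in [1,2]"; the symbol names and helper functions are assumptions made for this illustration.

```python
from itertools import product

# Truth-table model checking (the O(2^n) approach): enumerate all models of
# the proposition symbols and check that the query holds in every model in
# which the KB holds.

SYMBOLS = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]

def iff(a, b):
    return a == b

def kb_holds(m):
    return (not m["P11"]                                          # R1: no pit in [1,1]
            and iff(m["B11"], m["P12"] or m["P21"])               # R2
            and iff(m["B21"], m["P11"] or m["P22"] or m["P31"])   # R3
            and not m["B11"]                                      # R4: percept in [1,1]
            and m["B21"])                                         # R5: percept in [2,1]

def entails(query):
    """KB |= query iff query is true in every model where the KB is true."""
    for values in product([True, False], repeat=len(SYMBOLS)):
        model = dict(zip(SYMBOLS, values))
        if kb_holds(model) and not query(model):
            return False
    return True

print(entails(lambda m: not m["P12"]))    # True: the KB entails "no pit in [1,2]"
```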