Presentation on Artificial Intelligence.

191013607gouthamsric 53 views 99 slides Jun 21, 2024

About This Presentation

A detailed presentation on Artificial Intelligence.


Slide Content

ARTIFICIAL INTELLIGENCE. A machine that could organize your cupboard just as you like, or serve every member of a house a customized cup of coffee, makes your day easier. These are products of artificial intelligence. Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.

WHY USE THE TERM ARTIFICIAL INTELLIGENCE? These machines are artificially endowed with human-like intelligence to perform tasks as we do. This intelligence is built using complex algorithms and mathematical functions.

Despite variations in lighting, landscape, and dimensions of the field, the AI robot must perform as expected. The ability to react appropriately to new situations is called generalized learning.

A robot at a crossroad faces two paths: one paved, the other rocky. The robot should determine which path to take based on the circumstances. This portrays the robot's reasoning ability.

The robot encounters a stream that it cannot swim across. Using the plank provided as an input, the robot is able to cross the stream.

AI provides machines with the capability to adapt, reason, and provide solutions. Weak AI, also called narrow AI, focuses solely on one task: AlphaGo is a maestro of the game of Go, but you can't expect it to be remotely good at chess. The AI depicted in Avengers is an ideal example of strong AI because it is self-aware and eventually develops emotions; this makes a strong AI's responses unpredictable.

AI is the ability of a machine to imitate human intelligence. Machine learning: algorithms that incorporate intelligence into machines by automatically learning from data. Deep learning: algorithms that mimic the human brain to incorporate intelligence into machines.

AI Techniques: Search, Use of Knowledge, and Abstraction. Search provides a way of solving problems for which no more direct approach is available, as well as a framework into which any direct techniques that are available can be embedded. Use of Knowledge provides a way of solving complex problems by exploiting the structures of the objects that are involved. Abstraction provides a way of separating important features and variations from the unimportant ones that would otherwise confuse any process.

Problem solving in AI: defining the problem, analyzing the problem, identifying solutions, choosing a solution, and implementation.

Water Jug Problem

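The water jug problem appears above only as figure slides. As a minimal sketch of how it can be solved by uninformed search (assuming the classic instance of a 4-gallon jug and a 3-gallon jug with a target of 2 gallons, since the original figures are not reproduced here), a breadth-first search over (a, b) states:

```python
from collections import deque

def water_jug(cap_a=4, cap_b=3, target=2):
    """BFS over (a, b) jug contents; returns the state sequence reaching `target`."""
    start = (0, 0)
    parent = {start: None}          # doubles as the visited set
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            path, s = [], (a, b)    # reconstruct the path back to the start
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        moves = [
            (cap_a, b), (a, cap_b),                              # fill a jug
            (0, b), (a, 0),                                      # empty a jug
            (a - min(a, cap_b - b), b + min(a, cap_b - b)),      # pour a -> b
            (a + min(b, cap_a - a), b - min(b, cap_a - a)),      # pour b -> a
        ]
        for m in moves:
            if m not in parent:
                parent[m] = (a, b)
                queue.append(m)
    return None
```

Because BFS expands states in order of distance from the start, the first goal state found gives a shortest sequence of fill/empty/pour operations.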

Tower of Hanoi
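A minimal recursive sketch of the Tower of Hanoi solution (the peg names A, B, C are illustrative): to move n disks, move the top n-1 to the spare peg, move the largest disk, then move the n-1 back on top of it.

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Recursively move n disks from source to target; returns the move list."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the top n-1 disks
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # stack the n-1 back on top
    return moves
```

The recursion makes 2^n - 1 moves, which is optimal for this puzzle.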

Travelling Salesman Problem

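Since the Travelling Salesman Problem is presented above only as figures, here is a hedged brute-force sketch: it enumerates every tour starting from city 0 and keeps the cheapest, which is feasible only for a handful of cities. The distance matrix below is an illustrative example, not taken from the slides.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustively search all tours starting and ending at city 0."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Illustrative symmetric distance matrix for 4 cities.
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
```

With (n-1)! tours to check, this only works for toy instances; realistic sizes need heuristics or dynamic programming.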

Broad Categorizations of AI problems. Structured problems: well-structured problems yield a definite right answer (1 + 2 = 3), while ill-structured problems do not yield a particular answer (Which economic system allows for the most human flourishing?). Unstructured problems are very hard to formulate and ambiguous in nature.

Linear problems have a clear solution; all kinds of classification problems fall here. In non-linear problems, the relationship between input and output is non-linear, so decisions cannot be taken as directly as in linear problems.

AI Models. Semiotic Models: based on sign processes (signification and communication). The code is specific: it gives meaning to each sign based on the sounds or letters that humans use to form words, or on movements.

Statistical Models: refer to the representation and formulation of relationships through statistical techniques. A statistical model employs probabilistic approaches and is typically a collection of probability density functions and distribution functions.

Data acquisition and learning aspects in AI


Problem Solving

Goal-based agent (problem-solving agent)

Problem-solving agent

Goal formulation and problem formulation


Problem types: deterministic or observable (single-state); non-observable (multiple-state); non-deterministic or partially observable; unknown state space.

Deterministic or observable (single-state) problems: each state is fully observable, and the system goes to one definite state after any action. The goal state is reachable in a single action or sequence of actions. Deterministic environments ignore uncertainty. Ex: a vacuum cleaner with a sensor.

Non-observable (multiple-state) or conformant problems: the problem-solving agent does not have any information about the state, and a solution may or may not be reached. Ex: for the vacuum cleaner, the goal is a clean floor, and the action is to suck if there is dirt. In the non-observable condition there is no sensor, so the cleaner must suck regardless of whether the dirt is to its right or left. Here, the solution space is the set of states specifying its movement across the floor.

Non-deterministic (partially observable) problems: the effect of an action is not clear, and percepts provide new information about the current state. Ex: if a sensor is attached to the vacuum cleaner, it will suck only when there is dirt, and its movement will be based on its current percept.

Unknown state space problems are typically exploration problems: the states and the impact of actions are not known. Ex: online search, which involves acting without complete knowledge of the next state, or scheduling without a map.

Problem Characteristics: Is the problem decomposable? Can solution steps be ignored or undone? Is the universe predictable? Is a good solution absolute or relative? Is the solution a state or a path? What is the role of knowledge? Does the task require interaction with a person?

1. Is the problem decomposable? If so, the problem is divided into smaller problems, the smaller problems are solved independently, and finally the results are merged to obtain the final result.

2. Can solution steps be ignored or undone? In theorem proving, a lemma that has been proved can be ignored for the next steps; such problems are called ignorable problems. In the 8-Puzzle, moves can be undone and backtracked; such problems are called recoverable problems.

3. Is the universe predictable? In playing bridge, we cannot know exactly where all the cards are or what the other players will do on their turns: an uncertain outcome. For certain-outcome problems, planning can be used to generate a sequence of operators that is guaranteed to lead to a solution. For uncertain-outcome problems, a sequence of generated operators can only have a good probability of leading to a solution; the plan is revised as it is carried out and the necessary feedback is provided.

4. Is a good solution absolute or relative? Facts can be represented using a formal language called predicate logic. There may be n different solutions; if one solution is found, there is no need to go back and see whether some other path might also lead to a solution. Absolute solution, ex: P -> Q; if marks = 92 -> grade = O. Relative solution, ex: TSP (explore all possible solutions).

5. Is the solution a state or a path? To solve a problem of finding an interpretation, we need to produce only the interpretation itself; a record of how the interpretation was arrived at is not required. Ex: "The President ate food with a fork." In contrast, some problems require producing the final state along with the path to that state, i.e., the sequence of operations that produced it.

6. What is the role of knowledge? Consider the problem of scanning daily newspapers to decide which are supporting the Democrats and which are supporting the Republicans in some upcoming election. Even assuming unlimited computing power, how much knowledge would be required by a computer trying to solve this problem? The answer is a great deal. It would have to know such things as: the names of the candidates in each party; the fact that if the major thing you want to see done is to have taxes lowered, you are probably supporting the Republicans; the fact that if the major thing you want to see done is improved education for minority students, you are probably supporting the Democrats; and the fact that if you are opposed to big government, you are probably supporting the Republicans.

7. Does the task require interaction with a person? A solitary problem has no intermediate communication and no demand for an explanation of the reasoning process. A conversational problem uses intermediate communication to provide either additional assistance to the computer or additional information to the user.

Problem, Problem space and Search. Problem: a problem is a specific task or challenge that requires finding a solution or making a decision. In artificial intelligence, problems vary in complexity and scope, ranging from simple tasks like arithmetic calculations to complex challenges such as image recognition, natural language processing, game playing, and optimization. Each problem has a defined set of initial states, possible actions or moves, and a goal state that needs to be reached. For example, in a game of chess, the problem is to find a sequence of moves that leads to checkmate, while in route planning, the problem is to find the shortest path between two locations on a map.

Problem Space: the problem space is the set of all possible states, actions, and transitions that can be encountered while attempting to solve a specific problem. It represents the entire landscape of potential solutions and paths from the initial state to the goal state. In other words, the problem space defines all the possible configurations of the elements involved in the problem and the set of valid moves or actions that can be taken at each state. Each state represents a specific configuration, and each action represents a possible move from one state to another. For example, in route planning, the problem space includes all possible locations on the map as states and all valid roads or paths between them as actions.

Search: search is the process of exploring the problem space to find a sequence of actions that leads to the goal state or a satisfactory solution. In AI, search algorithms systematically navigate the problem space to discover paths or solutions that satisfy the problem's constraints and objectives. The search process starts from the initial state and explores possible actions to generate new states; these states are then evaluated based on criteria such as distance to the goal, cost, or utility to determine the most promising states to explore further. The process continues iteratively until the goal state is reached or a satisfactory solution is found. Various search algorithms are used in AI, such as depth-first search, breadth-first search, A* search, and heuristic search; each has its strengths and weaknesses, and the choice depends on the problem's characteristics, the size of the problem space, and the resources available.
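The search loop described above can be sketched generically; the same skeleton yields breadth-first search with a FIFO frontier and depth-first search with a LIFO frontier. The tiny graph at the bottom is an illustrative example, not from the slides.

```python
from collections import deque

def graph_search(start, goal, neighbors, frontier_kind="bfs"):
    """Generic search over paths: BFS pops from the front of the frontier
    (FIFO queue), DFS pops from the back (LIFO stack)."""
    frontier = deque([[start]])
    explored = set()
    while frontier:
        path = frontier.popleft() if frontier_kind == "bfs" else frontier.pop()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in neighbors(state):
            if nxt not in explored:
                frontier.append(path + [nxt])
    return None

# Illustrative toy graph: two routes from A to D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Swapping the frontier discipline is the only difference between the two strategies here; informed searches such as A* replace the frontier with a priority queue ordered by an evaluation function.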

The 8-Puzzle (Sliding Block Puzzle). States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares. Initial state: any state can be designated as the initial state. Successor function: generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or Down). Goal test: checks whether the state matches the goal configuration (other goal configurations are possible). Path cost: each step costs 1, so the path cost is the number of steps in the path.
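The successor function described above can be sketched directly; a state is a tuple of nine entries with 0 for the blank, and each legal action slides a neighbouring tile into the blank square.

```python
def successors(state):
    """Yield the states reachable by moving the blank Up, Down, Left, or Right.
    `state` is a tuple of 9 entries; 0 marks the blank square."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:   # Up, Down, Left, Right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:                   # stay on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]                     # slide the tile
            yield tuple(s)
```

A corner blank yields 2 successors, an edge blank 3, and a centre blank 4, matching the four-action description on the slide.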

The 8-Puzzle (Sliding Block Puzzle) - Solution. A simple heuristic is h = +1 for every tile in its correct position. The solution of this problem is a sequence of tile movements that reaches the goal state. The transition function, or legal move, is any one-tile movement by one space in any direction.


4-Queens Problem
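The 4-queens problem is shown above only as a figure; a standard backtracking sketch places one queen per row and prunes columns and diagonals already under attack.

```python
def n_queens(n=4):
    """Return all solutions; cols[r] is the column of the queen in row r."""
    solutions = []

    def safe(cols, col):
        # The new queen goes in row len(cols); check column and diagonal clashes.
        r = len(cols)
        return all(c != col and abs(c - col) != r - i for i, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if safe(cols, col):
                place(cols + [col])     # extend; failed branches simply return

    place([])
    return solutions
```

For n = 4 there are exactly two solutions, mirror images of each other.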

Toy Problem: Missionaries and Cannibals. Let a missionary be denoted by 'M' and a cannibal by 'C'. The production rules are described below; all or some of these rules will have to be applied in a particular sequence to find the solution of the problem.

Rules applied, and their sequence, in the Missionaries and Cannibals problem.

Formalization of the M&C Problem. State space: triples (x, y, z) with 0 ≤ x, y ≤ 3 and z ∈ {0, 1}, where x, y, and z represent the number of missionaries, the number of cannibals, and the boat currently on the original bank. Initial state: (3,3,1). Successor function: from each state, bring either one missionary, one cannibal, two missionaries, two cannibals, or one of each to the other bank. Note: not all states are attainable (e.g., (0,0,1)), and some are illegal. Goal state: (0,0,0). Path cost: 1 unit per crossing.
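The formalization above maps directly onto a breadth-first search. This sketch follows the slide's state triple (missionaries, cannibals, boat on the original bank) together with the standard legality rule that cannibals may never outnumber missionaries on either occupied bank.

```python
from collections import deque

def missionaries_cannibals():
    """BFS from (3,3,1) to (0,0,0); 1 unit of path cost per crossing."""
    def legal(m, c):
        # Missionaries must not be outnumbered on either occupied bank.
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    start, goal = (3, 3, 1), (0, 0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path, s = [], state          # reconstruct the crossing sequence
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        m, c, b = state
        # The boat carries one or two people: M, MM, C, CC, or MC.
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
            m2, c2 = (m - dm, c - dc) if b else (m + dm, c + dc)
            nxt = (m2, c2, 1 - b)
            if 0 <= m2 <= 3 and 0 <= c2 <= 3 and legal(m2, c2) and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None
```

BFS returns the well-known minimal solution of 11 crossings (12 states including the start).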

Agents in Artificial Intelligence. What is an agent? An agent is anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting.

Human agent: a human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators. Robotic agent: a robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators. Software agent: a software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

Sensor: a device that detects changes in the environment and sends the information to other electronic devices; an agent observes its environment through sensors. Actuators: components of a machine that convert energy into motion; they are responsible for moving and controlling a system, e.g., an electric motor, gears, or rails. Effectors: devices that affect the environment, e.g., legs, wheels, arms, fingers, wings, fins, or a display screen.

Intelligent Agents. An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve goals, and may learn from the environment in doing so. A thermostat is an example of an intelligent agent. The four main rules for an AI agent are: Rule 1: an AI agent must have the ability to perceive the environment. Rule 2: the observations must be used to make decisions. Rule 3: decisions should result in an action. Rule 4: the action taken must be a rational action.

Characteristics of an AI agent. Autonomy: an AI agent can perform tasks independently, without requiring constant human intervention or input. Perception: the agent senses and interprets its environment through various sensors, such as cameras or microphones. Reactivity: an AI agent can assess the environment and respond accordingly to achieve its goals. Reasoning and decision-making: AI agents can analyze data and make decisions to achieve goals, using reasoning techniques and algorithms to process information and take appropriate actions. Learning: they can improve their performance through machine learning, deep learning, and reinforcement learning techniques. Communication: AI agents can communicate with other agents or humans by understanding and responding to natural language, recognizing speech, and exchanging text messages. Goal-oriented: they are designed to achieve specific goals, which can be pre-defined or learned through interaction with the environment.

Rational Agent. A rational agent is an agent that has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions. A rational agent is said to do the right thing. AI is about creating rational agents, used together with game theory and decision theory, for various real-world scenarios. Rational action is central: in reinforcement learning, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.

Rationality. The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of: the performance measure, which defines the success criterion; the agent's prior knowledge of its environment; the best possible actions the agent can perform; and the sequence of percepts. Rationality differs from omniscience: an omniscient agent knows the actual outcome of its actions and can act accordingly, which is not possible in reality.

Functions of an Artificial Intelligence Agent. AI agents perform these functions continuously: perceiving dynamic conditions in the environment; acting to affect conditions in the environment; using reasoning to interpret perceptions; problem-solving; drawing inferences; and determining actions and their outcomes.

Structure of an AI Agent. The task of AI is to design an agent program that implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program: Agent = Architecture + Agent program. The three main terms are: Architecture: the machinery the AI agent executes on. Agent function: a map from a percept sequence to an action, f: P* → A. Agent program: an implementation of the agent function; it executes on the physical architecture to produce f.
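As a toy illustration of the mapping f: P* → A, a table-driven agent program looks up its entire percept history in a table. The table entries and the "NoOp" default below are illustrative assumptions, not part of the slides.

```python
class TableDrivenAgent:
    """Implements the agent function f: P* -> A as a lookup keyed on the
    full percept sequence observed so far."""
    def __init__(self, table):
        self.table = table       # maps percept-sequence tuples to actions
        self.percepts = []       # the history P* grows with every percept

    def act(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")
```

The table grows exponentially with the length of the percept history, which is exactly why practical agent programs (reflex, model-based, goal-based, utility-based) compute f instead of tabulating it.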

Types of Agents in Artificial Intelligence. Five different types of intelligent agents are used in AI: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents.

Simple Reflex Agent. A simple reflex agent is an AI system that follows pre-defined rules to make decisions; it responds only to the current situation, without considering past or future ramifications. A simple reflex agent is suitable for environments with stable rules and straightforward actions, as its behavior is purely reactive to immediate environmental changes. Example: a rule-based system supporting automated customer-support interactions can automatically generate a predefined response containing password-reset instructions whenever a customer's message contains keywords indicating a password reset.
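The password-reset example above can be sketched as condition-action rules keyed on the current message only; the keywords and canned responses here are illustrative assumptions, not an actual support system.

```python
def reflex_support_agent(message):
    """Condition-action rules applied to the current percept only (no history)."""
    rules = [
        # (keywords that must all appear, action to take)
        (("password", "reset"), "Send password-reset instructions"),
        (("refund",), "Forward to billing team"),
    ]
    text = message.lower()
    for keywords, action in rules:
        if all(k in text for k in keywords):
            return action
    return "Escalate to a human agent"   # default when no rule fires
```

Because the agent never consults previous messages, it is simple and fast but cannot handle situations whose correct response depends on context.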

Model-based Reflex Agent. A model-based reflex agent performs actions based on the current percept and an internal state representing the unobservable world. It updates its internal state based on two factors: how the world evolves independently of the agent, and how the agent's actions affect the world. A cautionary model-based reflex agent is a variant that also considers the possible consequences of its actions before executing them.

The model-based reflex agent operates in four stages. Sense: it perceives the current state of the world with its sensors. Model: it constructs an internal model of the world from what it senses. Reason: it uses its model of the world to decide how to act, based on a set of predefined rules or heuristics. Act: the agent carries out the action it has chosen. Example: Amazon Bedrock is cited here as a cautionary model-based reflex agent. It is a service that uses foundation models to simulate operations, gain insights, and make informed decisions for planning and optimization; it continuously refines its models with real-world data, allowing it to adapt and optimize its operations, and it plans for different scenarios by running simulations and adjusting model parameters.

Goal-based Agents. Goal-based agents use information from their environment to achieve specific goals, employing search algorithms to find the most efficient path toward their objectives. These agents are also known as rule-based agents, as they follow predefined rules to accomplish their goals and take specific actions based on certain conditions. Goal-based agents are easy to design and can handle complex tasks; they are used in applications such as robotics, computer vision, and natural language processing. Unlike simpler models, a goal-based agent can determine the optimal course of decision-making and action depending on its desired outcome or goal.

The working pattern of a goal-based agent can be divided into five steps. Perception: the agent perceives its environment using sensors or other input devices. Reasoning: the agent analyzes the collected information and decides on the best course of action to achieve its goal. Action: the agent acts to achieve its goal, such as by moving or manipulating objects in the environment. Evaluation: after acting, the agent evaluates its progress toward the goal and adjusts its actions if necessary. Goal completion: once the agent has achieved its goal, it either stops working or begins working on a new goal. Example: Google Bard is a goal-based agent (and also a learning agent): its goal is to provide high-quality responses to user queries, so it chooses actions likely to help users find the accurate and helpful information they seek.

Utility-based Agents. Utility-based agents make decisions by maximizing a utility function or value: they choose the action with the highest expected utility, a measure of how good the outcome is. This helps them deal with complex and uncertain situations more flexibly and adaptively. Utility-based agents are often used where multiple options must be compared and selected among, such as resource allocation, scheduling, and game playing.

A utility-based agent aims to choose actions that lead to a high-utility state. To achieve this, it models its environment (which can be simple or complex), evaluates the expected utility of each possible outcome based on the probability distribution and the utility function, and finally selects the action with the highest expected utility, repeating this process at each time step. Example: the slide cites Anthropic Claude, applied as a tool to help card members maximize rewards and benefits from their cards, as a utility-based agent: it employs a utility function to assign numerical values representing success to different states (situations card members face, such as purchasing, paying bills, or redeeming rewards), then compares the outcomes of different actions in each state and makes trade-off decisions based on their utility values.
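The expected-utility selection described above reduces to a short calculation; the actions and their (probability, utility) outcome lists below are illustrative assumptions.

```python
def expected_utility(outcomes):
    """Expected utility of one action: sum of probability * utility."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """Pick the action whose outcome distribution maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Illustrative actions with (probability, utility) outcome lists.
actions = {
    "stay": [(1.0, 5)],               # certain, modest utility
    "move": [(0.5, 10), (0.5, 2)],    # risky: expected utility 6
}
```

Here "move" wins because 0.5 * 10 + 0.5 * 2 = 6 exceeds the certain utility 5 of "stay"; a different utility function (e.g., risk-averse) would reverse the choice.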

Learning Agents. An AI learning agent is a software agent that can learn from past experiences and improve its performance. It initially acts with basic knowledge and adapts automatically through machine learning. A learning agent comprises four main components. Learning element: responsible for learning and making improvements based on experiences gained from the environment. Critic: provides feedback to the learning element by comparing the agent's performance against a predefined standard. Performance element: selects and executes external actions based on information from the learning element and the critic. Problem generator: suggests actions that create new and informative experiences for the learning element to improve performance.


AI learning agents follow a cycle of observing, learning, and acting based on feedback: they interact with their environment, learn from feedback, and modify their behavior for future interactions. Observation: the learning agent observes its environment through sensors or other inputs. Learning: the agent analyzes data using algorithms and statistical models, learning from feedback on its actions and performance. Action: based on what it has learned, the agent acts in its environment. Feedback: the agent receives feedback about its actions and performance through rewards, penalties, or environmental cues. Adaptation: using this feedback, the agent changes its behavior and decision-making processes, updating its knowledge and adapting to its environment. Example: AutoGPT can analyze the pros and cons of the top ten smartphones by exploring various websites and sources, evaluate the authenticity of those websites using a sub-agent program, and finally generate a detailed report summarizing the findings.

PEAS Representation. PEAS is a model for describing an AI agent: when we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four terms. P: Performance measure. E: Environment. A: Actuators. S: Sensors.

PEAS for self-driving cars. Performance: safety, time, legal driving, comfort. Environment: roads, other vehicles, road signs, pedestrians. Actuators: steering, accelerator, brake, signal, horn. Sensors: camera, GPS, speedometer, odometer, accelerometer, sonar.

Example of Agents with their PEAS representation.

Agent: Medical Diagnosis. Performance measure: healthy patient, minimized cost. Environment: patient, hospital, staff. Actuators: tests, treatments. Sensors: keyboard (entry of symptoms).

Agent: Vacuum Cleaner. Performance measure: cleanness, efficiency, battery life, security. Environment: room, table, wood floor, carpet, various obstacles. Actuators: wheels, brushes, vacuum extractor. Sensors: camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor.

Agent: Part-picking Robot. Performance measure: percentage of parts in correct bins. Environment: conveyor belt with parts, bins. Actuators: jointed arms, hand. Sensors: camera, joint angle sensors.

Types of Environments in AI: fully observable vs partially observable; deterministic vs stochastic; competitive vs collaborative; single-agent vs multi-agent; static vs dynamic; discrete vs continuous; episodic vs sequential; known vs unknown.

Fully Observable vs Partially Observable. When an agent's sensors can access the complete state of the environment at each point in time, the environment is fully observable; otherwise it is partially observable. A fully observable environment is easy to deal with, because there is no need to keep track of the history of the surroundings. An environment is called unobservable when the agent has no sensors at all. Examples: in chess, the board and the opponent's moves are fully observable; in driving, the environment is partially observable because what is around the corner is not known.

Deterministic vs Stochastic. When the agent's current state and action completely determine its next state, the environment is deterministic. A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent. Examples: in chess, there are only a few possible moves for a piece in the current state, and these moves can be determined; for self-driving cars, the outcomes of actions are not unique and vary from time to time.

Competitive vs Collaborative. An agent is in a competitive environment when it competes against another agent to optimize the output; chess is competitive, as the agents compete with each other to win the game. An agent is in a collaborative environment when multiple agents cooperate to produce the desired output; multiple self-driving cars on the road cooperate with each other to avoid collisions and reach their destinations.

Single-agent vs Multi-agent. An environment with only one agent is a single-agent environment: a person left alone in a maze is an example. An environment involving more than one agent is a multi-agent environment: a game of football is multi-agent, involving 11 players in each team. Dynamic vs Static. An environment that keeps changing while the agent is acting is dynamic: a roller coaster ride is dynamic because the environment changes every instant. An environment with no change in its state is static: an empty house is static, as nothing in the surroundings changes when an agent enters.

Discrete vs Continuous. If the environment offers a finite number of actions to obtain the output, it is a discrete environment: chess is discrete, as it has only a finite number of moves (the number might vary with every game, but it is still finite). If the possible actions cannot be enumerated, the environment is continuous: self-driving cars are an example, as actions such as driving and parking cannot be enumerated.

Episodic vs Sequential. In an episodic task environment, the agent's experience is divided into atomic episodes with no dependency between current and previous episodes: in each episode, the agent receives input from the environment and performs a corresponding action. Example: a pick-and-place robot detecting defective parts on a conveyor belt decides about each current part on its own, with no dependency on previous decisions. In a sequential environment, previous decisions can affect all future decisions: the agent's next action depends on what it has done previously and what it plans to do. Example: checkers, where a move can affect all following moves.

Known vs Unknown. In a known environment, the outcomes of all probable actions are given. In an unknown environment, the agent must first gain knowledge about how the environment works before it can make decisions.

Constraint Satisfaction Problems (CSP). Constraint satisfaction problems (CSPs) are a class of AI problems whose goal is to find values for a set of variables that satisfy a set of constraints or rules. CSPs are frequently employed in AI for tasks such as resource allocation, planning, scheduling, and decision-making.

Basic components of a CSP. Variables: the things that need to be determined. Variables in a CSP are the objects that must be assigned values so as to satisfy a particular set of constraints; they may be Boolean, integer, categorical, and so on. For instance, variables could stand for the cells of a Sudoku puzzle that need to be filled with numbers. Domains: the range of potential values a variable can take. Depending on the problem, a domain may be finite or infinite; in Sudoku, the set of numbers 1 to 9 can serve as the domain of a variable representing a puzzle cell. Constraints: the rules that govern how variables relate to one another; they define which combinations of values are allowed. There are unary constraints, binary constraints, and higher-order constraints. In Sudoku, the constraints require each row, column, and 3×3 box to contain each number from 1 to 9 exactly once.

Constraint Satisfaction Problems (CSP) representation

Constraint Satisfaction Problems (CSP) algorithms. The backtracking algorithm is a depth-first search that methodically explores the space of potential solutions until it finds one that satisfies all the constraints. The method begins by choosing a variable and assigning it a value, then repeatedly tries to assign values to the remaining variables. If at any point a variable cannot be given a value that satisfies the constraints, the method returns to the previous variable and tries a different value. The algorithm ends once all assignments have been tried or a solution satisfying all constraints has been found.
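The assign-recurse-undo loop described above can be sketched on a toy map-colouring CSP; the three-region map and the two colours are illustrative assumptions.

```python
def backtrack(assignment, variables, domains, consistent):
    """Depth-first assignment of values; undo (backtrack) when no value fits."""
    if len(assignment) == len(variables):
        return dict(assignment)                      # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                          # undo and try the next value
    return None                                      # dead end: caller backtracks

# Toy map colouring: adjacent regions must differ.
neighbors = {"WA": ["NT"], "NT": ["WA", "Q"], "Q": ["NT"]}
colors = {v: ["red", "green"] for v in neighbors}

def ok(a):
    # A partial assignment is consistent if no assigned neighbours clash.
    return all(a.get(n) != v for r, v in a.items() for n in neighbors[r])
```

The `consistent` check on partial assignments is what lets backtracking prune whole subtrees instead of enumerating every complete assignment.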

The forward-checking algorithm is a variant of backtracking that uses a form of local consistency to prune the search space. For each unassigned variable, the method keeps a set of remaining values and applies local constraints to eliminate inconsistent values from these sets. After a variable is assigned a value, the algorithm examines its neighbors, removing any of their remaining values that have become inconsistent. If, after forward checking, some variable has no remaining values, the algorithm backtracks.
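The pruning step of forward checking can be sketched on its own: given a partial assignment, it filters the remaining domains of unassigned neighbours and signals a dead end when a domain empties. The inequality ("must differ") constraint and the toy variables are illustrative assumptions.

```python
def forward_check(assignment, domains, neighbors):
    """Prune values inconsistent with `assignment` from unassigned neighbours.
    Returns the pruned domains, or None if some domain becomes empty
    (which forces the search to backtrack immediately)."""
    pruned = {v: list(vals) for v, vals in domains.items()}   # copy, don't mutate
    for var, value in assignment.items():
        for n in neighbors[var]:
            if n not in assignment:
                pruned[n] = [v for v in pruned[n] if v != value]
                if not pruned[n]:
                    return None        # dead end detected before recursing
    return pruned
```

Plugged into the backtracking loop, this check detects failures one level earlier than plain backtracking, which only notices a clash when it tries to assign the emptied variable.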

Constraint-propagation algorithms are a class of methods that use local consistency and inference to prune the search space: they propagate constraints between variables and use the inferred information to remove inconsistent values from the variable domains.