AI Intelligent Agents: Types and Environment Properties


About This Presentation

AI Intelligent Agents


Slide Content

Intelligent Agents Vidya Vikas Education Trust’s Universal College of Engineering, Vasai(E)

Agent & Environment An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. A human agent has eyes, ears, and other organs as sensors, and hands, legs, mouth, and other body parts as actuators. A robotic agent might have cameras and infrared range finders as sensors and various motors as actuators. In artificial intelligence, the environment is the surroundings within which the agent operates.

Example: Vacuum-cleaner world This world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, absorb the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then absorb; otherwise, move to the other square.
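A minimal Python sketch of this two-square world and agent function; all names here (vacuum_agent, run, the four-step loop) are illustrative rather than taken from the slides:

```python
# Sketch of the two-square vacuum world described above; names are illustrative.
def vacuum_agent(location, status):
    """Agent function: clean if dirty, otherwise move to the other square."""
    if status == "Dirty":
        return "Absorb"
    return "Right" if location == "A" else "Left"

def run(steps=4):
    world = {"A": "Dirty", "B": "Dirty"}   # dirt status of each square
    location = "A"
    for _ in range(steps):
        action = vacuum_agent(location, world[location])
        print(f"at {location}, world={world}, action={action}")
        if action == "Absorb":
            world[location] = "Clean"
        else:
            location = "B" if action == "Right" else "A"

run()
```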

Concept of Rationality The dictionary meaning of "rational" is logical or sensible. A rational agent is one that does the right thing so as to be most successful. For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Concept of Rationality What is rational at any given time depends on four things: the performance measure that defines the degree of success; the agent's prior knowledge of the environment; the actions the agent can perform in that environment; and the agent's percept sequence to date, i.e., everything it has received from the environment so far.

Properties/Characteristics of Task Environment: fully observable vs. partially observable; single-agent vs. multi-agent; deterministic vs. stochastic; episodic vs. sequential; static vs. dynamic; discrete vs. continuous.

Fully observable vs. partially observable If an agent's sensors give it access to the complete state of the environment at each point in time, then the environment is fully observable. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment may be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data. For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking.

Single-agent vs. Multi-agent When there is only one agent in a defined environment, it is called a single-agent system (SAS): the agent acts on and interacts only with its environment. If there is more than one agent, and the agents interact with each other and with their environment, the system is called a multi-agent system (MAS). For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment. Chess is a competitive multi-agent environment. In the taxi-driving environment, on the other hand, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multi-agent environment.

Deterministic vs. Stochastic If the next state of the environment is completely determined by the current state and the action executed by the agent, then the environment is deterministic; otherwise, it is stochastic. Note that an environment may appear stochastic simply because it is partially observable; in a fully observable environment, determinism can be judged directly from the state. Taxi driving is clearly stochastic, because one can never predict the behavior of traffic exactly. A teacher's timetable is deterministic.

Episodic vs. Sequential In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action, and the next episode does not depend on the actions taken in previous episodes. For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions. In a sequential environment, the current decision can affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.

Static vs. Dynamic If the environment can change while the agent is deliberating, then the environment is dynamic for that agent; otherwise, it is static. An idle environment whose state does not change on its own is static. For example, taxi driving is clearly dynamic: the other cars and the taxi itself keep moving while the driving algorithm decides what to do next. Crossword puzzles are static, and an empty house is static because nothing in the surroundings changes when the agent enters.

Discrete vs. Continuous If the environment has a limited number of distinct, clearly defined percepts and actions, then the environment is discrete. For example, the chess environment has a finite number of distinct states. Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values.

Competitive vs. Collaborative An agent is in a competitive environment when it competes against another agent to optimize its output; a chess game is an example. An agent is in a collaborative environment when multiple agents cooperate to produce the desired output; for example, when multiple self-driving cars share the road, they cooperate with each other to avoid collisions and reach their destinations.

PEAS Descriptor The PEAS descriptor specifies the performance measure of an agent together with its environment, actuators, and sensors. PEAS stands for Performance measure, Environment, Actuators, Sensors: P, the performance expected of the agent; E, the surrounding conditions in which the agent performs its task; A, the tools available to the agent to carry out the task; S, the tools the agent uses to sense the environment.

PEAS Performance measure: the objective function used to judge how well the agent performs; the things we evaluate an agent against. Environment: the real surroundings in which the agent deliberates and acts, and what the agent can perceive. Actuators: the tools, equipment, or organs with which the agent performs actions in the environment; these serve as the agent's output. Sensors: the tools or organs with which the agent captures the state of the environment; these serve as the agent's input.

Automated Car Driver
Performance measure: safety (the system should drive the car without hitting anything); optimum speed (maintain a speed appropriate to the surroundings); comfortable journey (give the end user a comfortable ride).
Environment: roads (from city roads to highways); traffic conditions (which vary with the type of road).
Actuators: steering wheel (to direct the car in the desired direction); accelerator and gears (to increase or decrease speed); brake (to stop the car completely).
Sensors: devices that take input from the environment, e.g., cameras, a sonar system, etc.
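As a quick sketch, the description above can be captured in a small data structure; the PEAS dataclass below is a hypothetical helper for illustration, not a standard API:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Hypothetical container for a PEAS task-environment description."""
    agent: str
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

taxi = PEAS(
    agent="Automated Car Driver",
    performance=["safety", "optimum speed", "comfortable journey"],
    environment=["roads", "traffic conditions"],
    actuators=["steering wheel", "accelerator/gear", "brake"],
    sensors=["cameras", "sonar system"],
)
print(taxi)
```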

Online Shopping Agent PEAS description. Performance measure: price, quality, appropriateness, efficiency. Environment: current and future WWW sites, vendors, shippers. Actuators: display to user, fill in forms. Sensors: HTML pages (text, graphics, scripts). Environment characteristics: fully observable? No, partially observable. Deterministic? Partly. Episodic? No, sequential. Static? Semi: the world changes partly while the agent is thinking. Discrete? Yes. Single-agent? Yes, but multi-agent for auctions.

More PEAS examples (Agent; Performance Measure; Environment; Actuators; Sensors):
Hospital Management System. Performance measure: patient's health, admission process, payment. Environment: hospital, doctors, patients. Actuators: prescription, diagnosis, scan report. Sensors: symptoms, patient's responses.
Robot Soccer Player. Performance measure: play the game, score the maximum goals, win the game. Environment: soccer field, team members, opponents, referee, audience. Actuators: navigator, robot legs, view detector. Sensors: camera, communicator, orientation and touch sensors.
Subject Tutoring. Performance measure: maximize scores, improvement in students. Environment: classroom, desk, chair, board, staff, students. Actuators: smart displays, corrections. Sensors: eyes, ears, notebooks.

Types of Agents: simple reflex agents; model-based reflex agents; goal-based agents; utility-based agents; and learning agents.

Simple Reflex Agent These agents select actions based on the current percept, ignoring the rest of the percept history. For example, the vacuum agent: if status = Dirty then Clean, else if location = A then Right, else if location = B then Left. A diagnosis system is another example: if the patient has a cold, fever, cough, and breathing problems, then start treatment for Covid. The agent function is based on condition-action rules. A condition-action rule maps a state (a condition) to an action: if the condition is true, the action is taken; otherwise it is not.
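These condition-action rules can be sketched as a simple lookup table in Python; the percept encoding (location, status) is an assumption made for illustration:

```python
# Condition-action rules for the vacuum agent, encoded as a lookup table.
RULES = {
    ("A", "Dirty"): "Clean",
    ("B", "Dirty"): "Clean",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Choose an action from the current percept alone, ignoring history."""
    location, status = percept
    return RULES[(location, status)]

print(simple_reflex_agent(("A", "Dirty")))  # -> Clean
print(simple_reflex_agent(("A", "Clean")))  # -> Right
```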

Model-based Reflex Agent This agent uses an internal model to keep track of the current state of the world, so it can work in a partially observable environment and track the situation. The model is knowledge about "how things happen in the world", which is why it is called a model-based agent. A model-based reflex agent maintains some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. For driving tasks such as changing lanes, the agent needs to keep track of where the other cars are when it cannot see them all at once.
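A rough sketch of a model-based vacuum agent that remembers the status of both squares even though each percept reports only the current one; the names and the NoOp action are illustrative:

```python
# Sketch of a model-based reflex agent for the vacuum world.
# Internal state remembers the last known status of BOTH squares,
# even though each percept only reports the current square.
def update_state(state, percept):
    location, status = percept
    state[location] = status          # fold the new observation into the model
    return state

def model_based_agent(state, percept):
    state = update_state(state, percept)
    location, status = percept
    if status == "Dirty":
        return "Clean"
    # Use the model: if the other square is known to be clean, do nothing.
    other = "B" if location == "A" else "A"
    if state.get(other) == "Clean":
        return "NoOp"
    return "Right" if location == "A" else "Left"

state = {}
print(model_based_agent(state, ("A", "Dirty")))  # -> Clean
print(model_based_agent(state, ("A", "Clean")))  # -> Right (B still unknown)
```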

Goal-Based Agent Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do. Besides a description of the current state, the agent needs some sort of goal information that describes situations that are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state, and makes it flexible when there is more than one possible destination. Google's Waymo driverless cars are good examples of goal-based agents when they are programmed with an end destination, or goal, in mind: the car "thinks" and makes the right decisions in order to deliver the passenger where they intended to go.

Utility-Based Agent A utility-based agent acts based not only on goals but also on the best way to achieve them. It is useful when there are multiple possible alternatives and the agent has to choose the best action to perform. These agents are similar to goal-based agents but add an extra component: utility measurement. It is possible to define a measure of how desirable a particular state is; this measure is given by a utility function, which maps a state to a numeric measure of the utility of that state.

Utility-Based Agent For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. When there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate tradeoff based on a preference over states. A utility-based agent also has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.
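As a toy illustration of such a tradeoff, the sketch below scores candidate routes with made-up time and risk numbers; none of these values or names come from the slides:

```python
# Toy utility function trading off trip time against risk.
# The routes and weights below are made up for illustration.
def utility(route, time_weight=1.0, risk_weight=50.0):
    # Higher utility is better, so time and risk enter negatively.
    return -(time_weight * route["minutes"] + risk_weight * route["risk"])

routes = [
    {"name": "highway",  "minutes": 20, "risk": 0.4},
    {"name": "backroad", "minutes": 30, "risk": 0.1},
]
best = max(routes, key=utility)
print(best["name"])  # backroad: a bit slower, but much safer
```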

Learning Agent A learning agent is an agent that can learn from its past experiences: it starts with some basic knowledge and then acts and adapts automatically through learning. Learning has the advantage that it allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow. For example, in school, the test is the critic: the teacher marks the test, sees what could be improved, and instructs the student on how to do better next time, so the teacher plays the role of the learning element and the tester the performance element.

Learning Agent A learning agent has four main conceptual components. Learning element: responsible for making improvements by learning from the environment. Critic: gives the learning element feedback describing how well the agent is doing with respect to a fixed performance standard. Performance element: responsible for selecting external actions. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
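A bare, purely illustrative skeleton of how the four components might be wired together in code; none of these class or method names come from the slides, and the scoring and update steps are placeholders:

```python
# Purely illustrative skeleton of a learning agent's four components.
class LearningAgent:
    def __init__(self):
        self.rules = {}                        # knowledge used to pick actions

    def performance_element(self, percept):
        """Select an external action using the current rules."""
        return self.rules.get(percept, self.problem_generator())

    def critic(self, percept, action):
        """Feedback: how well did the action do against a fixed standard?"""
        return 1.0 if action != "Explore" else 0.0   # placeholder standard

    def learning_element(self, percept, action, feedback):
        """Improve the rules based on the critic's feedback."""
        if feedback > 0:
            self.rules[percept] = action       # keep actions that scored well

    def problem_generator(self):
        """Suggest an exploratory action to gain new experiences."""
        return "Explore"
```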

Difference between Goal-Based and Utility-Based Agents A goal-based agent decides its actions based on goals, whereas a utility-based agent decides its actions based on utilities. Goal-based agents are slower, whereas utility-based agents are faster. Goal-based agents are not enough to generate high-quality behavior in most environments, whereas utility-based agents are. Goal-based agents cannot specify the appropriate tradeoff between conflicting goals, whereas utility-based agents can. Goal-based agents are less safe, whereas utility-based agents are safer.

Problem-Solving Agent A problem-solving agent is a type of goal-based agent that uses an atomic representation, with no internal structure of states visible to the problem-solving algorithms, and works by precisely defining problems and their solutions. "Problem solving" refers to reaching a definite goal from a present state or condition.

Steps performed by a Problem-Solving Agent Goal formulation: the first and simplest step in problem solving. It organizes the steps or sequence required to pick one goal out of multiple goals, as well as the actions needed to achieve that goal. Goal formulation is based on the current situation and the agent's performance measure.

Steps performed by a Problem-Solving Agent Declaring the goal: goal information is given to the agent, e.g., start from A and reach B. Ignoring some actions: the agent has to ignore actions that will not lead it to the desired goal, e.g., of the three roads out of A (one toward S, one to T, and one to Z), some lead away from the goal. Limiting the objective: the agent decides its actions once it has some added knowledge, such as a map of the region. Goal as a set of world states: once it has found a path on the map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions.

Problem Formulation Problem formulation is the process of deciding what actions and states to consider, given a goal. The process of looking for an action sequence (the series of actions the agent carries out to reach the goal) is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out; this is called the execution phase.

Problem Formulation Search: identifying the possible sequences of actions that reach the goal state from the current state. Solution: selecting, from the sequences found, the one that proves to be the optimal solution. Execution: carrying out the actions of the optimal solution to move from the current state to the goal state.

Problem Formulation A problem can be defined by five components. Initial state: the state the agent starts in. Actions: a description of the possible actions available to the agent. Transition model: a description of what each action does. Goal test: determines whether a given state is a goal state. Path cost: assigns a numeric cost to each path; the problem-solving agent chooses a cost function that reflects its performance measure. An optimal solution has the lowest path cost among all solutions.
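These five components map naturally onto a small interface. The Problem base class below is a sketch with illustrative names, in the spirit of common textbook search code, not a definitive implementation:

```python
# Sketch of the five-component problem definition; names are illustrative.
class Problem:
    def __init__(self, initial_state, goal_state=None):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        """Return the actions available in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from an action."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if the state is a goal state."""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Cost of one step; the path cost is the sum of step costs."""
        return 1
```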

State-Space Formulation This is the method used to search for a path from the start state to the goal state while solving a problem. The state space of a problem is the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed graph (or map) in which the nodes are states, the links between nodes are actions, and a path is a sequence of states connected by a sequence of actions.

Example Problems Toy problem: a concise and exact description of a problem, used by researchers to compare the performance of algorithms. Real-world problem: a problem whose solutions people actually care about; unlike a toy problem, it does not have a single agreed-upon description, but we can give a general formulation of it.

8-Puzzle Problem Here we have a 3×3 board with movable tiles numbered 1 to 8 and one blank space (try it at https://murhafsousli.github.io/8puzzle/#/). A tile adjacent to the blank space can slide into that space. The task is to convert the current state into a specified goal state by sliding tiles into the blank space.

Problem Formulation (8-Puzzle) States: the location of each of the eight numbered tiles and the blank tile. Initial state: any state can be designated as the initial state. Actions: movements of the blank space, i.e., Left, Right, Up, or Down. Transition model: returns the resulting state for a given state and action. Goal test: checks whether we have reached the goal state. Path cost: the number of steps in the path, where each step costs 1.
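A compact sketch of this formulation as standalone functions mirroring the components above; the state encoding (a 9-tuple read row by row, 0 for the blank) is an assumption for illustration:

```python
# 8-puzzle formulation; the state is a 9-tuple read row by row, 0 = blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    """Legal moves of the blank space in this state."""
    row, col = divmod(state.index(0), 3)
    legal = []
    if row > 0: legal.append("Up")
    if row < 2: legal.append("Down")
    if col > 0: legal.append("Left")
    if col < 2: legal.append("Right")
    return legal

def result(state, action):
    """Transition model: slide the neighboring tile into the blank."""
    blank = state.index(0)
    target = blank + MOVES[action]
    s = list(state)
    s[blank], s[target] = s[target], s[blank]
    return tuple(s)

def goal_test(state):
    return state == GOAL
```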

8-Queens Problem The aim is to place eight queens on a chessboard so that no queen attacks another. A queen attacks another queen if they share the same row, column, or diagonal (try it at https://www.brainmetrix.com/8-queens/).

States: any arrangement of 0 to 8 queens on the chessboard. Initial state: an empty chessboard. Actions: add a queen to any empty square. Transition model: returns the chessboard with the queen added to the chosen square. Goal test: checks whether 8 queens are placed on the chessboard with no queen attacked. Path cost: not needed, because only final states count.
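A small backtracking sketch of this incremental formulation, placing one queen per column and checking attacks; the function names, and the particular solution shown, are illustrative of this standard approach:

```python
# Incremental 8-queens: state is a tuple of queen rows, one per filled column.
def attacks(state, row):
    """Would a queen in the next column, at `row`, attack a placed queen?"""
    col = len(state)
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(state))

def solve(state=()):
    """Backtracking search over the incremental formulation."""
    if len(state) == 8:                 # goal test: 8 queens, none attacked
        return state
    for row in range(8):                # actions: add a queen to the next column
        if not attacks(state, row):
            solution = solve(state + (row,))   # transition model
            if solution:
                return solution
    return None

print(solve())  # e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```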

Some Real-World Problems Traveling salesperson problem (TSP): a touring problem in which the salesman must visit each city exactly once; the objective is to find the shortest tour and sell the stock in each city. We initialize the problem state as {A}, meaning the salesman has departed from his office. When he visits city B, the state is updated to {A, B}, where the order of elements matters. When the salesman has visited all the cities, {A, B, C, D} in this case, the departure point A is automatically appended, giving {A, B, C, D, A}. Therefore, the initial state of this TSP is {A} and the goal state is {A, X1, X2, X3, A}, where the traveled distance is minimized.

Initial state: start at city 1. Possible routes: 1-2-4-3-1, 1-4-2-3-1, 1-2-3-4-1, 1-4-3-2-1, 1-3-4-2-1, 1-3-2-4-1. Each route's total distance is computed from the intercity distances (shown in the slide's figure), and the shortest tour is selected.
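A brute-force sketch of this enumeration for four cities; the distance matrix below is hypothetical, since the slide's actual intercity distances are not reproduced here:

```python
from itertools import permutations

# Hypothetical symmetric distance matrix for cities 1-4 (indices 0-3);
# the slide's real distances are not reproduced here.
DIST = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_length(tour):
    """Total distance of a tour that returns to its starting city."""
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# Enumerate all tours that start (and end) at city 1, i.e. index 0.
best = min(((0,) + p for p in permutations(range(1, 4))), key=tour_length)
print([c + 1 for c in best], tour_length(best))  # [1, 2, 4, 3] with cost 80
```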