Chapter Topics
1. Introduction to Intelligent Agents
2. Properties of Agents
3. Sensors / Actuators / Effectors and Actions
4. Environments
5. Intelligent Agent Example
What is meant by an intelligent agent?
Intelligent agents are entities that use sensors to perceive the environment, make a
decision, and act upon that environment through effectors. An intelligent agent could
be a robot, a machine, or even a human or an animal.
Human agent: It has eyes, ears, and other organs as sensors, and hands, legs, a
mouth, and other body parts as effectors.
Robotic agent: It has cameras and microphones as sensors and various motors as
effectors.
How does the intelligent agent work?
1. Agent Architecture
2. Program or Decision-making mechanism
Agent Architecture
Environment: The environment is the area around the agent that it
interacts with. An environment can be anything like a physical
space, such as a room, or a virtual space like a game world or the internet.
Sensors: Sensors are the tools that an AI agent uses to perceive its
environment. They can be physical devices like cameras, microphones, and
temperature sensors, or software sensors that read data from files.
Agent Architecture
Effectors: Effectors are the components of an intelligent agent that
carry out actions to influence the environment, e.g., the "limbs" or "tools"
through which the agent turns its decisions into physical reality
or changes in the environment.
Actuators: Actuators are specific types of effectors that convert
control signals (often electrical) into physical action, e.g., electric
motors.
Program or Decision-making mechanism
The decision-making mechanism, often referred to as the agent's program,
processes information from the sensors and makes decisions based on that data.
The program takes the current percepts as input and generates actions for the
actuators.
It embodies the agent function, which maps percepts to actions based on the
agent's goals and objectives.
The program can be reflex-based, goal-based, utility-based, etc.
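To make the sense-decide-act loop concrete, here is a minimal Python sketch of an agent program driving a toy two-square vacuum world; the names (VacuumWorld, agent_program) are illustrative, not a standard API.

```python
# Minimal sketch of the sense -> decide -> act loop in a toy two-square
# vacuum world. All names are illustrative.

class VacuumWorld:
    """Toy environment: two locations, A and B, each clean or dirty."""
    def __init__(self):
        self.location = "A"
        self.dirty = {"A": True, "B": True}

    def sense(self):
        # What the sensors deliver: (current location, is it dirty?)
        return self.location, self.dirty[self.location]

    def act(self, action):
        # The actuators change the environment.
        if action == "Suck":
            self.dirty[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"


def agent_program(percept):
    """The agent function: maps the current percept to an action."""
    location, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"


world = VacuumWorld()
for _ in range(4):                      # sense -> decide -> act
    percept = world.sense()
    action = agent_program(percept)
    world.act(action)
print(world.dirty)                      # {'A': False, 'B': False}
```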
Characteristics of intelligent agent-I
Autonomy: An AI agent is capable of performing tasks
independently without requiring constant human intervention or input.
Perception: The agent senses and interprets the environment
it operates in through various sensors, such as cameras or
microphones.
Reactivity: An AI agent can assess the environment and respond
accordingly to achieve its goals.
Characteristics of intelligent agent-II
Reasoning and decision-making: AI agents are intelligent tools that can
analyze data and make decisions to achieve goals. They use reasoning
techniques and algorithms to process information and take appropriate actions.
Learning: They can learn and enhance their performance through machine
learning, deep learning, and reinforcement learning techniques.
Communication: AI agents can communicate with other agents or humans
using different methods, like understanding and responding to natural
language, recognizing speech, and exchanging messages through text.
Characteristics of intelligent agent-III
Goal-oriented: They are designed to achieve specific goals, which
can be pre-defined or learned through interactions with the
environment.
Persistence: Intelligent agents can maintain their operations and
continue pursuing their goals over long periods, even when faced
with obstacles or delays.
PEAS Representation of AI agent
PEAS is a framework to describe and understand an AI agent.
P = Performance Measure
E = Environment
A = Actuators
S = Sensors
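As a concrete illustration, a PEAS description for a vacuum-cleaner agent can be written down as a simple Python dictionary; the individual entries are illustrative, not exhaustive.

```python
# Illustrative PEAS description of a vacuum-cleaner agent.
peas_vacuum_cleaner = {
    "Performance": ["amount of dirt cleaned", "electricity used", "noise made"],
    "Environment": ["rooms", "floor surfaces", "furniture", "dirt"],
    "Actuators":   ["wheels", "brushes", "suction motor"],
    "Sensors":     ["dirt sensor", "bump sensor", "camera"],
}
```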
PEAS: Performance Measure-I
A performance measure is a criterion that measures the success of
the agent. It is used to evaluate how well the agent is achieving its
goal.
For example, in a spam filter system, the performance measure
could be minimizing the number of spam emails reaching the inbox.
PEAS: Performance Measure-II
The performance measure for an agent is not universally fixed but varies according
to the criteria relevant to its task. For instance, a vacuum cleaner agent might
be evaluated on the amount of dirt it cleans, the electricity it uses, and the
noise it makes. Performance should be measured over an appropriate
timeframe to capture the metrics relevant to our goals and objectives.
Agent success (rationality) is evaluated based on the performance measure.
PEAS: Environment
The environment represents the domain or context in which the
agent operates and interacts. This can range from physical spaces
like rooms to virtual environments such as game worlds or online
platforms like the internet.
PEAS: Actuators
Actuators are the mechanisms through which the AI agent performs
actions or interacts with its environment to achieve its goals. These
can include physical actuators like motors and robotic hands, as
well as digital actuators like computer screens and text-to-speech
converters.
PEAS: Sensors
Sensors are the mediums through which the agent perceives the
environment or takes in raw data from the environment. They can be
cameras, microphones, radio receivers, etc.
Types of Agent
1. Table-driven agents
2. Simple reflex agents
3. Model-based reflex agents
4. Goal-based agents
5. Utility-based agents
6. Learning agents
Table-driven agent
It is an agent that operates based on a predefined set of actions stored in a
table. The table maps every possible percept to a corresponding action.
Table-driven agent
It is an agent that operates based on a predefined set of actions stored
in a table. The table maps every possible percept sequence, not just
the current percept, to a corresponding action.
● Features:
● Simplicity
● Deterministic behaviour
● Predefined table
● Direct mapping
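A rough Python sketch of the idea, assuming a toy vacuum-world lookup table keyed by the entire percept sequence (the table entries and names are illustrative):

```python
# Sketch of a table-driven agent: the table maps whole percept sequences
# (not just the current percept) to actions. Entries are illustrative.

lookup_table = {
    (("A", "Dirty"),):                    "Suck",
    (("A", "Clean"),):                    "Right",
    (("A", "Dirty"), ("A", "Clean")):     "Right",
    (("A", "Clean"), ("B", "Dirty")):     "Suck",
    (("A", "Clean"), ("B", "Clean")):     "Left",
}

percept_sequence = []                     # full history of percepts

def table_driven_agent(percept):
    percept_sequence.append(percept)
    # Direct mapping: look up the entire sequence seen so far.
    return lookup_table.get(tuple(percept_sequence), "NoOp")

print(table_driven_agent(("A", "Dirty")))   # Suck
print(table_driven_agent(("A", "Clean")))   # Right
```

Because each key is a complete percept sequence, the table grows explosively with the length of the sequence, which leads directly to the limitations on the next slide.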
Table-driven agent
Limitations
Lack of scalability as the percept sequence grows longer
Cannot adapt to new situations
Large memory requirements
Simple Reflex Agent
It is an agent that selects actions based on the current percept, ignoring the rest of the percept
history. It operates using a set of condition-action rules, which specify what action to take
when a particular condition is met.
Simple Reflex Agent
Simple reflex agents are the simplest agents. These agents make decisions
on the basis of the current percept and ignore the rest of the percept history.
These agents succeed only in fully observable environments.
The simple reflex agent does not consider any part of the percept history during
its decision and action process.
The simple reflex agent works on condition-action rules, which means it maps
the current state to an action. For example, a room-cleaner agent acts only if
there is dirt in the room.
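A minimal Python sketch of condition-action rules for the room-cleaner example; the rule set is illustrative.

```python
# Sketch of a simple reflex agent: condition-action rules are matched
# against the current percept only; the first matching rule fires.

rules = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A",   "Right"),
    (lambda p: p["location"] == "B",   "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in rules:
        if condition(percept):
            return action
    return "NoOp"

# The agent acts only on what it perceives right now.
print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # Suck
print(simple_reflex_agent({"location": "B", "status": "Clean"}))  # Left
```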
Simple Reflex Agent
Limitations
They have very limited intelligence.
They do not have knowledge of non-perceptual parts of the
current state.
The set of condition-action rules can be too big to generate and store.
They are not adaptive to changes in the environment.
Model-based reflex agent
The model-based agent can work in a partially observable environment and track
the situation.
Model-based reflex agent
A model-based agent has two important factors:
Model: It is knowledge about "how things happen in the world," which is why the agent is
called model-based.
Internal State: It is a representation of the current state based on the percept history.
These agents have the model, which is knowledge of the world, and based on the model
they perform actions.
Model-based reflex agent
Updating the agent's internal state requires information about:
● How the world evolves
● How the agent's actions affect the world
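A hedged Python sketch of how such an internal state might be maintained; the model and update rules are illustrative (two-square vacuum world).

```python
# Sketch of a model-based reflex agent: an internal state summarizes the
# percept history using a simple model of a two-square world.

state = {}          # internal state: what the agent believes about each square
last_action = None

def update_state(state, last_action, percept):
    """Model: how the world evolves and how our own actions affect it."""
    location, status = percept
    state[location] = status
    if last_action == "Suck":
        state[location] = "Clean"        # our last action cleaned this square
    return state

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    location, _ = percept
    # Rules now consult the internal state, not only the raw percept.
    if state.get(location) == "Dirty":
        action = "Suck"
    elif len(state) == 2 and all(v == "Clean" for v in state.values()):
        action = "NoOp"                  # the model says everything is clean
    else:
        action = "Right" if location == "A" else "Left"
    last_action = action
    return action
```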
Goal-based agents
Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
Goal-based agents
The agent needs to know its goal, which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having
the "goal" information.
They choose actions so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such consideration of different
scenarios is called searching and planning, which makes an agent proactive.
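A small Python sketch of the searching/planning idea: the agent tries candidate action sequences against a model of their results and returns the first step of a plan that reaches the goal. The transition model and goal test are illustrative (two-square vacuum world).

```python
# Sketch of a goal-based agent: search over action sequences and pick one
# whose predicted outcome satisfies the goal.

from itertools import product

ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    """Predict the next state (location, dirt dict) after an action."""
    location, dirt = state
    dirt = dict(dirt)
    if action == "Suck":
        dirt[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
    return location, dirt

def goal_test(state):
    _, dirt = state
    return all(v == "Clean" for v in dirt.values())

def goal_based_agent(state, max_depth=3):
    # Planning: try action sequences of increasing length.
    for depth in range(1, max_depth + 1):
        for plan in product(ACTIONS, repeat=depth):
            s = state
            for action in plan:
                s = result(s, action)
            if goal_test(s):
                return plan[0]            # execute the first step of the plan
    return "NoOp"

start = ("A", {"A": "Dirty", "B": "Dirty"})
print(goal_based_agent(start))            # Suck
```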
Utility-based agents
These agents are similar to goal-based agents but add an extra component of
utility measurement, which distinguishes them by providing a measure of success
at a given state.
Utility-based agents
A utility-based agent acts based not only on goals but also on the best way to achieve the
goal.
The utility-based agent is useful when there are multiple possible alternatives and
the agent has to choose the best action among them.
The utility function maps each state to a real number that measures how efficiently each
action achieves the goals.
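A hedged Python sketch of a utility-based choice: each candidate action's predicted outcome is scored by a utility function and the highest-scoring action is chosen; the numbers and model are illustrative.

```python
# Sketch of a utility-based agent: score predicted outcomes with a utility
# function (a real number) and pick the best action.

def utility(state):
    """Map a state to a real number: cleaner squares, less energy used."""
    location, dirt, energy_used = state
    cleaned = sum(1 for v in dirt.values() if v == "Clean")
    return 10 * cleaned - energy_used

def result(state, action):
    location, dirt, energy_used = state
    dirt = dict(dirt)
    if action == "Suck":
        dirt[location] = "Clean"
        energy_used += 2
    elif action in ("Left", "Right"):
        location = "A" if action == "Left" else "B"
        energy_used += 1
    return location, dirt, energy_used

def utility_based_agent(state, actions=("Left", "Right", "Suck", "NoOp")):
    # Choose the action whose predicted resulting state has maximal utility.
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_agent(("A", {"A": "Dirty", "B": "Dirty"}, 0)))  # Suck
```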
Learning Agents
A learning agent in AI is a type of agent that can learn from its past
experiences; that is, it has learning capabilities.
Learning Agents
A learning agent has four main conceptual components:
Learning element: It is responsible for making improvements by learning from the
environment.
Critic: The learning element takes feedback from the critic, which describes how well
the agent is doing with respect to a fixed performance standard.
Performance element: It is responsible for selecting external actions.
Problem generator: This component is responsible for suggesting actions that
will lead to new and informative experiences.
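A schematic Python skeleton of these four components; the class and method names are illustrative, and the "learning" here is deliberately trivial.

```python
# Skeleton of a learning agent's four components (schematic sketch).
import random

class LearningAgent:
    def __init__(self):
        self.rules = {}                    # knowledge used by the performance element

    def performance_element(self, percept):
        """Select an external action using the current rules."""
        return self.rules.get(percept, "NoOp")

    def critic(self, percept, action, performance_standard):
        """Judge how well the agent is doing against a fixed standard."""
        return performance_standard(percept, action)   # e.g. a reward signal

    def learning_element(self, percept, action, feedback):
        """Improve the rules using the critic's feedback."""
        if feedback > 0:
            self.rules[percept] = action   # keep actions that worked well

    def problem_generator(self, actions):
        """Suggest exploratory actions that lead to new experiences."""
        return random.choice(actions)

    def step(self, percept, actions, performance_standard):
        """One cycle: act, get criticized, learn, and explore if needed."""
        action = self.performance_element(percept)
        feedback = self.critic(percept, action, performance_standard)
        self.learning_element(percept, action, feedback)
        if feedback <= 0:                  # nothing learned yet: explore
            return self.problem_generator(actions)
        return action
```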