Understanding Intelligent Agents: Concepts, Structure, and Applications

rashmi_mestri · 62 slides · Feb 11, 2025

About This Presentation

This document provides a comprehensive overview of Intelligent Agents, a crucial concept in Artificial Intelligence (AI). It explores the definition, purpose, and importance of intelligent agents, their interactions with environments, and the fundamental principles governing their behavior.



Slide Content

INTELLIGENT AGENTS
Ms. Rashmi Bhat
Asst. Professor, Dept. of Computer Engineering
St. John College of Engineering and Management

Contents
1. Agents and Environments
2. The Concept of Rationality
3. The Nature of Environments
4. The Structure of Agents
5. Types of Agents
6. Learning Agents
7. Problem Solving Agents
8. Formulating Problems

Introduction to Intelligent Agents
What are intelligent agents?
Definition:
Agents are entities that perceive environments via
sensors and act upon them through actuators.
Purpose:
Automate tasks, make decisions, and solve problems.
Applications:
Virtual assistants (e.g., Alexa, Siri).
Autonomous vehicles.
Industrial robots.

Importance of Intelligent Agents
Why Do Intelligent Agents Matter?
Intelligent agents are important because they:
Handle complex, dynamic environments efficiently.
Automate routine or repetitive tasks.
Drive innovation across industries like healthcare, finance, and logistics.
Examples:
Real-time traffic management systems.
Personalized shopping assistants.

Agents and Environments
Interacting with Environments
Components of an Agent:
Sensors: Gather environmental data (e.g., cameras,
microphones).
Actuators: Respond to the environment (e.g., motors,
speakers).
Environment: External conditions influencing the agent.
Examples:
Robotic arm with touch sensors and motors for
precision tasks.
Smart thermostat sensing temperature and
adjusting heating.

Agents and Environments
Fig. Agents interact with environments through sensors and actuators: percepts flow from the environment to the agent via sensors, and actions flow back via actuators.

An AI agent must follow four rules:
1. The agent must have the ability to perceive the environment.
2. The observations must be used to make decisions.
3. Decisions should result in an action.
4. The action taken by the agent must be a rational action.

Percepts are the content an agent’s sensors are
perceiving.
An agent’s percept sequence is the complete history of
everything the agent has ever perceived.
An agent’s behavior is described by the agent function that
maps any given percept sequence to an action.
The agent function for an artificial agent will be implemented
by an agent program.
An agent’s choice of action at any given instant can depend on
its built-in knowledge and on the entire percept sequence
observed to date, but not on anything it hasn’t perceived.

E.g. a robotic vacuum-cleaner world with just two locations.
This agent has two sensors:
1. Location (A or B)
2. Dirt (Dirty or Clean)
Actions allowed:
1. Move Left
2. Move Right
3. Suck dirt
Possible percepts:
[A, Dirty], [A, Clean], [B, Dirty], [B, Clean]

Partial tabulation of a simple agent function for the vacuum-cleaner world:

Percept Sequence            →  Action
[A, Clean]                  →  Move Right
[A, Dirty]                  →  Suck
[B, Clean]                  →  Move Left
[B, Dirty]                  →  Suck
[A, Clean], [A, Clean]      →  Move Right
[A, Clean], [A, Dirty]      →  Suck
...

What makes an agent good or bad, intelligent or stupid?
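The tabulated agent function above can be sketched directly as a lookup table. This is a minimal illustration, not the slides' code; it uses only the latest (location, status) percept, which matches the single-percept rows of the table.

```python
# Table-driven vacuum-world agent: map the current percept to an action.
# The tuple keys mirror the slide's percepts [Location, Status].
AGENT_TABLE = {
    ("A", "Clean"): "Move Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Move Left",
    ("B", "Dirty"): "Suck",
}

def reflex_vacuum_agent(percept):
    """Return the action the agent function assigns to this percept."""
    return AGENT_TABLE[percept]

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("B", "Clean")))   # Move Left
```

A full agent function would key on entire percept sequences; for this tiny world the latest percept already determines the action.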

Rationality: The Good Behavior
A rational agent is one that does the right thing.
A rational agent makes decisions that maximize its performance measure.
Performance measure:
Consequentialism: an agent's behavior is evaluated by its consequences.
The notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.

Rationality: The Good Behavior
Definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Evaluation criteria of rationality:
Performance measure.
Environment knowledge.
Available actions.
Percept sequence.

The Nature of Environments
Task environments are the "problems" to which rational agents are the "solutions."
The task environment of a rational agent is specified by a PEAS (Performance, Environment, Actuators, Sensors) description.
The first step in designing an agent is to specify the task environment as fully as possible.
E.g. a self-driving car.

The Nature of Environments
PEAS description for a self-driving car:
Performance measure: safe, fast, legal, comfortable trip.
Environment: roads, other traffic, pedestrians, weather.
Actuators: steering, accelerator, brake, signal, horn, display.
Sensors: cameras, lidar, radar, speedometer, GPS, accelerometer.

Write the PEAS Description for Each AI Agent:
1. Vacuum Cleaner Robot
2. Smart Thermostat
3. Chess Playing AI
4. Google Search Engine
5. Weather Forecasting System
6. Warehouse Management Robot
7. Drones for Delivery
8. Personal Voice Assistant
9. Online Shopping Recommendation System
10. Industrial Robot for Manufacturing
11. Traffic Signal Control System
12. Healthcare Monitoring System

Properties of Task Environments
Fully Observable: The agent has complete and accurate information about the current state of the environment. E.g. chess.
Partially Observable: The agent has incomplete, noisy, or missing data about the environment. E.g. driving in fog.

Properties of Task Environments
Deterministic: The next state of the environment is entirely predictable from the current state and the agent's actions. E.g. solving a puzzle.
Stochastic: Outcomes are uncertain, and actions can lead to probabilistic results. E.g. dice rolls in board games.

Properties of Task Environments
Episodic: The agent's experience is divided into episodes, and decisions in one episode do not affect others. E.g. spam filtering.
Sequential: Each action impacts future states, requiring a long-term strategy. E.g. driving, chess.

Properties of Task Environments
Static: The environment remains constant while the agent decides on an action. E.g. crossword puzzles.
Dynamic: The environment can change over time, even as the agent deliberates. E.g. real-time traffic systems.

Properties of Task Environments
Discrete: The environment has a finite set of states and actions. E.g. turn-based board games.
Continuous: The environment has infinite states or requires continuous control. E.g. robot motion control.

Properties of Task Environments
Single Agent: A system with a single intelligent agent that perceives and interacts with its environment. E.g. a vacuum-cleaning robot in a single room.
Multi-Agent: A system comprising multiple intelligent agents that can interact, collaborate, or compete. E.g. traffic management systems.

Properties of Task Environments
Known: All aspects, including rules, states, actions, and their outcomes, are completely understood by the agent at design time. E.g. chess.
Unknown: The agent has incomplete or no prior knowledge of the dynamics, rules, or outcomes of actions. E.g. autonomous driving.

Structure of Agents
The behavior of an agent is the action it performs after any given sequence of percepts.
The agent program implements the agent function (the mapping from percepts to actions).
The agent architecture is the computing device, equipped with physical sensors and actuators, on which the agent program runs.
Agent = Architecture + Program

Types of Agents
Four basic kinds of agent programs embody the principles underlying almost all intelligent systems:
1. Simple Reflex Agents
2. Model-based Reflex Agents
3. Goal-based Agents
4. Utility-based Agents

Simple Reflex Agents
Simple reflex agents select actions on the basis of the current percept, ignoring the rest of the percept history.
How it works:
These agents execute actions based on a set of predefined condition-action rules (if-then rules).
e.g. if car-in-front-is-braking then initiate-braking.
Examples:
A thermostat adjusting temperature.
A light sensor turning on a streetlight when it gets dark.
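The condition-action idea above can be sketched as a few if-then rules. The percept fields (`car_in_front_is_braking`, `light_level`) and the threshold are invented for illustration; only the braking rule comes from the slides.

```python
# A simple reflex agent: the action depends only on the current percept,
# matched against fixed condition-action rules.
def simple_reflex_agent(percept):
    if percept.get("car_in_front_is_braking"):
        return "initiate-braking"                 # rule from the slides
    if percept.get("light_level", 1.0) < 0.2:     # hypothetical rule
        return "turn-on-headlights"
    return "keep-driving"                          # default action

print(simple_reflex_agent({"car_in_front_is_braking": True}))  # initiate-braking
print(simple_reflex_agent({"light_level": 0.1}))               # turn-on-headlights
```

Because the agent keeps no state, identical percepts always produce identical actions.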


Simple Reflex Agents
Advantages:
Easy to design and implement.
Real-time responses to environmental changes.
Disadvantages:
No memory or state; cannot handle partial observability.
Ineffective in dynamic or complex environments.

Model-Based Reflex Agents
A model-based reflex agent uses an internal state to keep track of unobservable aspects of the environment.
It maintains a model of how the world evolves.
Two kinds of knowledge must be encoded in the agent program:
A transition model of the world.
A sensor model.
How it works:
Sense, model, reason, and act in stages.
Example: A robot vacuum navigating around obstacles.
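A minimal sketch of the internal-state idea: the agent folds each percept into a remembered world model, so its decision can rely on information the current percept no longer contains. The `obstacle_ahead` field and the rules are invented.

```python
class ModelBasedReflexAgent:
    """Keeps an internal state so rules can use remembered facts,
    not only the current percept. The world model here is a
    stand-in dictionary, not a real robot's."""

    def __init__(self):
        self.state = {}  # internal model of the world

    def update_state(self, percept):
        # Sensor model (simplified): fold the new percept into the model.
        self.state.update(percept)

    def act(self, percept):
        self.update_state(percept)
        # The rule consults the remembered state, not just the percept.
        if self.state.get("obstacle_ahead"):
            return "turn"
        return "forward"

agent = ModelBasedReflexAgent()
print(agent.act({"obstacle_ahead": False}))  # forward
print(agent.act({"obstacle_ahead": True}))   # turn
print(agent.act({}))                         # turn (obstacle is remembered)
```

The third call shows what a simple reflex agent cannot do: the percept is empty, but the stored state still drives the decision.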


Model-Based Reflex Agents
Advantages:
Effective in partially observable environments.
Provides more informed decision-making.
Disadvantages:
Computationally expensive to maintain models.
Models may not accurately capture real-world complexity.

Goal-Based Agents
Goal-based agents act to achieve specific goals using search algorithms and reasoning.
They use goals to decide on actions by evaluating future states.
Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
How it works:
Perceive, reason, act, evaluate, and achieve goals.
Example: A GPS navigation system planning the shortest route.
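"Evaluating future states" can be sketched with one-step lookahead: the agent simulates where each action would lead and picks the action whose successor is closest to the goal. The 1-D corridor world, the goal position, and the distance measure are all invented for illustration.

```python
GOAL = 5  # hypothetical goal position in a 1-D corridor

def result(position, action):
    """Transition model: where the agent would end up."""
    return position + (1 if action == "right" else -1)

def goal_based_agent(position):
    """Choose the action whose future state lies closest to the goal."""
    return min(["left", "right"],
               key=lambda a: abs(GOAL - result(position, a)))

pos, steps = 0, []
while pos != GOAL:
    a = goal_based_agent(pos)
    pos = result(pos, a)
    steps.append(a)
print(steps)  # ['right', 'right', 'right', 'right', 'right']
```

Real goal-based agents extend this lookahead to multi-step search (see the problem-solving agents later in the deck).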


Goal-Based Agents
Advantages:
Flexible and capable of long-term planning.
Easy to evaluate based on goal completion.
Disadvantages:
Limited to specific goals.
Ineffective in environments with many variables or changing goals.

Utility-Based Agents
Utility-based agents make decisions by maximizing a utility function that quantifies the desirability of outcomes.
They are capable of handling trade-offs and optimizing performance.
How it works:
Evaluate actions based on their expected utility.
Example: An autonomous car optimizing speed, safety, and fuel efficiency.
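Expected-utility maximization can be sketched in a few lines. The actions, outcome probabilities, and utility values below are invented; the point is the trade-off: speeding up is usually good but carries a small chance of a very bad outcome, so its expected utility loses to holding speed.

```python
# Each action maps to (probability, utility) pairs over its outcomes.
ACTIONS = {
    "speed_up":  [(0.9, 8), (0.1, -100)],  # fast, but risky
    "hold":      [(1.0, 5)],
    "slow_down": [(1.0, 3)],
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities."""
    return sum(p * u for p, u in outcomes)

def utility_based_agent(actions):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

print(utility_based_agent(ACTIONS))  # hold
```

Here speed_up scores 0.9*8 + 0.1*(-100) = -2.8, so the agent trades raw speed for safety, exactly the kind of trade-off the slide describes.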


Utility-Based Agents
Advantages:
Handles complex decision-making problems.
Provides flexibility in uncertain environments.
Disadvantages:
Requires accurate utility functions and high computation.
Difficult for humans to interpret or validate.

Learning Agents
Learning agents improve their performance over time by learning from experience.
They evolve and adjust their behavior for dynamic environments, which makes them more robust.
Example: Spam filters improving based on flagged emails and user feedback.
Components:
1. Performance Element: Executes actions.
2. Learning Element: Improves behavior based on feedback.
3. Critic: Evaluates performance and provides feedback.
4. Problem Generator: Explores new ways to improve.
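A minimal sketch of how the four components fit together. The two-action world, the learning rate, and the reward scheme are invented; the comments map each piece of code to the component it plays.

```python
class LearningAgent:
    """Toy learning agent: maintains a learned value per action."""

    def __init__(self, actions):
        self.q = {a: 0.0 for a in actions}  # learned value of each action

    def act(self):
        # Performance element: execute the best action learned so far.
        return max(self.q, key=self.q.get)

    def learn(self, action, reward):
        # Learning element: nudge the action's value toward the
        # reward signal supplied by the critic.
        self.q[action] += 0.5 * (reward - self.q[action])

agent = LearningAgent(["left", "right"])
# Problem generator: systematically try every action so the agent keeps
# exploring; the critic rewards "right" in this invented world.
for _ in range(10):
    for a in ("left", "right"):
        agent.learn(a, 1.0 if a == "right" else 0.0)
print(agent.act())  # right
```

After the feedback loop, the agent's behavior has changed purely through experience, which is the defining property the slide describes.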


Learning Agents
How it works:
Observe, learn, act, and adapt using feedback loops.
Advantages:
Continuously improves and evolves.
Adaptable to new and changing environments.
Disadvantages:
Requires significant data and computational resources.
Susceptible to biased or incorrect decision-making.

Solving Problems by Searching

Problem Solving Agents
A problem-solving agent is an agent that finds a sequence of actions forming a path to a goal state.
The computational process a problem-solving agent undertakes is called search.

Problem Solving Agents
The agent follows a four-phase problem-solving process:
1. Goal Formulation: Define the desired outcome.
2. Problem Formulation: Define states, actions, and goals.
3. Search: Find a sequence of actions that reaches the goal, called a solution.
4. Execution: Perform the actions one at a time.

Problem Solving Agents
E.g. a robot navigating a warehouse to pick up an item:
1. Goal Formulation: Reaching a specific shelf location and grabbing a particular item.
2. Problem Formulation: The robot's current position, possible actions (move forward, turn left/right), constraints (avoid obstacles), and the goal test (reaching the target item).
3. Search: Exploring different paths through the warehouse to find the most efficient sequence of actions to reach the goal.
4. Execution: The robot physically moving through the warehouse along the chosen path to pick up the item.

Problem Solving Agents
Search Problems and Solutions
A search problem can be defined formally by the following:
State space
Initial state
Goal state
Actions
Transition model
Action cost function
Path
Solution
Optimal solution
A state space can be represented as a graph in which the vertices are states and the directed edges between them are actions.

Search Problems and Solutions
State space: The set of possible states that the environment can be in.
Initial state: The state that the agent starts in.
Goal: A set of one or more goal states.
Actions: The actions available to the agent. Given a state s, ACTIONS(s) returns a finite set of actions that can be executed in s. Each of these actions is said to be applicable in s.

Search Problems and Solutions
Transition model: Describes what each action does. RESULT(s, a) returns the state that results from doing action a in state s.
Action cost function: Denoted ACTION-COST(s, a, s'), it gives the numeric cost of applying action a in state s to reach state s'.
Path: A sequence of actions forms a path.
Solution: A path from the initial state to a goal state.
An optimal solution has the lowest path cost among all solutions.
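The formal components above can be written out for a tiny invented problem. The three-state world, action names, and costs are made up purely to exercise the ACTIONS / RESULT / ACTION-COST vocabulary; the two paths at the end illustrate a solution versus an optimal solution.

```python
# Transition model: RESULT(s, a) -> s', stored as a dictionary.
RESULT = {("S", "go-x"): "X", ("X", "go-g"): "G", ("S", "go-g-direct"): "G"}

def actions(s):
    """ACTIONS(s): the actions applicable in state s."""
    return [a for (state, a) in RESULT if state == s]

def action_cost(s, a, s2):
    """ACTION-COST(s, a, s'): numeric cost of taking a in s."""
    return {"go-x": 1, "go-g": 1, "go-g-direct": 3}[a]

def path_cost(start, plan):
    """Follow a path of actions; return the final state and total cost."""
    s, total = start, 0
    for a in plan:
        s2 = RESULT[(s, a)]
        total += action_cost(s, a, s2)
        s = s2
    return s, total

# Two solutions from initial state "S" to the goal "G":
print(path_cost("S", ["go-x", "go-g"]))   # ('G', 2)  <- optimal solution
print(path_cost("S", ["go-g-direct"]))    # ('G', 3)  <- a solution, not optimal
```

Both paths reach the goal, so both are solutions; the first has the lower path cost, so it is the optimal one.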

Formulating Problems
A model is an abstract mathematical description of a real-world situation.
A good problem formulation has the right level of abstraction (the process of removing detail from a representation).
An abstraction is valid if we can elaborate any abstract solution into a solution in the more detailed world.
An abstraction is useful if carrying out each of the actions in the solution is easier than solving the original problem.

Example Problems
A standardized (toy) problem is intended to illustrate or exercise various problem-solving methods.
A real-world problem, such as robot navigation, is one whose solutions people actually use, and whose formulation is idiosyncratic, not standardized.
A grid world problem is a two-dimensional rectangular array of square cells in which agents can move from cell to cell.

Vacuum Cleaner World
Problem: An agent (a vacuum cleaner) operating in a simple environment where it must clean dirt from a set of locations.
STATE SPACE:
The state space of the world says which objects are in which locations.
A state is represented as (L, R, P), where:
L represents the status of the left location (Clean or Dirty).
R represents the status of the right location (Clean or Dirty).
P represents the current position of the vacuum (Left or Right).
Example states:
(Dirty, Dirty, Left) → both locations are dirty, and the vacuum is at the left.
(Clean, Dirty, Right) → left is clean, right is dirty, and the vacuum is at the right.

Vacuum Cleaner World
INITIAL STATE:
Any valid combination of dirt distribution and vacuum position.
Assume the initial state is (Dirty, Dirty, Left).
GOAL STATE:
The environment is completely clean, i.e., (Clean, Clean, Left) or (Clean, Clean, Right).

Vacuum Cleaner World
ACTIONS:
The vacuum cleaner can perform three actions:
Move Left: Moves to the left location if it is at the right.
Move Right: Moves to the right location if it is at the left.
Suck: Cleans the current location if it is dirty.
TRANSITION MODEL:
If the action is Move Left, the vacuum moves to the left location.
If the action is Move Right, the vacuum moves to the right location.
If the action is Suck, the current location is cleaned.

Vacuum Cleaner World
ACTION COST:
Each action (Move Left, Move Right, Suck) has a cost of 1.
The total path cost is the sum of the costs of all actions taken to reach the goal state.
PATH:
(Dirty, Dirty, Left) → (Clean, Dirty, Left) → (Clean, Dirty, Right) → (Clean, Clean, Right)
Total Cost = 3 (three unit-cost actions: Suck, Move Right, Suck)
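The formulation above is small enough to solve by breadth-first search. This sketch encodes states as (left, right, position) tuples as on the slides; the transition function and BFS loop are one possible implementation, and the search recovers the three-action plan shown in the PATH.

```python
from collections import deque

def transitions(state):
    """Yield (action, next_state) pairs for the two-location world."""
    left, right, pos = state
    yield ("Move Left",  (left, right, "Left"))
    yield ("Move Right", (left, right, "Right"))
    if pos == "Left":
        yield ("Suck", ("Clean", right, pos))
    else:
        yield ("Suck", (left, "Clean", pos))

def is_goal(state):
    return state[0] == "Clean" and state[1] == "Clean"

def solve(initial):
    """Breadth-first search; returns a minimum-length action plan."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        for action, nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

plan = solve(("Dirty", "Dirty", "Left"))
print(plan, "cost =", len(plan))  # ['Suck', 'Move Right', 'Suck'] cost = 3
```

With unit action costs, the plan length is the total path cost, so the search also answers the slide's "Total Cost" question: 3.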

8-Puzzle Problem
Problem: Arrange tiles 1-8 in order by sliding tiles into the blank space.
STATE SPACE:
A state can be represented as a 3×3 matrix (9 cells) with one blank tile.
INITIAL STATE:
Any random arrangement, e.g.:
2 8 3
1 6 4
7 _ 5
GOAL STATE:
The ordered 3×3 grid:
1 2 3
4 5 6
7 8 _

8-Puzzle Problem
ACTIONS:
The empty space can move in four directions (if not blocked by an edge):
1. Up (if not in the top row)
2. Down (if not in the bottom row)
3. Left (if not in the leftmost column)
4. Right (if not in the rightmost column)
Each move swaps the empty space with an adjacent tile.

8-Puzzle Problem
TRANSITION MODEL:
Applying an action results in a new state in which the blank tile has moved.
Example:
If the blank tile moves left, it swaps with the tile to its left.
If the blank tile moves down, it swaps with the tile below it.
COST:
Uniform cost model: each move has a cost of 1.
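The ACTIONS and TRANSITION MODEL above translate directly into code. This sketch represents a state as a 9-tuple read row by row, with 0 standing for the blank; the tuple encoding and offset table are implementation choices, not from the slides.

```python
# Index offsets for moving the blank within the flattened 3x3 board.
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    """Blank-tile moves that stay on the board (ACTIONS(s))."""
    row, col = divmod(state.index(0), 3)
    legal = []
    if row > 0: legal.append("Up")
    if row < 2: legal.append("Down")
    if col > 0: legal.append("Left")
    if col < 2: legal.append("Right")
    return legal

def result(state, action):
    """Swap the blank with the adjacent tile (RESULT(s, a), unit cost)."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)  # the slide's initial state, blank = 0
print(actions(start))                 # ['Up', 'Left', 'Right']
print(result(start, "Up"))            # (2, 8, 3, 1, 0, 4, 7, 6, 5)
```

The `result(start, "Up")` call reproduces the first step of the slide's path: the blank swaps with the 6 above it.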

8-Puzzle Problem
PATH:
A solution path runs from the initial state to the goal state, one blank-tile move per step:

Initial State            Goal State
2 8 3                    1 2 3
1 6 4     →  ...  →      4 5 6
7 _ 5                    7 8 _

COST: 15 - 25

Missionaries and Cannibals

Missionaries and Cannibals
STATE SPACE:
State: (M_left, C_left, B, M_right, C_right), where M and C represent missionaries and cannibals, and B represents the boat position.
INITIAL STATE:
All missionaries, all cannibals, and the boat are on the left side of the river: (3, 3, Left, 0, 0).
GOAL STATE:
All missionaries, all cannibals, and the boat are on the right side of the river: (0, 0, Right, 3, 3).

Missionaries and Cannibals
TRANSITION MODEL:
Possible crossings (missionaries may never be outnumbered by cannibals on either bank):
Two missionaries cross from left to right.
Two cannibals cross from left to right.
One missionary and one cannibal cross from left to right.
One missionary crosses from left to right.
One cannibal crosses from left to right.
The boat moves back to the left side with one or two people.

Missionaries and Cannibals
COST:
Uniform cost model: each crossing has a cost of 1.
PATH:
(3,3,Left,0,0) → (2,2,Right,1,1) → (3,2,Left,0,1) → … → (0,0,Right,3,3)
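The puzzle can be solved with the same breadth-first search pattern. As a simplification of the slide's five-tuple, this sketch stores only the left bank plus the boat side (the right bank is implied by 3 minus the left counts); the crossing list mirrors the transition model above.

```python
from collections import deque

def safe(m, c):
    """Missionaries are never outnumbered on either bank."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    """Legal crossings: 1-2 people move with the boat."""
    m, c, side = state
    d = -1 if side == "Left" else 1   # boat leaves the side it is on
    for dm, dc in [(2, 0), (0, 2), (1, 1), (1, 0), (0, 1)]:
        m2, c2 = m + d * dm, c + d * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2):
            yield (m2, c2, "Right" if side == "Left" else "Left")

def solve():
    """Breadth-first search from (3,3,Left) to (0,0,Right)."""
    start, goal = (3, 3, "Left"), (0, 0, "Right")
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

path = solve()
print(len(path) - 1, "crossings")  # 11 crossings (the classic minimum)
```

Each state in the returned path can be expanded back to the slide's (M_left, C_left, B, M_right, C_right) form by computing the right-bank counts.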

List of Problems to Formulate:
1. Tower of Hanoi
2. Water Jug Problem
3. Travelling Salesman Problem (TSP)
4. N-Queens Problem
5. Shortest Path in a Graph
6. Warehouse Robot Navigation
7. Factory Assembly Line Optimization
8. Touring Problem