Lecture Note on Introduction to Intelligent Agents


Lecture Two
Intelligent Agents
The CSC415 Team

Lecture Outline
▪ What is an Agent?
▪ Percept Sequence
▪ The Concept of Rationality
▪ Performance Measures, Rationality, Omniscience, Learning, Autonomy
▪ The Nature of Environments – PEAS
▪ Properties of Task Environments
▪ Fully Observable vs Partially Observable | Single-agent vs Multiagent | Deterministic vs Nondeterministic | Episodic vs Sequential | Static vs Dynamic | Discrete vs Continuous
▪ The Structure of Agents and Types
▪ Simple Reflex Agents
▪ Model-based Reflex Agents
▪ Goal-based Agents
▪ Utility-based Agents
▪ Learning Agents
▪ How the Components of Agent Programs Work
▪ Atomic | Factored | Structured Representations
▪ Summary

Main Text for this Course
Title: Artificial Intelligence: A Modern Approach (4th Edition, 2020)
Authors: Stuart Russell and Peter Norvig

Preamble
▪ The previous lecture presented the concept of Rational Agents as central to our approach to AI.
▪ The concept of Rationality can be applied to a wide variety of agents operating in any imaginable environment.
▪ This concept will be applied to develop a small set of Design Principles for building successful agents—systems that can reasonably be called intelligent.


What is an Agent?
▪ An Agent is anything that can be viewed as perceiving its Environment through Sensors and acting upon that environment through Actuators.
▪ Robotic Agents:
▪ Sensors: cameras, infrared range finders
▪ Actuators: motors
▪ Software Agents:
▪ Sensory inputs: file contents, network packets, and human input
▪ Acts on the environment by writing files, sending network packets, and displaying information or generating sounds
▪ The environment could be the entire universe!
▪ In practice, it is just the subset of the universe whose state we care about.


Percept Sequence
▪ The term Percept refers to what the agent's sensors are perceiving.
▪ Percept Sequence: an agent's complete history of everything the agent has ever perceived.
▪ An agent's choice of action depends on:
▪ Its built-in knowledge
▪ The entire percept sequence observed to date, but not on anything it has not perceived
▪ Mathematically, an agent's behavior is described by the Agent Function, which maps any given percept sequence to an action.
▪ Agent Function: an abstract mathematical description
▪ Agent Program: a concrete implementation, running within some physical system (see the sketch below)
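To make the distinction concrete, here is a minimal Python sketch (all names are illustrative assumptions, not from a specific library): the agent function is an abstract mapping over complete percept sequences, while the agent program receives one percept at a time and must keep any history it needs.

```python
# Agent function: an abstract mapping from *complete* percept sequences
# to actions. For a two-square vacuum world it could be written as a
# (partial) lookup table.
agent_function = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('B', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    # ... one entry for every possible percept sequence
}

# Agent program: a concrete procedure that receives one percept at a
# time and must remember the percept sequence itself if it needs it.
percepts = []

def agent_program(percept):
    percepts.append(percept)                 # remember the history
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'
```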


Good Behavior: The Concept of Rationality
▪ A Rational Agent is one that does the Right Thing.
▪ Doing the right thing is better than doing the wrong thing.
▪ But what does it mean to do the right thing? The answer involves:
▪ Performance Measures
▪ Rationality
▪ Omniscience
▪ Learning
▪ Autonomy

The Concept of Rationality
Performance Measures
▪ There are various notions of the “right thing” in moral philosophy.
▪ AI has generally stuck to Consequentialism: an agent's behavior is evaluated by its Consequences.
▪ Recall: an agent generates a percept sequence from an environment, and its sequence of actions causes the environment to go through a sequence of states.
▪ A desirable sequence of states means good performance.
▪ Performance Measure: evaluates any given sequence of environment states.
▪ It is usually in the mind of the designer or the users of the machine; machines do not have desires and preferences of their own.
▪ Example: a vacuum cleaner whose performance measure is how clean it keeps the floor.

The Concept of Rationality
Rationality
▪ For each possible percept sequence, a rational agent should select an action that is expected to maximize its Performance Measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
▪ What is rational at any given time depends on four things:
▪ The Performance Measure that defines the criterion of success
▪ The agent's prior knowledge of the environment
▪ The actions that the agent can perform
▪ The agent's percept sequence to date

Example of Rationality | Vacuum-Cleaner
▪ The vacuum-cleaner agent is rational if:
▪ The Performance Measure awards one point for each clean square at each time step, over a “lifetime” of 1000 time steps.
▪ The “geography” of the environment is known a priori, but the dirt distribution and the initial location of the agent are not.
▪ Clean squares stay clean, and sucking cleans the current square.
▪ The Right and Left actions move the agent one square, except when this would take the agent outside the environment, in which case the agent remains where it is.
▪ The only available actions are Right, Left, and Suck.
▪ The agent correctly perceives its location and whether that location contains dirt.
▪ A minimal simulation of this specification is sketched below.
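The following Python sketch simulates the specification above for a two-square world (the square names, random seeding, and the simple agent itself are illustrative assumptions, not part of the specification):

```python
import random

def reflex_vacuum_agent(location, dirty):
    """Decide using only the current percept (location, dirty)."""
    if dirty:
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

def run(steps=1000, seed=0):
    rng = random.Random(seed)
    # Dirt distribution and initial location are not known a priori.
    dirt = {'A': rng.choice([True, False]), 'B': rng.choice([True, False])}
    location = rng.choice(['A', 'B'])
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, dirt[location])
        if action == 'Suck':
            dirt[location] = False     # sucking cleans the current square
        elif action == 'Right':
            location = 'B'             # moving past the edge is a no-op
        elif action == 'Left':
            location = 'A'
        # Performance measure: one point per clean square per time step.
        score += sum(not d for d in dirt.values())
    return score

print(run())   # close to 2000 once both squares are cleaned early on
```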

The Concept of Rationality
Omniscience
▪ There is a distinction between Rationality and Omniscience.
▪ An Omniscient Agent knows the actual outcome of its actions and can act accordingly; this is impossible in reality.
▪ For example: John, walking along the road and being rational, decides to cross the street to exchange pleasantries with an old friend on the other side.
▪ Meanwhile, at 33,000 feet, a cargo door falls off a passing airliner and flattens John before he makes it across.
▪ Was John irrational to cross the street? It is unlikely that John's obituary would read “Idiot attempts to cross street”.
▪ This example shows that Rationality is not the same as Perfection:
▪ Rationality maximizes Expected Performance.
▪ Perfection maximizes Actual Performance.

The Concept of Rationality
Learning
▪ A Rational Agent not only gathers information but also learns as much as possible from what it perceives.
▪ The agent's initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented.
▪ There are extreme cases in which the environment is completely known a priori and completely predictable.
▪ In such cases, the agent need not perceive or learn; it simply acts correctly.
▪ Such agents are Fragile.

The Concept of Rationality
Autonomy
▪ A rational agent should be Autonomous: it should learn what it can to compensate for Partial or Incorrect Prior Knowledge.
▪ An agent lacks Autonomy to the extent that it relies on the prior knowledge of its designer rather than on its own percepts and learning processes.
▪ For example, a more rational vacuum cleaner is one that learns to predict where and when additional dirt will appear.
▪ After sufficient experience of its environment, the behavior of a rational agent can become effectively independent of its prior knowledge.


The Nature of Environments
▪ Task Environments are the “problems” to which rational agents are the “solutions.”
▪ A Task Environment is specified by PEAS:
▪ Performance measure
▪ Environment
▪ Actuators
▪ Sensors
▪ In designing an agent, the first step must always be to specify the task environment as fully as possible.
▪ Consider the example of a Taxi Driving Agent, whose PEAS description is sketched below.
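As a hedged sketch of the taxi's PEAS description, in the spirit of the standard Russell & Norvig example (the entries are representative rather than exhaustive, and the exact wording varies by edition):

```python
# A representative PEAS specification for an automated taxi.
taxi_peas = {
    'Performance': ['safe', 'fast', 'legal', 'comfortable trip',
                    'maximize profits'],
    'Environment': ['roads', 'other traffic', 'pedestrians', 'customers'],
    'Actuators':   ['steering', 'accelerator', 'brake', 'signal',
                    'horn', 'display'],
    'Sensors':     ['cameras', 'GPS', 'speedometer', 'odometer',
                    'accelerometer', 'engine sensors', 'keyboard'],
}
```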


6 Properties of Task Environments
▪ Fully Observable vs Partially Observable
▪ Single-agent vs Multiagent
▪ Deterministic vs Nondeterministic
▪ Episodic vs Sequential
▪ Static vs Dynamic
▪ Discrete vs Continuous

Properties of Task Environments
Fully Observable vs Partially Observable
▪ Fully Observable: the agent's sensors give it access to the complete state of the environment at each point in time.
▪ The sensors detect all aspects that are relevant to the choice of action, as determined by the Performance Measure.
▪ Fully observable environments are convenient: the agent need not maintain any internal state to keep track of the world.
▪ An environment might be Partially Observable because of noisy and inaccurate sensors or missing sensory data.
▪ For example, an automated taxi cannot see what other drivers are thinking.
▪ If the agent has no sensors at all, then the environment is Unobservable.

Properties of Task Environments
Single-agent vs Multiagent
▪ This property describes the number of agents in the environment, e.g.:
▪ Single-agent environment: an agent solving a crossword puzzle by itself
▪ Two-agent environment: an agent playing chess
▪ However, there are some subtle issues in deciding what counts as an agent:
▪ Should agent A (the taxi driver) treat object B (another vehicle) as an agent, or as an object behaving according to the laws of physics?
▪ Key distinction: can B's behavior be best described as maximizing a performance measure whose value depends on agent A's behavior?

Single-agent vs Multiagent
Some Examples
▪ In the chess-playing environment:
▪ The opponent entity B is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A's performance measure.
▪ Thus, chess is a Competitive Multiagent Environment.
▪ In the taxi-driving environment:
▪ Avoiding collisions maximizes the performance measure of all agents, so it is a Partially Cooperative Multiagent Environment.
▪ It is also Partially Competitive: for example, only one car can occupy a parking space.

Properties of Task Environments
Deterministic vs Nondeterministic
▪ Deterministic Environment: the next state of the environment is completely determined by the current state and the action executed by the agent.
▪ Otherwise, it is Nondeterministic.
▪ In principle, an agent need not worry about uncertainty in a Fully Observable and Deterministic environment.
▪ A Partially Observable environment could appear to be Nondeterministic.
▪ Most real situations are so complex that it is impossible to keep track of all the unobserved aspects; for practical purposes, they must be treated as nondeterministic.

Deterministic Vs Nondeterministic
Some Examples
▪Taxi driving is clearly Nondeterministic, because
▪One can never predict the behavior of traffic exactly;
▪One’s tires may blow out unexpectedly
▪One’s engine may seize up without warning
▪The vacuum world as we described it is Deterministic
▪But variations can include nondeterministic elements such as
randomly appearing dirt and an unreliable suction mechanism

Properties of Task Environments
Episodic vs Sequential
▪ Episodic: the agent's experience is divided into Atomic Episodes.
▪ In each episode the agent receives a percept and then performs a single action.
▪ Crucially, the next episode does not depend on the actions taken in previous episodes.
▪ Many classification tasks are episodic.
▪ Sequential: the current decision could affect all future decisions.
▪ Chess and taxi driving are sequential: short-term actions can have long-term consequences.
▪ Episodic environments are much simpler than sequential environments, because the agent does not need to think ahead.

Properties of Task Environments
Static vs Dynamic
▪ Dynamic: the environment can change while an agent is deliberating; otherwise, it is Static.
▪ Static environments are easy to deal with because the agent need not:
▪ Keep looking at the world while it is deciding on an action
▪ Worry about the passage of time
▪ Dynamic environments are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing.
▪ Semidynamic: the environment itself does not change with the passage of time, but the agent's performance score does.

Static Vs Dynamic
Examples
▪Taxi driving is clearly Dynamic
▪The other cars and the taxi itself keep moving while the driving
algorithm dithers about what to do next.
▪Chess, when played with a clock, is Semidynamic
▪Crossword puzzles are Static

Properties of Task Environments
Discrete vs Continuous
▪ The discrete/continuous distinction applies to the state of the environment, to the way Time is handled, and to the Percepts and Actions of the agent.

Discrete vs Continuous
Examples
▪ The chess environment has a finite number of distinct states (excluding the clock).
▪ Chess also has a discrete set of percepts and actions.
▪ Taxi driving is a continuous-state and continuous-time problem:
▪ The speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time.
▪ Taxi-driving actions are also continuous (steering angles, etc.).
▪ Input from digital cameras is discrete, strictly speaking, but is typically treated as representing continuously varying intensities and locations.

Hardest Environment for Agents
▪ The hardest case is an environment that is:
▪ Partially Observable,
▪ Multiagent,
▪ Nondeterministic,
▪ Sequential,
▪ Dynamic, and
▪ Continuous.
▪ Taxi driving is hard in all these senses.


The Structure of Agents
▪ The job of AI is to design an Agent Program that implements the Agent Function: the mapping from Percepts to Actions.
▪ We assume this program will run on some sort of computing device with physical Sensors and Actuators; this is called the Agent Architecture.
▪ Agent = Architecture + Program (a sketch of this split appears below)
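A minimal sketch of the architecture's job in code (the loop mirrors the description on the next slide; the function names are illustrative placeholders, not a real robotics API):

```python
# The architecture makes percepts available, runs the program, and
# feeds the program's action choices to the actuators.
def run_agent(sensors, agent_program, actuators):
    while True:
        percept = sensors()               # read the physical sensors
        action = agent_program(percept)   # the program picks an action
        actuators(action)                 # execute the chosen action
```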

Agent Architecture
▪ The Program must be appropriate for the architecture: if the program is going to recommend actions like Walk, the architecture had better have legs.
▪ The Architecture might be just an ordinary PC, or it might be a robotic car with several onboard computers, cameras, and other sensors.
▪ In general, the Architecture:
▪ Makes the percepts from the sensors available to the program
▪ Runs the program
▪ Feeds the program's action choices to the actuators as they are generated

Agent Programs
▪ The agent program has no choice but to take just the Current Percept as input, because nothing more is available from the environment.
▪ If the agent's actions depend on the entire percept sequence, the agent will have to remember the percepts.

To Study
▪ Why are table-driven approaches to agent construction doomed to failure?

4 Types of Agent Programs
▪ There are four basic kinds of Agent Programs that embody the principles underlying almost all intelligent systems:
▪ Simple Reflex Agents
▪ Model-based Reflex Agents
▪ Goal-based Agents
▪ Utility-based Agents


Simple Reflex Agents
▪ These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
▪ They are the simplest kind of agent.
▪ For example, the vacuum agent's decision is based only on the Current Location and on whether that location contains dirt.

Simple Reflex Agents
▪ Simple reflex agents are simple, but they have Limited Intelligence; they will work only if:
▪ The correct decision can be made based on just the current percept
▪ The environment is Fully Observable
▪ Even a little bit of unobservability can cause serious trouble.
▪ For example, the braking rule assumes that the condition car-in-front-is-braking can be determined from the current percept—a single frame of video.
▪ This works if the car in front has a centrally mounted and uniquely identifiable brake light.
▪ But what happens if the brake lights are faulty, differently configured, or nonexistent?
▪ A simple reflex agent driving behind such a car would either brake continuously and unnecessarily, or, worse, never brake at all.
▪ The general shape of such an agent is sketched below.
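A hedged sketch in the spirit of the textbook's SIMPLE-REFLEX-AGENT pseudocode (the rule representation and helper names are illustrative assumptions):

```python
# Condition-action rules: each maps an abstracted state description
# to an action. Illustrative entries only.
rules = {
    'car-in-front-is-braking': 'initiate-braking',
    # ... further condition-action rules
}

def interpret_input(percept):
    """Abstract the raw percept (e.g. a camera frame) into a state
    description such as 'car-in-front-is-braking'. Placeholder."""
    return percept

def simple_reflex_agent(percept):
    state = interpret_input(percept)   # no memory of past percepts
    return rules.get(state)            # matching rule -> action (or None)
```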


Model-Based Reflex Agents
▪ An effective way to handle Partial Observability is for the agent to keep track of the part of the world it can't see now.
▪ The agent should maintain some sort of Internal State that depends on the Percept History and thereby reflects at least some of the unobserved aspects of the current state.
▪ For the braking problem, the internal state is not too extensive: just the previous frame from the camera, allowing the agent to detect when two red lights at the edge of the vehicle go on or off simultaneously.
▪ For other driving tasks such as changing lanes, the agent needs to keep track of where the other cars are if it can't see them all at once.

Model-Based Reflex Agents
▪ Updating this internal state information as time goes by requires two kinds of knowledge:
▪ First: information about how the world works or changes over time, in two parts:
▪ The effects of the agent's own actions, e.g. when the agent turns the steering wheel clockwise, the car turns to the right
▪ How the world evolves independently of the agent
▪ This knowledge is called the Transition Model of the world.
▪ Second: information about how the state of the world is captured in the agent's percepts.
▪ E.g. when the car in front initiates braking, one or more illuminated red regions appear in the forward-facing camera image.
▪ This knowledge is called a Sensor Model.

Model-Based Reflex Agents
▪ The Transition Model and the Sensor Model together enable a Model-Based Agent to keep track of the state of the world, to the extent possible given the limitations of the agent's sensors (see the sketch below).
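A hedged sketch in the spirit of the textbook's MODEL-BASED-REFLEX-AGENT pseudocode (the state representation and helper functions are illustrative placeholders):

```python
state = {}           # agent's best guess at the current world state
last_action = None   # most recent action taken, initially none

def update_state(state, last_action, percept):
    """Fold together the old state, the Transition Model (what
    last_action and the world's own dynamics changed), and the Sensor
    Model (what the new percept implies) into a new state estimate."""
    return dict(state, last_percept=percept)   # placeholder update

def rule_match(state):
    """Pick the action of the first rule whose condition matches;
    a real agent would use condition-action rules as sketched earlier."""
    return 'no-op'

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    last_action = rule_match(state)
    return last_action
```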


Goal-Based Agents
▪ Knowledge of the current state of the environment is not always enough to decide what to do.
▪ E.g. at a road junction, the taxi can turn left, turn right, or go straight on; the correct decision depends on where the taxi is trying to get to.
▪ In addition to the current state description, the agent needs some sort of Goal Information that describes desirable situations.
▪ Combined with the model maintained by a Model-Based Reflex Agent, goal information lets the agent choose actions that achieve the goal.

Goal-Based Agents
▪ Goal-based action selection may be straightforward, when goal satisfaction results immediately from a single action.
▪ Sometimes it is more complex, e.g. considering long sequences of twists and turns in order to find a way to achieve the goal.
▪ Search and Planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals; a toy example is sketched below.
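A hedged sketch of goal-based action selection: search for an action sequence that reaches a goal state. The road network, the names, and the use of breadth-first search are invented for illustration; real planners are far more sophisticated.

```python
from collections import deque

def plan(start, goal, successors):
    """Breadth-first search returning a list of actions from start to
    goal, or None. successors(state) yields (action, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Toy road network: at each junction the taxi can take a named turn.
roads = {
    'A': [('left', 'B'), ('straight', 'C')],
    'B': [('right', 'D')],
    'C': [('left', 'D')],
    'D': [],
}
print(plan('A', 'D', lambda s: roads[s]))   # -> ['left', 'right']
```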

Goal-Based Agents
▪ Goal-based agents act differently from reflex agents: they involve consideration of the future.
▪ What will happen if I do such-and-such?
▪ Will that make me happy?
▪ In reflex agent designs, this information is not explicitly represented: built-in rules map directly from Percepts to Actions.
▪ A reflex agent brakes when it sees brake lights, with no idea why.
▪ A goal-based agent brakes when it sees brake lights because that is the only action it predicts will achieve its goal of not hitting other cars.

Goal-Based Agents
▪ The goal-based agent may appear less efficient, but it is more Flexible, because the knowledge that supports its decisions is represented explicitly and can be modified.
▪ For example, a goal-based agent's behavior can easily be changed to go to a different destination simply by specifying that destination as the goal.
▪ The reflex agent's rules for when to turn and when to go straight will work only for a single destination; they must all be replaced to go somewhere new.


Utility-Based Agents
▪ Goals alone are not enough to generate high-quality behavior in most environments.
▪ E.g. many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others.
▪ Goals only provide a crude binary distinction between “happy” and “unhappy” states.
▪ A general performance measure for agents should allow a comparison of different world states with varying degrees of happiness.
▪ Because “happy” does not sound scientific, economists and computer scientists use the term Utility instead.

Utility-Based Agents
▪ The performance measure assigns a score to any given sequence of environment states, so it can easily distinguish between more and less desirable ways of getting to the taxi's destination.
▪ An agent's Utility Function is essentially an internalization of the performance measure.
▪ Provided that the internal utility function and the external performance measure agree, an agent that chooses actions to maximize its utility will be rational according to the external performance measure.

Utility-Based Agents
▪ Utility is not the only way to be rational: e.g. a rational vacuum agent doesn't have a utility function.
▪ Like goal-based agents, a utility-based agent has the advantage of Flexibility and Learning.
▪ However, there are scenarios where goals are inadequate but a utility-based agent can still make rational decisions:
▪ Conflicting goals (e.g. Speed vs Safety): the utility function specifies the appropriate tradeoff.
▪ Multiple goals, none of which can be achieved with certainty: utility provides a way in which the likelihood of success can be weighed against the importance of the goals.

Utility-Based Agents
▪ A rational utility-based agent chooses the action that maximizes the Expected Utility of the action outcomes: the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome (see the sketch below).
▪ A utility-based agent has to model and keep track of its environment; this has led to a great deal of research on Perception, Representation, Reasoning, and Learning.
▪ Perfect rationality is usually unachievable in practice because of computational complexity.
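A hedged sketch of expected-utility action selection (the outcome model and all the numbers are invented for illustration):

```python
# The outcome model maps each action to a list of (probability, utility)
# pairs describing its possible outcomes.
def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def choose_action(outcome_model):
    """Pick the action whose outcome distribution has the highest EU."""
    return max(outcome_model,
               key=lambda a: expected_utility(outcome_model[a]))

outcome_model = {
    'highway':   [(0.9, 10.0), (0.1, -30.0)],  # fast, small risk of delay
    'back_road': [(1.0, 4.0)],                 # slower but certain
}
print(choose_action(outcome_model))   # highway: EU = 9 - 3 = 6 > 4
```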


Learning Agents
▪ Learning is now the preferred method for creating state-of-the-art systems in many areas of AI.
▪ Any type of agent (model-based, goal-based, utility-based, etc.) can be built as a Learning Agent (or not).
▪ Learning has advantages: it allows the agent to:
▪ Operate in initially unknown environments
▪ Become more competent than its initial knowledge alone might allow

Four Elements of a Learning Agent
▪ A learning agent has four conceptual components (wired together as sketched below):
▪ Learning Element: responsible for making improvements.
▪ Critic: provides feedback to the learning element on how the agent is doing and determines how the performance element should be modified to do better in the future.
▪ Performance Element: responsible for selecting external actions; this is what we have previously considered as the agent types.
▪ Problem Generator: suggests actions that will lead to new and informative experiences.
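A hedged sketch of how the four components might be wired together (the class structure and method names are illustrative assumptions, not from the textbook or any library):

```python
class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance = performance_element      # selects actions
        self.learning = learning_element            # makes improvements
        self.critic = critic                        # scores behavior
        self.problem_generator = problem_generator  # suggests experiments

    def step(self, percept):
        feedback = self.critic(percept)               # how are we doing?
        self.learning(self.performance, feedback)     # improve the agent
        action = self.performance(percept)            # normal behavior
        exploratory = self.problem_generator(percept)  # try something new?
        return exploratory if exploratory is not None else action
```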


How the Components of Agent Programs Work
▪ Agent programs answer questions such as:
▪ What is the world like now?
▪ What action should I do now?
▪ What do my actions do?
▪ The next question for a student of AI is: how on Earth do these components work?
▪ Representations can be placed along an axis of increasing complexity and expressive power: Atomic, Factored, and Structured.

Agent Programs Work
Atomic Representation
▪ In an Atomic Representation, each state of the world is indivisible: it has no internal structure.
▪ Consider the task of finding a driving route from one end of a country to the other via some sequence of cities.
▪ To solve this problem, one can reduce the state of the world to just the name of the city we are in—a single atom of knowledge.
▪ The state is a “black box” whose only discernible property is that of being identical to or different from another black box.

Agent Programs Work
Factored Representation
▪ A Factored Representation splits up each state into a fixed set of Variables or Attributes, each of which can have a Value.
▪ Considering the same driving problem, the concern goes beyond just the atomic location in one city or another, to include:
▪ How much gas is in the tank
▪ Our current GPS coordinates
▪ Whether or not the oil warning light is working
▪ How much money we have for tolls
▪ What station is on the radio, etc.
▪ Two different atomic states have nothing in common: they are just different black boxes.
▪ Two different factored states can share some attributes (such as being at some particular GPS location) and differ on others (such as having lots of gas or having no gas), making it much easier to work out how to turn one state into another.

Agent Programs Work
Structured Representation
▪ The world has things in it that are related to each other, beyond variables with values.
▪ E.g. a large truck reversing into the driveway of a dairy farm while a loose cow blocks the truck's path.
▪ A factored representation is unlikely to be pre-equipped with the attribute TruckAheadBackingIntoDairyFarmDrivewayBlockedByLooseCow with value true or false.
▪ Instead, a Structured Representation explicitly describes the objects (Cows and Trucks) and their various and varying relationships, as sketched below.
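A hedged sketch contrasting the three representations for the same driving situation (all the data structures here are invented for illustration):

```python
# Atomic: the state is an indivisible name, a single "black box".
atomic_state = 'Lagos'

# Factored: a fixed set of attribute -> value pairs.
factored_state = {
    'city': 'Lagos',
    'gas_liters': 23.5,
    'oil_warning_light': False,
    'toll_money': 14.00,
}

# Structured: explicit objects and the relationships between them.
structured_state = {
    'objects': ['truck1', 'cow1', 'driveway1'],
    'relations': [
        ('backing_into', 'truck1', 'driveway1'),
        ('blocks', 'cow1', 'truck1'),
    ],
}
```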


Summary
▪ This lecture explored the science of agent design. The major points to recall are as follows:
▪ An Agent is something that perceives and acts in an environment.
▪ The Agent Function for an agent specifies the action taken by the agent in response to any percept sequence.
▪ The Performance Measure evaluates the behavior of the agent in an environment.
▪ A rational agent acts so as to maximize the Expected Value of the performance measure, given the percept sequence it has seen so far.

Summary
▪A Task Environment specification includes Performance
measure, External environment, Actuators, and Sensors.
▪In designing an agent, the first step must always be to specify the
task environment as fully as possible.
▪Task environments vary along several significant
dimensions.
▪They can be Fully or Partially Observable, Single-agent or Multiagent,
Deterministic or Nondeterministic, Episodic or Sequential, Static or
Dynamic, Discrete or Continuous, and known or unknown.

Summary
▪ The Agent Program implements the agent function.
▪ There exists a variety of basic agent program designs, reflecting the kind of information made explicit and used in the decision process.
▪ The designs vary in Efficiency, Compactness, and Flexibility.
▪ The nature of the environment determines the appropriate agent program design.
▪ Simple Reflex Agents respond directly to percepts.
▪ Model-Based Reflex Agents maintain internal state to track aspects of the world that are not evident in the current percept.
▪ Goal-Based Agents act to achieve their goals.
▪ Utility-Based Agents try to maximize their own expected “happiness.”
▪ All agents can improve their performance through Learning.
