Artificial Intelligence Lecture Slide-09


About This Presentation

Artificial Intelligence


Slide Content

Artificial Intelligence
Lecture #09

Contents
The Structure of Agents
Agent types:
• Simple reflex
• Model-based reflex
• Goal-based
• Utility-based
• Learning agents
• Knowledge-based

Structure of Agents
Goals:
Given a PEAS task environment,
construct the agent function f, and
design an agent program that implements f on a particular architecture.
Agent = architecture + program.
Agent architecture:
A computing device with physical sensors and actuators.
It takes the percepts from the sensors and makes them available to the program,
runs the program, and
feeds the program's action choices to the actuators.
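As a rough, illustrative sketch (not from the slides), the architecture/program split can be pictured as a run loop that owns the sensors and actuators and repeatedly calls the agent program; all class and function names below are made up.

# Illustrative sketch: the architecture runs the loop, owns sensors/actuators,
# and feeds percepts to the agent program; the program maps percept -> action.
# All names here are hypothetical.

class Architecture:
    def __init__(self, sensors, actuators):
        self.sensors = sensors
        self.actuators = actuators

    def run(self, agent_program, steps=3):
        for _ in range(steps):
            percept = self.sensors()             # make the percept available
            action = agent_program(percept)      # run the program
            self.actuators(action)               # feed the action choice out

def agent_program(percept):
    # trivial placeholder program: derive an action from the percept
    return f"act_on({percept})"

Architecture(sensors=lambda: "percept", actuators=print).run(agent_program)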

Agent Types
Six basic agent types:
• Simple reflex
• Model-based reflex
• Goal-based
• Utility-based
• Learning agents
• Knowledge-based

Simple Reflex Agent
Only uses the current percept to select an action.
Works only in fully observable environments.
Sample agent program:
function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state  ← INTERPRET-INPUT(percept)
  rule   ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

Simple Reflex Agent Example
function REFLEX_VACUUM_AGENT(percept) returns an action
  (location, status) ← UPDATE_STATE(percept)
  if status = DIRTY then return SUCK
  else if location = A then return RIGHT
  else if location = B then return LEFT
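For readers who prefer runnable code, here is a minimal Python sketch of the same vacuum agent; the percept format (location, status) follows the pseudocode above, everything else is illustrative.

# Simple reflex vacuum agent: decisions depend only on the current percept.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "DIRTY":
        return "SUCK"
    elif location == "A":
        return "RIGHT"
    elif location == "B":
        return "LEFT"

print(reflex_vacuum_agent(("A", "DIRTY")))   # -> SUCK
print(reflex_vacuum_agent(("A", "CLEAN")))   # -> RIGHT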

Simple Reflex Agent
Disadvantages:
Applicable only where limited intelligence is required.
Even a little unobservability can cause serious trouble.
Example - vacuum cleaner: if there is no location sensor, the agent cannot choose between moving LEFT and RIGHT.

Model-based Reflex Agents
Deal with partially observable environments.
An internal state maintains important information from previous percepts.
Sensors only provide a partial picture of the environment.
The internal state reflects the agent's knowledge about the world; this knowledge is called a model.

Model-based Reflex Agents
function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state  ← UPDATE-STATE(state, percept)
  rule   ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
To update the internal state information, the agent must know:
How the world evolves independently of the agent.
How the agent's own actions affect the world.
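A rough Python sketch of the same idea (illustrative only): the internal state persists across calls and is updated from the previous state and the new percept before a rule is matched. The vacuum-world model and rules below are invented for the example.

# Illustrative model-based reflex agent with a persistent internal state.
state = {"A": "UNKNOWN", "B": "UNKNOWN", "location": None}   # internal model

def update_state(state, percept):
    location, status = percept
    state["location"] = location
    state[location] = status          # remember what was seen at this square
    return state

def rule_match(state):
    here = state["location"]
    if state[here] == "DIRTY":
        return "SUCK"
    # use the model: head toward the square not known to be clean
    return "RIGHT" if here == "A" else "LEFT"

def reflex_agent_with_state(percept):
    global state
    state = update_state(state, percept)
    return rule_match(state)

print(reflex_agent_with_state(("A", "DIRTY")))   # -> SUCK
print(reflex_agent_with_state(("A", "CLEAN")))   # -> RIGHT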


Model-Based vs. Simple Reflex
Example - a taxi driver changing lanes:
Model-based:
• Percept - no car visible.
• Internal state - keeps track of where the other cars are.
• Update state - an overtaking car will now be closer behind, so the agent can decide whether to turn the steering wheel clockwise or anticlockwise.
Simple reflex:
• Percept - no car visible.
• Action - just change position.

Model-based Reflex Agents
Advantages:
• Works even when sensors do not provide access to the complete state of the world.
• The internal state helps the agent distinguish between world states.
Disadvantages:
• More complex than a simple reflex agent.
• Computation time increases.

Goal-based Agents
Use the current state and the goal state to decide the correct actions.
Consider the future consequences of actions when making the current decision.
More flexible; naturally support searching and planning.

Goal-based Agents
We have to choose actions to achieve a goal, which is a description of desirable situations, e.g. where the taxi wants to go.
Keeping track of the current state is often not enough; we need to add goals to decide which situations are good.
May have to consider long sequences of possible actions before deciding whether the goal is achieved; this involves consideration of the future, "what will happen if I do...?" (search and planning).
More flexible than a reflex agent (e.g. rain / a new destination); in the reflex agent, the entire database of rules would have to be rewritten.
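As an illustrative sketch (not from the slides) of how a goal-based agent can pick actions by searching forward to a goal state, here is a small breadth-first search over a toy road map; the map itself is made up, reusing the place names from the knowledge-level example later in the deck.

# Goal-based action selection as search: plan a route to the goal location.
from collections import deque

ROADS = {                       # hypothetical city map: location -> neighbours
    "Mogbazar": ["Mouchak"],
    "Mouchak": ["Mogbazar", "Malibag"],
    "Malibag": ["Mouchak"],
}

def plan_route(start, goal):
    """Breadth-first search: returns a list of locations from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:            # goal test on the current state
            return path
        for nxt in ROADS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan_route("Mogbazar", "Malibag"))  # ['Mogbazar', 'Mouchak', 'Malibag']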


Example: Tracking a Target
(Figure: a robot pursuing a target.)
• The robot must keep the target in view.
• The target's trajectory is not known in advance.
• The robot may not know all the obstacles in advance.
• Fast decisions are required.

Goal-based Agents
Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.
For example, if it starts to rain, the agent can update its knowledge of how effectively its actions will take place; this automatically causes all of the relevant behaviours to be altered to suit the new conditions.
For the reflex agent, on the other hand, we would have to rewrite many condition-action rules.

Goal-based Agents
Disadvantage:
The goal-based agent appears less efficient because it has to consider long sequences of possible actions before deciding whether the goal is achieved.
It requires searching and planning, because it involves consideration of the future: "what will happen if I do...?"
Advantage:
It is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.

Utility-based Agents
Goals alone are not really enough to generate high-quality behaviour in most environments. Goals can be achieved in multiple ways. A goal gives only a crude distinction between a "happy" and an "unhappy" state, but we often need a more general performance measure that describes the "degree of happiness".
Utility-based agents specify:
How well the goal can be achieved (degree of happiness): a utility function U: State → Real gives a measure of success or happiness in a given state.
Which goal should be selected if several can be achieved?
What to do if there are conflicting goals? (e.g. speed and safety)
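A minimal sketch (not from the slides) of utility-based action selection: choose the action with the highest expected utility. The transition probabilities, the state encoding, and the utility weights below are invented for illustration.

# Utility-based action selection: maximise expected utility over outcomes.
def utility(state):
    # U: State -> Real, e.g. trading off speed against safety.
    speed, safety = state
    return 0.4 * speed + 0.6 * safety

# Hypothetical outcomes of each action: list of (probability, resulting state).
OUTCOMES = {
    "drive_fast":     [(0.8, (1.0, 0.3)), (0.2, (1.0, 0.0))],
    "drive_normally": [(1.0, (0.6, 0.9))],
}

def expected_utility(action):
    return sum(p * utility(s) for p, s in OUTCOMES[action])

best = max(OUTCOMES, key=expected_utility)
print(best, expected_utility(best))   # -> drive_normally 0.78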


Utility-based Agents
A complete specification of the utility function allows rational decisions in two kinds of cases:
First: when there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate trade-off.
Second: when there are several goals that the agent can aim for, none of which can be achieved with certainty, the utility provides a way in which the likelihood of success can be weighed against the importance of the goals.

Utility-based Agents
Advantage:
Utility-based agents can handle the uncertainty inherent in partially observable environments.
Consider the taxi-driver example:
There are many action sequences that will get the taxi to its destination, but some are quicker, safer, more reliable or cheaper than others.
Goals only provide a distinction between whether the passenger is "happy" or "unhappy"; the utility function defines the degree of happiness.

Learning Agents
The idea behind learning is that percepts should be used not only for acting, but also for improving the agent's ability to act in the future.
Learning takes place as a result of the interaction between the agent and the world, and from observation by the agent of its own decision-making processes.

Learning Agents
A learning agent can be divided into four conceptual components:
Learning Element,
Performance Element,
Critic, and
Problem Generator.

Learning Agents
The Learning Element is responsible for making improvements. It takes some knowledge and some feedback on how the agent is doing, and determines how the Performance Element should be modified to do better in the future. It is also responsible for improving the efficiency of the Performance Element.
The Performance Element is responsible for selecting external actions. It is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. The design of the Learning Element depends very much on the design of the Performance Element.

Learning Agents
The Critic is designed to tell the Learning Element how well the agent is doing. It employs a fixed standard of performance. This is necessary because the percepts themselves provide no indication of the agent's success.
For example, a chess program may receive a percept indicating that it has checkmated its opponent, but it needs a performance standard to know that this is a good thing; the percept itself does not say so.
The Problem Generator is responsible for suggesting actions that will lead to new and informative experiences. The point is that if the Performance Element had its way, it would keep doing the actions that are best given what it knows. But if the agent is willing to explore a little, and do some (perhaps) sub-optimal actions in the short run, it might discover much better actions for the long run. The Problem Generator's job is to suggest these exploratory actions.
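The four components can be wired together roughly as in the sketch below. This is illustrative only (not from the slides); every class, rule, and action name here is made up.

# Illustrative wiring of the four components of a learning agent.
import random

class PerformanceElement:
    """Selects external actions from percepts using its current rules."""
    def __init__(self):
        self.rules = {}                       # state -> action

    def act(self, percept):
        return self.rules.get(percept, "default_action")

class Critic:
    """Scores behaviour against a fixed performance standard."""
    def evaluate(self, percept, action):
        return 1.0 if action != "default_action" else -1.0   # toy standard

class LearningElement:
    """Uses the critic's feedback to modify the performance element."""
    def improve(self, performance, percept, action, feedback):
        if feedback < 0:
            performance.rules[percept] = "try_something_else"

class ProblemGenerator:
    """Occasionally suggests exploratory (possibly sub-optimal) actions."""
    def suggest(self):
        return random.choice([None, "exploratory_action"])

# One step of the agent loop.
perf, critic = PerformanceElement(), Critic()
learner, probgen = LearningElement(), ProblemGenerator()
percept = "wet_road"
action = probgen.suggest() or perf.act(percept)
learner.improve(perf, percept, action, critic.evaluate(percept, action))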

Learning Agents: The Taxi-Driver Example
The Performance Element consists of whatever collection of knowledge and procedures the taxi has for selecting its driving actions (turning, accelerating, braking, and so on). The taxi goes out on the road and drives, using this performance element.
The Learning Element formulates goals, for example: to learn better rules describing the effects of braking and accelerating, to learn the geography of the area, to learn how the taxi behaves on wet roads, and to learn what causes annoyance to other drivers.
The Critic observes the world and passes information along to the Learning Element. For example, after the taxi makes a quick left turn across three lanes of traffic, the Critic observes the "shocking" language used by other drivers, the Learning Element formulates a rule saying this was a bad action, and the Performance Element is modified by installing the new rule.
The Problem Generator kicks in with a new suggestion: try taking 7th Avenue uptown this time, and see if it is faster than the normal route.

Knowledge-based Agents
The knowledge-based approach is a particularly powerful way of constructing an agent program.
It aims to implement a view of agents in which they can be seen as knowing about their world and reasoning about their possible courses of action.
Knowledge-based agents are able to accept new tasks in the form of explicitly described goals; they can achieve competence quickly by being told, or by learning, new knowledge about the environment; and they can adapt to changes in the environment by updating the relevant knowledge.

Knowledge-based Agents
A knowledge-based agent needs to know many things:
The current state of the world,
How to infer unseen properties of the world from percepts,
How the world evolves over time,
What it wants to achieve, and
What its own actions do in various circumstances.
The central component of a knowledge-based agent is its Knowledge Base, or KB.

Knowledge-based Agents
A knowledge base is a set of representations of facts about the world. Each individual representation is called a sentence.
The sentences are expressed in a knowledge representation language (like English and other natural languages, but not identical to them).
The agent operates as follows:
It TELLs the knowledge base what it perceives.
It ASKs the knowledge base what action it should perform.
It performs the chosen action.
The agent's initial program, before it starts to receive percepts, is built up by adding one by one the sentences that represent the designer's knowledge of the environment.
By hooking up a learning mechanism to a knowledge-based agent, one can make the agent fully autonomous.
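A minimal sketch (not from the slides) of the TELL/ASK agent loop. The knowledge base here is just a set of sentence strings and ask() does a trivial lookup; a real KB would use inference.

# Toy knowledge-based agent loop: TELL percepts, ASK for an action.
class KnowledgeBase:
    def __init__(self, initial_sentences=()):
        self.sentences = set(initial_sentences)   # designer's knowledge

    def tell(self, sentence):
        self.sentences.add(sentence)

    def ask(self, query):
        # Toy "inference": return a stored sentence matching the query prefix.
        return next((s for s in self.sentences if s.startswith(query)), None)

kb = KnowledgeBase(["Links(Mouchak, Malibag, Mogbazar)"])

def kb_agent(percept, t):
    kb.tell(f"Percept({percept}, {t})")            # TELL what it perceives
    action = kb.ask("Links") or "wait"             # ASK what to do (toy query)
    kb.tell(f"Action({action}, {t})")              # record the chosen action
    return action

print(kb_agent("at Mogbazar", 0))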

Knowledge-based Agents
A knowledge-based agent can be described at three levels:
Knowledge Level
The most abstract level; we can describe the agent by saying what it knows.
Example: a taxi agent might know that, to reach Malibag from Mogbazar, it needs to go through Mouchak.
Logical Level
The level at which the knowledge is encoded into sentences.
Example: Links(Mouchak, Malibag, Mogbazar)
Implementation Level
The level that runs on the agent architecture.
The physical representations of the sentences at the logical level are contained in this level.
Example: "Links(Mouchak, Malibag, Mogbazar)" at the logical level can be represented as the string "Links(Mouchak, Malibag, Mogbazar)" in the KB.
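As a small illustration of the logical vs. implementation levels (not from the slides), the same logical sentence can be stored in different physical forms, for example as a raw string or as a parsed tuple; the parsing helper below is hypothetical.

# The same logical-level sentence under two implementation-level choices.
import re

sentence = "Links(Mouchak, Malibag, Mogbazar)"        # logical level

# Implementation level, option 1: store the sentence as a plain string.
kb_as_strings = {sentence}

# Implementation level, option 2: parse it into a predicate/arguments tuple.
def parse(s):
    m = re.match(r"(\w+)\((.*)\)", s)
    return (m.group(1), *[arg.strip() for arg in m.group(2).split(",")])

kb_as_tuples = {parse(sentence)}
print(kb_as_tuples)   # {('Links', 'Mouchak', 'Malibag', 'Mogbazar')}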

How is an Agent Different from Other Software?
Agents are autonomous, that is, they act on behalf of the user.
Agents contain some level of intelligence, from fixed rules to learning engines that allow them to adapt to changes in the environment.
Agents don't only act reactively, but sometimes also proactively.

How is an Agent Different from Other Software?
Agents have social ability, that is, they communicate with the user, the system, and other agents as required.
Agents may also cooperate with other agents to carry out more complex tasks than they themselves can handle.
Agents may migrate from one system to another to access remote resources or even to meet other agents.

Recommended Textbooks
[Negnevitsky, 2001] M. Negnevitsky, "Artificial Intelligence: A Guide to Intelligent Systems", Pearson Education Limited, England, 2002.
[Russell, 2003] S. Russell and P. Norvig, "Artificial Intelligence: A Modern Approach", Prentice Hall, 2003, Second Edition.
[Patterson, 1990] D. W. Patterson, "Introduction to Artificial Intelligence and Expert Systems", Prentice-Hall Inc., Englewood Cliffs, N.J., USA, 1990.
[Minsky, 1974] M. Minsky, "A Framework for Representing Knowledge", MIT-AI Laboratory Memo 306, 1974.
[Hubel, 1995] David H. Hubel, "Eye, Brain, and Vision".
[Ballard, 1982] D. H. Ballard and C. M. Brown, "Computer Vision", Prentice Hall, 1982.

References
Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig
http://en.wikipedia.org/wiki/Intelligent_agent
http://aima.eecs.berkeley.edu/slides-ppt/m2-agents.ppt#3
http://www.cs.cmu.edu/~sandholm/cs15-381/Agents.ppt#2

End of Presentation
Thanks to all !!!