13-Real-World-Planning IN THE ARTIFICIAL INTELLIGENCE-Oct-22.pdf

prathapbadam, 36 slides, Jun 24, 2024

PLANNING/ACTING IN REAL WORLD
Chapter 11

Topics
•The real world
•Time, schedules, resources
•Hierarchical planning
•Planning in nondeterministic domains
•Multi-agent planning

The real world
[Figure: partial-order plan for the flat-tire problem. START: On(Tire1), Flat(Tire1), ~Flat(Spare), Intact(Spare), Off(Spare). FINISH requires: On(x), ~Flat(x). Actions: Remove(x) (precond On(x); effects Off(x), ClearHub); Puton(x) (preconds Off(x), ClearHub; effects On(x), ~ClearHub); Inflate(x) (preconds Intact(x), Flat(x); effect ~Flat(x)).]
Chapter 12.3 - 12.5

Things go wrong
Incomplete information
  Unknown preconditions, e.g., Intact(Spare)?
  Disjunctive effects, e.g., Inflate(x) causes
  Inflated(x) ∨ SlowHiss(x) ∨ Burst(x) ∨ BrokenPump ∨ ...
Incorrect information
  Current state incorrect, e.g., spare NOT intact
  Missing/incorrect postconditions in operators
Qualification problem:
  can never finish listing all the required preconditions and
  possible conditional outcomes of actions

Time, Schedule, Resources

Hierarchical Planning – High Level Actions
Reachable states
Goal achievement

Planning and Acting with Nondeterminism
•Conformant planning (w/o observations)
•Contingency planning (for partially
observable/nondeterministic environments)
•Online planning/replanning (for unknown
environments)

Indeterminacy in the World
Bounded indeterminacy: actions can have unpredictable effects, but the
possible effects can be listed in the action description axioms
Unbounded indeterminacy: the set of possible preconditions or effects either is
unknown or is too large to be completely enumerated
Closely related to the qualification problem

Solutions
Conformant or sensorless planning
  Devise a plan that works regardless of state or outcome
  Such plans may not exist
Conditional planning
  Plan to obtain information (observation actions)
  Subplan for each contingency, e.g.,
  [Check(Tire1), if Intact(Tire1) then Inflate(Tire1) else CallAAA]
  Expensive because it plans for many unlikely cases
Monitoring/Replanning
  Assume normal states, outcomes
  Check progress during execution, replan if necessary
  Unanticipated outcomes may lead to failure (e.g., no AAA card)
(Really need a combination: plan for likely/serious eventualities,
deal with others when they arise, as they must eventually)

Conformant planning
Search in space of belief states (sets of possible actual states)
[Figure: belief-state search graph for the sensorless vacuum world; edges are the actions Left (L), Right (R), and Suck (S), applied to whole belief states.]
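To make belief-state search concrete, here is a minimal sketch for the sensorless vacuum world (two squares, deterministic actions for simplicity). The function names `result` and `update` are illustrative, not from the slides; the point is that a conformant plan coerces an initially unknown belief state into a single known goal state.

```python
from itertools import product

# Physical states: (loc, dirt_left, dirt_right), with loc in {"L", "R"}.
def result(state, action):
    loc, dl, dr = state
    if action == "Left":
        return ("L", dl, dr)
    if action == "Right":
        return ("R", dl, dr)
    # Suck: clean the current square (deterministic in this sketch)
    return (loc, False if loc == "L" else dl, False if loc == "R" else dr)

def update(belief, action):
    """Belief-state successor: apply the action to every state we might be in."""
    return frozenset(result(s, action) for s in belief)

# Sensorless start: any of the 8 physical states is possible.
belief = frozenset(product("LR", [True, False], [True, False]))
for a in ["Right", "Suck", "Left", "Suck"]:
    belief = update(belief, a)

# The conformant plan works regardless of the initial state:
# the belief state collapses to the single clean state at L.
assert belief == frozenset({("L", False, False)})
```

Note that the plan never observes anything; it succeeds by choosing actions whose effects are the same across every state in the belief set.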

Conditional planning
If the world is nondeterministic or partially observable,
then percepts usually provide information,
i.e., split up the belief state
[Figure: an ACTION branches on its possible outcomes; a PERCEPT splits the belief state.]

Conditional planning (con’t.)
Conditional plans check (any consequence of KB +) percept
[..., if C then PlanA else PlanB, ...]
Execution: check C against current KB, execute “then” or “else”
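The execution rule above can be sketched as a tiny interpreter. This is an illustrative encoding (a plan is a list of action names and `("if", C, planA, planB)` tuples; the KB is a set of ground facts), not a fixed syntax from the chapter:

```python
def execute(plan, kb, do_action):
    """Walk a conditional plan [..., ('if', C, planA, planB), ...]:
    check condition C against the current KB, then run the chosen branch."""
    for step in plan:
        if isinstance(step, tuple) and step[0] == "if":
            _, cond, plan_a, plan_b = step
            execute(plan_a if cond in kb else plan_b, kb, do_action)
        else:
            do_action(step)

# The tire plan from the earlier slide, in this encoding:
log = []
kb = {"Intact(Tire1)"}
plan = ["Check(Tire1)",
        ("if", "Intact(Tire1)", ["Inflate(Tire1)"], ["CallAAA"])]
execute(plan, kb, log.append)
print(log)  # ['Check(Tire1)', 'Inflate(Tire1)']
```

In a real agent, `Check(Tire1)` would be an observation action that updates the KB before the condition is tested; here the KB is fixed to keep the sketch short.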

Conditional planning (con’t.)
Need to handle nondeterminism by building into the plan conditional steps
that check the state of the environment at run time, and then decide what
to do.
Augment STRIPS to allow for nondeterminism:
Add disjunctive effects (e.g., to model when an action sometimes fails):
  Action(Left, Precond: AtR, Effect: AtL ∨ AtR)
Add conditional effects (i.e., depends on the state in which it’s executed):
  Form: when <condition>: <effect>
  Action(Suck, Precond: ,
    Effect: (when AtL: CleanL) ∧ (when AtR: CleanR))
Create conditional steps:
  if <test> then plan-A else plan-B
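Both extensions are easy to model with states as sets of fluents. A minimal sketch (the function names are illustrative): a conditional effect checks the execution-time state, while a disjunctive effect makes the successor a *set* of possible states.

```python
def apply_suck(state):
    """Conditional effects: (when AtL: CleanL) ∧ (when AtR: CleanR),
    each 'when' clause evaluated in the state where Suck executes."""
    effects = set()
    if "AtL" in state:
        effects.add("CleanL")
    if "AtR" in state:
        effects.add("CleanR")
    return state | effects

def apply_left(state):
    """Disjunctive effect AtL ∨ AtR: Left sometimes fails, so the successor
    is a SET of possible states rather than a single state."""
    moved = (state - {"AtR"}) | {"AtL"}
    return {frozenset(moved), frozenset(state)}

assert apply_suck({"AtR"}) == {"AtR", "CleanR"}
assert apply_left(frozenset({"AtR"})) == {frozenset({"AtL"}), frozenset({"AtR"})}
```

The set-valued successor of `apply_left` is exactly what forces the planner to introduce conditional steps: it must cover every member of that set.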

Conditional planning (con’t.)
Need some plan for every possible percept and action outcome
(Cf. game playing: some response for every opponent move)
(Cf. backward chaining: some rule such that every premise is satisfied)
Use: AND–OR tree search (very similar to the backward chaining algorithm)
Similar to the game tree in minimax search
Differences: Max and Min nodes become Or and And nodes
  Robot takes action in “state” nodes.
  Nature decides outcome at “chance” nodes.
Plan needs to take some action at every state it reaches (i.e., Or nodes)
Plan must handle every outcome for the action it takes (i.e., And nodes)
Solution is a subtree with (1) a goal node at every leaf, (2) one action specified
at each state node, and (3) every outcome branch included at chance nodes.
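The AND–OR search described above can be written in a few lines. The sketch below (names are illustrative) tests it on the "erratic" vacuum world, a standard double-Murphy-style variant: Suck on a dirty square cleans it (and sometimes its neighbour too), while Suck on a clean square sometimes deposits dirt. Cycles are treated as failures on that branch, which corresponds to the LOOP leaves in the tree on the next slide.

```python
def and_or_search(initial, actions, results, goal_test):
    """Returns a nested conditional plan [action, {outcome: subplan, ...}]
    or None. OR nodes pick one action; AND nodes must cover every outcome."""
    def or_search(state, path):
        if goal_test(state):
            return []                       # goal reached: empty plan
        if state in path:
            return None                     # cycle: fail on this branch
        for action in actions(state):
            subplans = and_search(results(state, action), path + [state])
            if subplans is not None:
                return [action, subplans]
        return None
    def and_search(states, path):
        branch = {}
        for s in states:                    # must handle EVERY outcome
            plan = or_search(s, path)
            if plan is None:
                return None
            branch[s] = plan
        return branch
    return or_search(initial, [])

# Erratic vacuum world; states are (loc, dirt_left, dirt_right).
def vac_results(state, action):
    loc, dl, dr = state
    if action == "Left":
        return {("L", dl, dr)}
    if action == "Right":
        return {("R", dl, dr)}
    if loc == "L":     # Suck
        return {("L", False, dr), ("L", False, False)} if dl \
          else {("L", False, dr), ("L", True, dr)}
    return {("R", dl, False), ("R", False, False)} if dr \
      else {("R", dl, False), ("R", dl, True)}

plan = and_or_search(("L", True, True),
                     lambda s: ["Suck", "Right", "Left"],
                     vac_results,
                     lambda s: not s[1] and not s[2])
assert plan is not None and plan[0] == "Suck"
```

The returned plan has the shape of the book's conditional solution: Suck, then branch on whether the erratic Suck also cleaned the right square.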

Example: “Game Tree”, Fully Observable World
Double Murphy: sucking or arriving may dirty a clean square
[Figure: AND–OR tree over numbered vacuum-world states, alternating agent actions (Left, Right, Suck) with nature's outcomes; leaves are GOAL or LOOP nodes.]
Plan: [Left, if AtL ∧ CleanL ∧ CleanR then [] else Suck]

Example
Triple Murphy: also sometimes stays put instead of moving
[Figure: AND–OR tree in which the failed-move outcome loops back to the Left action.]
[L1: Left, if AtR then L1 else [if CleanL then [] else Suck]]
or [while AtR do [Left], if CleanL then [] else Suck]
“Infinite loop”, but will eventually work unless the action always fails
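The while-loop plan above can be simulated directly. This sketch (names and the 0.5 failure probability are assumptions for illustration) models only the triple-Murphy movement failure, with a deterministic Suck, to show that retrying Left eventually succeeds:

```python
import random

def triple_murphy_left(state, rng):
    """Left sometimes fails: the agent stays put instead of moving."""
    loc, dl, dr = state
    return ("L", dl, dr) if rng.random() < 0.5 else state

# Plan: [while AtR do [Left], if CleanL then [] else Suck]
rng = random.Random(0)
state = ("R", True, False)      # at R, left square dirty
attempts = 0
while state[0] == "R":          # retry Left until it actually succeeds
    state = triple_murphy_left(state, rng)
    attempts += 1
if state[1]:                    # CleanL does not hold, so Suck
    state = ("L", False, state[2])
assert state == ("L", False, False)
print(f"reached goal after {attempts} Left attempt(s)")
```

The loop terminates with probability 1 unless the action always fails, which is exactly the caveat on the slide.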

Execution Monitoring
“Failure” = preconditions of remaining plan not met
Preconditions of remaining plan
  = all preconditions of remaining steps not achieved by remaining steps
  = all causal links crossing the current time point
On failure, resume POP to achieve open conditions from the current state
IPEM (Integrated Planning, Execution, and Monitoring):
  keep updating Start to match the current state;
  links from actions are replaced by links from Start when done
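The monitor-then-replan cycle can be sketched as a small loop. This is a simplified stand-in for IPEM, with hypothetical names: preconditions are checked against the observed state before each remaining step, and a (here trivial) replanner is invoked on failure. The painting domain below anticipates the emergent-behavior example that follows.

```python
def monitor_and_execute(plan, state, preconds, apply_action, replan):
    """Before each step, check that its preconditions still hold in the
    current (observed) state; if not, replan from here and continue."""
    while plan:
        step = plan[0]
        if not preconds(step) <= state:    # precondition violated: "failure"
            plan = replan(state)           # resume planning from current state
            continue
        state = apply_action(state, step)
        plan = plan[1:]
    return state

# Toy painting domain (illustrative fluents and actions):
PRE = {"Get(Red)": set(), "Paint(Chair)": {"Have(Red)"}}
EFF = {"Get(Red)": {"Have(Red)"}, "Paint(Chair)": {"Color(Chair,Red)"}}

def apply_action(state, a):
    return state | EFF[a]

def replan(state):
    # Trivial replanner: fetch more red paint, then paint again.
    return ["Get(Red)", "Paint(Chair)"]

state = set()                              # unexpectedly, we don't Have(Red)
final = monitor_and_execute(["Paint(Chair)"], state,
                            lambda a: PRE[a], apply_action, replan)
assert "Color(Chair,Red)" in final
```

The monitor never reasons about *why* the precondition failed; it simply plans again from wherever it finds itself, which is what produces the emergent retry behavior shown later.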

Example
[Figures: six snapshots of the partial-order shopping plan (Start, Go(HWS), Buy(Drill), Go(SM), Buy(Milk), Buy(Ban.), Go(Home), Finish) as IPEM executes it. At each step, causal links from completed actions (Sells(HWS,Drill), At(HWS), Sells(SM,Milk), Sells(SM,Ban.), At(SM), ...) are replaced by links from Start, until only Finish's preconditions Have(Drill), Have(Milk), Have(Ban.), At(Home) remain.]

Emergent behavior
[Figure: plan START (Color(Chair,Blue), ~Have(Red)) → Get(Red) → Paint(Red) (precondition Have(Red)) → FINISH (Color(Chair,Red)).]
Failure in preconditions: Have(Red). Response: fetch more red.

Emergent behavior
[Figure: same painting plan as above.]
Failure in preconditions: Color(Chair,Red). Response: extra coat of paint.

Emergent behavior
[Figure: same painting plan as above.]
Failure in preconditions: Color(Chair,Red). Response: extra coat of paint.
“Loop until success” behavior emerges from the interaction between the
monitor/replan agent design and an uncooperative environment

Summarizing Example
Assume: You have a chair, a table, and some cans of paint; all colors are
unknown. Goal: chair and table have the same color.
How would each of the following handle this problem?
Classical planning: Can’t handle it, because the initial state isn’t fully specified.
Sensorless/Conformant planning: Open a can of paint and apply it to both
chair and table.
Conditional planning: Sense the color of the table and chair. If the same, then
we’re done. If not, sense the labels on the paint cans; if there is a can that is
the same color as one piece of furniture, then apply that paint to the other
piece. Otherwise, paint both pieces with any one color.
Monitoring/replanning: Similar to the conditional planner, but perhaps with
fewer branches at first, which are filled in as needed at runtime. Also, it would
check for unexpected outcomes (e.g., missed a spot in painting, so repaint).

Summary
♦Incomplete info: use conditional plans, conformant planning (can use
belief states)
♦Incorrect info: use execution monitoring and replanning

Multiagent Planning
•Multieffector planning
•Multibody planning
•Decentralized planning
•Coordination, cooperation
•Multiactor settings
•Joint actions/joint plans