Lecture 4 - Introduction Agent Programming.pptx


About This Presentation

Intelligent agents


Slide Content

Introduction to Agent Programming. Koen Hindriks, Delft University of Technology, The Netherlands.

Agents: Act in environments. An agent receives percepts from its environment and chooses an action to perform in it. (Diagram: agent-environment loop labelled with percepts and actions.)

Agents: Act to achieve goals. Percepts give rise to events; the agent selects actions to achieve its goals. (Diagram: agent with events, actions and goals, connected to the environment via percepts and actions.)

Agents: Represent environment. The agent additionally maintains beliefs and plans about its environment. (Diagram: agent with beliefs, goals, events, plans and actions, connected to the environment via percepts and actions.)

Agent-Oriented Programming. Agents provide a very effective way of building applications for dynamic and complex environments. Develop agents based on the Belief-Desire-Intention (BDI) agent metaphor, i.e. develop software components as if they have beliefs and goals, act to achieve these goals, and are able to interact with their environment and other agents.

A Brief History of AOP
1990: AGENT-0 (Shoham)
1993: PLACA (Thomas; AGENT-0 extension with plans)
1996: AgentSpeak(L) (Rao; inspired by PRS)
1996: Golog (Reiter, Levesque, Lesperance)
1997: 3APL (Hindriks et al.)
1998: ConGolog (De Giacomo, Levesque, Lesperance)
2000: JACK (Busetta, Howden, Ronnquist, Hodgson)
2000: GOAL (Hindriks et al.)
2000: CLAIM (Amal El Fallah Seghrouchni)
2002: Jason (Bordini, Hubner; implementation of AgentSpeak)
2003: Jadex (Braubach, Pokahr, Lamersdorf)
2008: 2APL (successor of 3APL)
This overview is far from complete!

A Brief History of AOP: distinguishing features
AGENT-0: speech acts
PLACA: plans
AgentSpeak(L): events/intentions
Golog: action theories, logical specification
3APL: practical reasoning rules
JACK: capabilities, Java-based
GOAL: declarative goals
CLAIM: mobile agents (within agent community)
Jason: AgentSpeak + communication
Jadex: JADE + BDI
2APL: modules, PG-rules, …

Outline
Some of the more actively developed APLs:
2APL (Utrecht, Netherlands)
Agent Factory (Dublin, Ireland)
Goal (Delft, Netherlands)
Jason (Porto Alegre, Brazil)
Jadex (Hamburg, Germany)
JACK (Melbourne, Australia)
JIAC (Berlin, Germany)
References

2APL – Features
2APL is a rule-based language for programming BDI agents:
actions: belief updates, send, adopt, drop, external actions
beliefs: represent the agent's beliefs
goals: represent what the agent wants
plans: sequence, while, if-then
PG-rules: goal handling rules
PC-rules: event handling rules
PR-rules: plan repair rules

2APL – Code Snippet
Beliefs: worker(w1), worker(w2), worker(w3)
Goals: findGold() and haveGold()
Plans = { send(w3, play(explorer)); }
Rules = {
  …
  G( findGold() ) <- B( -gold(_) && worker(A) && -assigned(_, A) ) |     (goal handling rule)
    send( A, play(explorer) ); ModOwnBel( assigned(_, A) ); ,
  E( receive( A, gold(POS) ) ) | B( worker(A) ) ->                       (event handling rule)
    { ModOwnBel( gold(POS) ); },
  E( receive( A, done(POS) ) ) | B( worker(A) ) ->                       (E: explicit operator for events)
    { ModOwnBel( -assigned(POS, A), -gold(POS) ); },
  …
}
(Modules are used to combine and structure rules.)

JACK – Features
The JACK Agent Language is built on top of and extends Java, and provides the following features:
agents: used to define the overall behavior of a multi-agent system
beliefset: represents an agent's beliefs
view: allows queries to be performed on belief sets
capability: reusable functional component made up of plans, events, belief sets and other capabilities
plan: instructions the agent follows to try to achieve its goals and handle events
event: an occurrence to which the agent should respond

JACK – Agent Template agent AgentType extends Agent { // Knowledge bases used by the agent are declared here. #private data BeliefType belief_name ( arg_list ); // Events handled, posted and sent by the agent are declared here. #handles event EventType ; #posts event EventType reference ; used to create internal events #sends event EventType reference ; used to send messages to other agents // Plans used by the agent are declared here. Order is important. #uses plan PlanType ; // Capabilities that the agent has are declared here. #has capability CapabilityType reference ; // other Data Member and Method definitions }

Jason – Features
beliefs: weak and strong negation, supporting both the closed-world assumption and open-world reasoning
belief annotations: label the information source, e.g. self, percept
events: internal, messages, percepts
a library of "internal actions", e.g. send
user-defined internal actions: programmed in Java
automatic handling of plan failures
annotations on plan labels: used to select a plan
speech-act based inter-agent communication
Java-based customization: (plan) selection functions, trust functions, perception, belief revision, agent communication

Jason – Plans. A Jason plan consists of a triggering event, a test on beliefs, and a plan body. (The original slide's diagram labels these three parts.)

Summary
Key language elements of APLs:
beliefs and goals to represent the environment
events received from the environment (and internal events)
actions to update beliefs, adopt goals, send messages, and act in the environment
plans, capabilities and modules to structure actions
rules to select actions/plans/modules/capabilities
support for multi-agent systems

How are these APLs related?
A comparison from a high-level, conceptual point of view, not taking into account practical aspects (IDE, available documentation, speed, applications, etc.):
AGENT-0 and its extension PLACA: mainly interesting from a historical point of view.
A family of languages sharing the basic concepts of beliefs, actions, plans and goals-to-do: AgentSpeak(L) (also mainly of historical interest), Jason (from a conceptual point of view, AgentSpeak(L) and Jason are identified here), Golog, and 3APL (without its practical reasoning rules).
Main addition of declarative goals: 2APL ≈ 3APL + Goal.
Java-based BDI languages: Agent Factory, JACK (commercial), Jadex, JIAC.
Mobile agents: CLAIM, AgentScape.
Multi-agent systems: all of these languages (except AGENT-0, PLACA and JACK) have versions implemented "on top of" JADE.

References
Websites
2APL: http://www.cs.uu.nl/2apl/
Agent Factory: http://www.agentfactory.com
Goal: http://mmi.tudelft.nl/trac/goal
JACK: http://www.agent-software.com.au/products/jack/
Jadex: http://jadex.informatik.uni-hamburg.de/
Jason: http://jason.sourceforge.net/
JIAC: http://www.jiac.de/
Books
Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2005. Multi-Agent Programming: Languages, Platforms and Applications. (Presents 3APL, CLAIM, Jadex, Jason.)
Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2009. Multi-Agent Programming: Languages, Tools and Applications. (Presents, among others: Brahms, CArtAgO, Goal, JIAC Agent Platform.)

The Goal Agent Programming Language

The Blocks World: the "Hello World" example of Agent Programming

The Blocks World
The positioning of blocks on the table is not relevant. A block can be moved only if there is no other block on top of it.
Objective: move the blocks in the initial state such that the result is the goal state. A classic AI planning problem.

The Blocks World (Cont'd)
Key concepts:
A block is in position if "it is in the right place"; otherwise it is misplaced.
A constructive move puts a block in position.
A self-deadlock is a misplaced block that is above a block it should be above.

Mental States

Representing the Blocks World
Basic predicates:
  block(X).
  on(X,Y).
Defined predicates (EXERCISE):
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  clear(X) :- block(X), not(on(Y,X)).
Prolog is the knowledge representation language used in Goal.

Representing the Initial State
Using the on(X,Y) predicate we can represent the initial state. The initial belief base of the agent:
  beliefs{
    on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
  }

Representing the Blocks World
What about the rules we defined before? Clauses that do not change are inserted into the knowledge base:
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  clear(X) :- block(X), not(on(Y,X)).
The static knowledge base of the agent:
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }

Why a Separate Knowledge Base?
Concepts defined in the knowledge base can be used in combination with both the belief base and the goal base.
Example:
Since the agent believes on(e,table), on(d,e), infer: the agent believes tower([d,e]).
If the agent wants on(a,table), on(b,a), infer: the agent wants tower([b,a]).
The knowledge base is introduced to avoid duplicating clauses in the belief and the goal base.
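
A minimal sketch of the first inference, using only clauses and facts that already appear on these slides (the comments trace the derivation):

  knowledge{
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  beliefs{
    on(d,e), on(e,table).
  }
  % Deriving tower([d,e]) from the belief base plus the knowledge base:
  %   tower([d,e]) :- on(d,e), tower([e]).   on(d,e) is believed.
  %   tower([e])   :- on(e,table).           on(e,table) is believed.
  % The same clauses applied to the goal base yield tower([b,a]) from
  % on(a,table), on(b,a).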

Representing the Goal State
Using the on(X,Y) predicate we can represent the goal state. The initial goal base of the agent:
  goals{
    on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
  }

One or Many Goals
In the goal base, using the comma or the period as separator makes a difference!
  goals{ on(a,table), on(b,a), on(c,b). }     (a single conjunctive goal)
  goals{ on(a,table). on(b,a). on(c,b). }     (three separate goals)
Moving c on top of b, then c to the table, then a to the table, and finally b on top of a achieves each of the three separate goals at some point, but never the single conjunctive goal. The reason is that the goal base with three separate goals does not require block c to be on b, b to be on a, and a to be on the table at the same time.

Mental State of a Goal Agent
The knowledge, beliefs, and goals sections together constitute the specification of the mental state of a Goal agent. The initial mental state of the agent:
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  beliefs{
    on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
  }
  goals{
    on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
  }

Inspecting the Belief & Goal Base
Operator bel(φ) inspects the belief base; operator goal(φ) inspects the goal base. Here φ is a Prolog conjunction of literals.
Examples:
  bel(clear(a), not(on(a,c))).
  goal(tower([a,b])).

Inspecting the Belief Base
bel(φ) succeeds if φ follows from the belief base in combination with the knowledge base. The condition φ is evaluated as a Prolog query.
Example: bel(clear(a), not(on(a,c))) succeeds.
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  beliefs{
    on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
  }
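
A worked reading of the example above, evaluated as a Prolog query against the knowledge and belief base shown (a sketch; the comments trace the derivation):

  ?- clear(a), not(on(a,c)).
  %  clear(a) :- block(a), not(on(Y,a)).
  %    block(a) is in the knowledge base and no fact on(Y,a) is believed,
  %    so not(on(Y,a)) succeeds and clear(a) succeeds.
  %  on(a,c) is not believed, so not(on(a,c)) succeeds.
  %  The whole query succeeds, hence bel(clear(a), not(on(a,c))) succeeds.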

Inspecting the Belief Base
EXERCISE: Which of the following succeed?
  bel(on(b,c), not(on(a,c))).
  bel(on(X,table), on(Y,X), not(clear(Y))).   [X=c; Y=b]
  bel(tower([X,b,d])).
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  beliefs{
    on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
  }

Inspecting the Goal Base
goal(φ) succeeds if φ follows from one of the goals in the goal base, in combination with the knowledge base. Use the goal(…) operator to inspect the goal base.
Example: goal(clear(a)) succeeds, but goal(clear(a), clear(c)) does not.
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  goals{
    on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
  }
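
This also explains the earlier "One or Many Goals" slide. A small sketch (reusing the example goal bases from that slide) of how the separator affects goal(…) queries:

  goals{ on(a,table), on(b,a), on(c,b). }   % a single conjunctive goal
  % goal(on(b,a), on(c,b)) succeeds: both conjuncts follow from that one goal
  % (together with the knowledge base).

  goals{ on(a,table). on(b,a). on(c,b). }   % three separate goals
  % goal(on(b,a), on(c,b)) fails: no single goal entails both conjuncts.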

Inspecting the Goal Base
EXERCISE: Which of the following succeed?
  goal(on(b,table), not(on(d,c))).
  goal(on(X,table), on(Y,X), clear(Y)).
  goal(tower([d,X])).
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  goals{
    on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
  }

Negation and Beliefs
EXERCISE: not(bel(on(a,c))) = bel(not(on(a,c)))?
Answer: Yes, because Prolog implements negation as failure: if φ cannot be derived, then not(φ) can be derived. We always have: not(bel(φ)) = bel(not(φ)).
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  beliefs{
    on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
  }
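
A quick check of the particular instance above against this belief base (a sketch of the reasoning):

  % on(a,c) is not believed and cannot be derived, so bel(on(a,c)) fails
  % and therefore not(bel(on(a,c))) succeeds.
  % By negation as failure, not(on(a,c)) is derivable from the belief base,
  % so bel(not(on(a,c))) succeeds as well: both sides agree, as claimed.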

Negation and Goals
EXERCISE: not(goal(φ)) = goal(not(φ))?
Answer: No. We have, for example, both goal(on(a,b)) and goal(not(on(a,b))).
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  goals{
    on(a,b), on(b,table).
    on(a,c), on(c,table).
  }
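
A worked check against the goal base above (a sketch; negation as failure is applied within each individual goal):

  % goal(on(a,b))      succeeds: on(a,b) follows from the first goal.
  % goal(not(on(a,b))) succeeds: on(a,b) is not derivable from the second goal,
  %                    so not(on(a,b)) holds there.
  % Since goal(on(a,b)) succeeds, not(goal(on(a,b))) fails, while
  % goal(not(on(a,b))) succeeds: the two are not equivalent.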

Combining Beliefs and Goals
It is useful to combine the bel(…) and goal(…) operators.
Achievement goals: a-goal(φ) = goal(φ), not(bel(φ)).
  An agent only has an achievement goal if it does not believe the goal has been reached already.
Goal achieved: goal-a(φ) = goal(φ), bel(φ).
  A (sub)goal φ has been achieved if the agent believes φ.
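
For instance, with the blocks-world mental state used on the previous slides (beliefs on(b,c), on(c,table), …; goal on(b,table), on(c,table), …), a sketch of how the two combined operators behave:

  % a-goal(on(b,table)) succeeds: on(b,table) is part of the goal, but the
  %                     agent believes on(b,c), so it is not yet believed.
  % goal-a(on(c,table)) succeeds: on(c,table) is part of the goal and is believed.
  % a-goal(on(c,table)) fails: that part of the goal is already believed achieved.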

Expressing BW Concepts
EXERCISE: Define "block X is misplaced".
Solution: goal(tower([X|T])), not(bel(tower([X|T]))).
But this means that saying that a block is misplaced is saying that you have an achievement goal: a-goal(tower([X|T])).
It is possible to express key Blocks World concepts by means of the basic mental state operators.

Action Specifications: Changing Blocks World Configurations

Actions Change the Environment…
Example: move(a,d). (Picture: block a is moved from b onto d.)

…and Require Updating Mental States
To ensure adequate beliefs after performing an action, the belief base needs to be updated (and possibly the goal base).
Add effects to the belief base: insert on(a,d) after move(a,d).
Delete old beliefs: delete on(a,b) after move(a,d).

…and Require Updating Mental States
If a goal is believed to be completely achieved, it is removed from the goal base: it is not rational to have a goal you believe to be achieved. The default update implements a blind commitment strategy.
Example: performing move(a,b) in the mental state on the left yields the one on the right.
  beliefs{ on(a,table), on(b,table). }      beliefs{ on(a,b), on(b,table). }
  goals{ on(a,b), on(b,table). }            goals{ }

Action Specifications
Actions in GOAL have preconditions and postconditions (a STRIPS-style specification). Executing an action in GOAL means:
Preconditions are conditions that need to be true: check the preconditions on the belief base.
Postconditions (effects) are add/delete lists (STRIPS): add the positive literals in the postcondition; delete the negative literals in the postcondition.
  move(X,Y){
    pre { clear(X), clear(Y), on(X,Z), not(on(X,Y)) }
    post { not(on(X,Z)), on(X,Y) }
  }

Action Specifications
  move(X,Y){
    pre { clear(X), clear(Y), on(X,Z), not(on(X,Y)) }
    post { not(on(X,Z)), on(X,Y) }
  }
Example: move(a,b)
  Check: clear(a), clear(b), on(a,Z), not(on(a,b))
  Remove: on(a,Z)
  Add: on(a,b)
Note: first remove, then add.

Action Specifications
  move(X,Y){
    pre { clear(X), clear(Y), on(X,Z) }
    post { not(on(X,Z)), on(X,Y) }
  }
Example: performing move(a,b) updates the belief base on the left to the one on the right.
  beliefs{ on(a,table), on(b,table). }      beliefs{ on(b,table). on(a,b). }

Action Specifications
EXERCISE:
  move(X,Y){
    pre { clear(X), clear(Y), on(X,Z) }
    post { not(on(X,Z)), on(X,Y) }
  }
Is it possible to perform move(a,b)? No: clear(b) fails, since on(a,b) is believed (with the fuller specification that includes not(on(X,Y)), that condition would fail as well).
Is it possible to perform move(a,d)? Yes.
  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  beliefs{
    on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
  }
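
A sketch of how the two precondition checks above evaluate against this belief base (the comments trace the Prolog queries):

  % move(a,b): clear(a), clear(b), on(a,Z)
  %   clear(a) succeeds (nothing is believed to be on a), but clear(b) fails
  %   because on(a,b) is believed, so the action is not enabled.
  % move(a,d): clear(a), clear(d), on(a,Z)
  %   clear(a) and clear(d) succeed and on(a,Z) binds Z=b,
  %   so the action is enabled.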

Action Rules: Selecting actions to perform

Agent-Oriented Programming
How do humans choose and/or explain actions? Examples:
I believe it rains; so, I will take an umbrella with me.
I go to the video store because I want to rent I, Robot.
I don't believe buses run today, so I take the train.
Use intuitive common-sense concepts: beliefs + goals => action.
See Chapter 1 of the Programming Guide.

Selecting Actions: Action Rules
Action rules are used to define a strategy for action selection. Defining a strategy for the Blocks World:
If a constructive move can be made, make it.
If a block is misplaced, move it to the table.
  program{
    if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
    if a-goal(tower([X|T])) then move(X,table).
  }
What happens: check the condition, e.g. can a-goal(tower([X|T])) be derived given the current mental state of the agent? If yes, then (potentially) select move(X,table).
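
Putting the pieces from the previous slides together, a minimal sketch of a complete blocks-world agent, using only the sections shown on these slides (the exact file layout and section wrappers may differ between Goal versions):

  knowledge{
    block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
    clear(X) :- block(X), not(on(Y,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
  }
  beliefs{
    on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
  }
  goals{
    on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
  }
  program{
    if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
    if a-goal(tower([X|T])) then move(X,table).
  }
  move(X,Y){
    pre { clear(X), clear(Y), on(X,Z), not(on(X,Y)) }
    post { not(on(X,Z)), on(X,Y) }
  }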

Order of Action Rules
Action rules are executed by default in linear order: the first rule that is able to fire is executed.
  program{
    if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
    if a-goal(tower([X|T])) then move(X,table).
  }
The default order can be changed to random: an arbitrary rule that is able to fire may be selected.
  program[order=random]{
    if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
    if a-goal(tower([X|T])) then move(X,table).
  }

Example Program: Action Rules
An agent program may allow for multiple action choices; with order=random an arbitrary choice is made (the original slide illustrates this with block d being moved to the table).
  program[order=random]{
    if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
    if a-goal(tower([X|T])) then move(X,table).
  }

The Sussman Anomaly (1/5)
Non-interleaved planners typically separate the main goal, on(A,B), on(B,C), into two sub-goals: on(A,B) and on(B,C). Planning for these two sub-goals separately and combining the plans found does not work in this case, however.
Initial state: c on a; a and b on the table.
Goal state: a on b, b on c, c on the table.

The Sussman Anomaly (2/5)
Initially, all blocks are misplaced. Only one constructive move can be made (c to the table); note that move(b,c) is not enabled. The only action enabled is moving c to the table (2x).
We need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
We have bel(tower([c,a])) and a-goal(tower([c])).
Initial state: c on a; a and b on the table. Goal state: a on b, b on c, c on the table.

The Sussman Anomaly (3/5)
The only constructive move enabled is moving b onto c.
We need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([c])).
Current state: a, b, and c on the table. Goal state: a on b, b on c, c on the table.

The Sussman Anomaly (4/5)
Again, only one constructive move is enabled: moving a onto b.
We need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([b,c])).
Current state: b on c, c on the table, a on the table. Goal state: a on b, b on c, c on the table.