Chapter Two: Intelligent Agents in AI
By Abdela A.
Chapter Outline
• Agents and environment
• Rationality vs. omniscience
• Structure of intelligent agents
• Task environments
• Properties of task environments
• PEAS
• Agent types
Introduction
The concept of rationality can be applied to a wide variety of agents operating in any imaginable environment. We begin by examining agents, environments, and the coupling between them. The observation that some agents behave better than others leads naturally to the idea of a rational agent: one that behaves as well as possible. How well an agent can behave depends on the nature of the environment; some environments are more difficult than others.
An AI system can be defined as the study of rational agents and their environments. An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An AI agent can have mental properties such as knowledge, beliefs, and intentions.
What is an Agent?
An agent runs in a cycle of perceiving, thinking, and acting. An agent receives percepts one at a time and maps this percept sequence to actions. The world around us is full of agents, such as thermostats, cell phones, and cameras; even we ourselves are agents. Before moving forward, we should first understand sensors, effectors, and actuators.
Kinds of Agents
An agent can be:
• Human agent: has eyes, ears, and other organs as sensors, and hands, legs, and the vocal tract as actuators.
• Robotic agent: can have cameras and infrared range finders as sensors, and various motors as actuators.
• Software agent: receives keystrokes and file contents as sensory input, acts on those inputs, and displays output on the screen.
[Figure slides: examples of agents]
Agents and Environment
An agent "perceives its environment through sensors and acts upon that environment through actuators." An agent's choice of action can depend on the entire history of percepts observed so far, but not on anything it has not perceived. An agent can therefore be seen as a mapping from percept sequences to actions:
f : P* → A
Structure of an AI Agent
The less an agent relies on its built-in knowledge, as opposed to its current percept sequence, the more autonomous it is. The agent program runs on the physical architecture to produce the agent function. The task of AI is to design an agent program that implements the agent function.
The structure of an intelligent agent is a combination of an architecture and an agent program (see the sketch below):
Agent = Architecture + Agent Program
• Architecture: the machinery that the AI agent executes on.
• Agent program: an implementation of the agent function; it executes on the physical architecture to produce the function f.
• Agent function: maps a percept sequence to an action, f : P* → A.
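To make the Agent = Architecture + Agent Program decomposition concrete, here is a minimal Python sketch (not from the slides; all names are illustrative): an Agent object records the percept sequence and delegates action choice to an agent program implementing f : P* → A.

```python
# Minimal sketch of the agent abstraction (illustrative names only).

class Agent:
    """Keeps the percept history and delegates to an agent program."""

    def __init__(self, program):
        self.program = program   # the agent program: percept history -> action
        self.percepts = []       # the percept sequence to date (P*)

    def step(self, percept):
        self.percepts.append(percept)
        return self.program(self.percepts)   # f : P* -> A


# Example agent program: a trivial vacuum agent (hypothetical percept format).
def vacuum_program(percepts):
    location, status = percepts[-1]   # only the current percept is used here
    return "Suck" if status == "Dirty" else "Right"


agent = Agent(vacuum_program)
print(agent.step(("A", "Dirty")))     # -> Suck
```

Here the Python runtime plays the role of the architecture, while vacuum_program is one possible agent program running on it.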
Sensors, Actuators, and Effectors
• Sensor: a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
• Actuators: the components of a machine that convert energy into motion; they are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
• Effectors: devices that affect the environment, such as legs, wheels, arms, fingers, wings, fins, and a display screen.
Intelligent Agents
An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. The four main rules for an AI agent are:
• Rule 1: An AI agent must have the ability to perceive the environment.
• Rule 2: The observations must be used to make decisions.
• Rule 3: Decisions should result in an action.
• Rule 4: The action taken by an AI agent must be a rational action.
Rational Agents
A rational agent is an agent that has clear preferences, models uncertainty, and acts so as to maximize its performance measure over all possible actions. Rational action is especially important in reinforcement learning, where the agent receives a positive reward for each good action and a negative reward for each wrong action. A rational agent is said to do the right thing. Note: rational agents in AI are closely related to intelligent agents.
Acting Rationally
A rational agent is one that does the right thing. When an agent is placed in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If that sequence is desirable, the agent has performed well. This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.
Obviously, there is not one fixed performance measure for all tasks and agents; typically, a designer will devise one appropriate to the circumstances. As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
Omniscience, Learning, and Autonomy
An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality. Rationality maximizes expected performance, while perfection maximizes actual performance. Rationality does not require omniscience, because the rational choice depends only on the percept sequence to date. Taking actions in order to modify future percepts, sometimes called information gathering, is an important part of rationality. A rational agent is required not only to gather information but also to learn as much as possible from what it perceives.
Rationality
The rationality of an agent is measured by its performance measure. What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
Task Environments: PEAS Representation
PEAS is a model used to specify the task environment that an AI agent works in. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It stands for:
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
PEAS for a Self-Driving Car
For a self-driving car, the PEAS representation would be:
• Performance measure: safe, fast, legal, comfortable trip; maximize profits
• Environment: roads, other traffic, pedestrians, customers
• Actuators: steering wheel, accelerator, brake, signal, horn
• Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
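Purely as an illustration, a PEAS description can be written down as a simple data structure. The dataclass below is a hypothetical sketch (not a standard API); its field values restate the slide's self-driving-car example.

```python
# A PEAS description as a plain data structure (illustrative sketch only).

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list[str]   # P: performance measure
    environment: list[str]   # E: environment
    actuators: list[str]     # A: actuators
    sensors: list[str]       # S: sensors

self_driving_car = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
```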
Dimensions of Task Environments
Task environments can be categorized along the following dimensions:
• Fully observable / partially observable
• Single agent / multi-agent
• Known / unknown
• Accessible / inaccessible
• Static / dynamic
• Deterministic / nondeterministic
• Discrete / continuous
• Episodic / nonepisodic
Properties of Task Environments
Fully observable vs. partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, the environment is called fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world.
An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data. For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. If the agent has no sensors at all, the environment is unobservable.
Single agent vs. multi-agent: An agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment. In chess, the opponent B is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A's performance measure; thus, chess is a competitive multi-agent environment. In the taxi-driving environment, on the other hand, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multi-agent environment.
Known vs. unknown: This distinction refers to the agent's (or designer's) state of knowledge about the "laws of physics" of the environment. In a known environment, the outcomes (or outcome probabilities, if the environment is stochastic) of all actions are given. If the environment is unknown, the agent will have to learn how it works in order to make good decisions. A known environment can still be partially observable: in solitaire card games, I know the rules but cannot see the cards that have not yet been turned over. Conversely, an unknown environment can be fully observable: in a new video game, the screen may show the entire game state, but you still don't know what the buttons do until you try them.
• Accessible vs. inaccessible: If an agent's sensors give it access to the complete state of the environment needed to choose an action, the environment is accessible. Such environments are convenient, since the agent is freed from the task of keeping track of changes in the environment.
• Deterministic vs. nondeterministic: An environment is deterministic if its next state is completely determined by its current state and the action of the agent. In an accessible and deterministic environment, the agent need not deal with uncertainty.
• Episodic vs. nonepisodic: In an episodic environment, subsequent episodes do not depend on the actions that occurred in previous episodes. Such environments do not require the agent to plan ahead.
• Static vs. dynamic: An environment that does not change while the agent is deliberating is static. In a static environment, the agent need not worry about the passage of time while it is thinking, nor does it have to keep observing the world; the time it takes to compute a good strategy does not matter.
• Discrete vs. continuous: If the number of distinct percepts and actions is limited, the environment is discrete; otherwise, it is continuous.
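To summarize the dimensions above, here is a hedged sketch (illustrative names, not a standard API) that records an environment's properties as boolean flags, using the taxi-driving environment from this section as the worked example.

```python
# Task-environment properties as flags (illustrative sketch only).

from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    fully_observable: bool
    single_agent: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

taxi_driving = TaskEnvironment(
    fully_observable=False,   # cannot see what other drivers are thinking
    single_agent=False,       # partially cooperative multi-agent
    deterministic=False,      # traffic is not fully predictable
    episodic=False,           # current decisions affect future states
    static=False,             # the world changes while the agent deliberates
    discrete=False,           # continuous speeds, positions, steering angles
)
```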
The Functions of an Artificial Intelligence Agent
AI agents perform these functions continuously:
• Perceiving dynamic conditions in the environment
• Acting to affect conditions in the environment
• Using reasoning to interpret perceptions
• Problem-solving
• Drawing inferences
• Determining actions and their outcomes
Structure and Types of Intelligent Agents
An agent is the part of an AI system that takes actions or decisions based on the information it perceives from the environment. For example, a robotic agent uses information it senses from the environment through its sensors to carry out a particular action, while a human agent uses sensory organs to sense the environment and acts through body parts.
Structure of AI Agents
An AI agent comprises an architecture and an agent program. The architecture is the machinery on which the agent executes its tasks; it consists of a device with sensors and effectors or actuators. The agent program is an implementation of the agent function, which maps the agent's percept sequence (perceptual history) to a particular action.
In short:
Agent = Architecture + Agent Program
• Architecture = the machinery that the agent executes on
• Agent program = an implementation of the agent function
Interaction of Agents with the Environment
An agent interacts with its environment through sensors and effectors: sensors perceive the environment, and actuators or effectors act upon it. This interaction can occur in two ways:
• Perception: a passive interaction in which the environment remains unchanged while the agent takes up information from it. The agent gains information through its sensors without changing the surroundings.
• Action: an active interaction in which the environment changes when the action is performed. The agent uses an effector or actuator to complete an action, changing the surroundings in the process.
How Agents Act
Agents in artificial intelligence act by:
• Mapping percept sequences to actions: the mapping is a table that associates each percept sequence (perceptual history) with an action. An ideal agent can be designed by specifying the action corresponding to each possible percept sequence, as the sketch after this list shows.
• Autonomy: the agent designer shapes the agent's behavior through its experience and its built-in knowledge. Autonomy means taking actions based on the agent's own experience. A system with an autonomous intelligent agent can operate and adapt successfully in a wide range of environments.
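The "mapping" idea in the first bullet can be sketched as a table-driven agent: look up the action for the complete percept sequence observed so far. The table entries below are hypothetical vacuum-world examples, not from the slides.

```python
# Table-driven agent: the percept-sequence-to-action mapping made literal.
# Table entries are hypothetical vacuum-world examples.

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []  # the percept sequence observed so far

def table_driven_agent(percept):
    percepts.append(percept)
    # Look up the action for the entire percept history; unknown -> NoOp.
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Dirty")))   # -> Suck
```

The table grows explosively with the length of the percept sequence, which is exactly why the problem list for simple reflex agents below includes "too big to generate and store."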
Types of Intelligent Agents (Agent Program Types)
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of these agents can improve their performance and generate better actions over time:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
Simple Reflex Agents
Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept alone, ignoring the rest of the percept history, and they succeed only in fully observable environments. A simple reflex agent works on condition-action rules: each rule maps a state (condition) directly to an action.
The agent function is based on condition-action rules: an action is taken only when its condition is true. The agent function succeeds only if the environment is fully observable; in a partially observable environment the agent can enter infinite loops, which can be escaped only by randomizing its actions.
They choose actions based only on the current percept, and they are rational only if a correct decision can be made on the basis of the current percept alone. Example: an ATM that dispenses money if the entered PIN matches the given account number.
Problems with the simple reflex agent design approach:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the current state.
• The rule table is usually too big to generate and store.
• Not adaptive to changes in the environment.
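Despite these limitations, the condition-action idea is easy to sketch. The following illustrative Python fragment (the rule set and percept format are assumptions, not from the slides) fires the first matching rule against the current percept only; no history is kept.

```python
# Simple reflex agent: condition-action rules over the current percept only.
# Rules and percept format are hypothetical vacuum-world examples.

RULES = [
    (lambda state: state["status"] == "Dirty", "Suck"),
    (lambda state: state["location"] == "A",   "Right"),
    (lambda state: state["location"] == "B",   "Left"),
]

def simple_reflex_agent(percept):
    # Interpret the raw percept as a state description.
    state = {"location": percept[0], "status": percept[1]}
    for condition, action in RULES:   # the first matching rule fires
        if condition(state):
            return action
    return "NoOp"

print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
```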
Model-Based Reflex Agents
A model-based agent can work in a partially observable environment and track the situation. It has two important components:
• Model: knowledge about "how things happen in the world", which is why it is called a model-based agent.
• Internal state: a representation of the current state based on the percept history.
These agents use their model of the world to choose actions. Updating the internal state requires information about how the world evolves and how the agent's actions affect the world.
A model-based agent is an advanced version of the simple reflex agent. Like a simple reflex agent, it responds to events based on predefined conditions; in addition, it stores an internal state (past information) derived from previous events and updates it at each step. This internal state helps the agent handle partially observable environments. To choose an action, it relies on both the internal state and the current percept. However, it is almost impossible to determine the exact state of a partially observable environment.
Example: a car-driving agent that maintains its own internal state and acts according to how the environment appears to it.
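A minimal sketch of the model-based pattern, assuming a user-supplied update_state model and condition-action rules (all names are illustrative): the internal state is refreshed from the previous state, the last action, and the current percept before a rule is chosen.

```python
# Model-based reflex agent: internal state + condition-action rules.
# update_state and rules are plug-in callables supplied by the designer.

class ModelBasedAgent:
    def __init__(self, update_state, rules):
        self.update_state = update_state  # model: how the world evolves
        self.rules = rules                # condition-action rules
        self.state = {}                   # internal state (summary of history)
        self.last_action = None

    def step(self, percept):
        # Fold the new percept into the internal state using the model.
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:   # first matching rule fires
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"
```

The only structural difference from the simple reflex sketch above is the update_state call: rules now match against the tracked state rather than the raw percept.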
Goal-Based Agents
Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do. The agent also needs goal information that describes desirable situations. Goal-based agents expand the capabilities of model-based agents by adding this "goal" information and choosing actions that achieve the goal. They may have to consider a long sequence of possible actions before deciding whether the goal can be achieved.
The action taken by these agents depends on the distance from their goal (the desired situation); actions are intended to reduce the distance between the current state and the desired state. To attain its goal, the agent makes use of search and planning algorithms. One drawback of goal-based agents is that they do not always select the most optimal path to the goal.
The agent program can combine the goal information with the model to choose actions that achieve the goal. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible: the goal-based agent's behavior can easily be changed. They usually require search and planning.
Example: searching for a solution to the 8-queens puzzle.
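As an illustration of the "search and planning" step, the sketch below uses breadth-first search to find an action sequence that reaches a goal state. The goal_test and successors callbacks are assumed interfaces for this sketch, not a fixed API.

```python
# Goal-based planning via breadth-first search (illustrative sketch).

from collections import deque

def plan(start, goal_test, successors):
    """Return a list of actions leading from start to a goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        # successors(state) yields (action, next_state) pairs.
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # no plan found

# Hypothetical usage: plan(start_state, lambda s: is_goal(s), successors_fn)
```

Breadth-first search finds the shortest plan in the number of actions, though a goal-based agent in general need not find the optimal one, as noted above.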
Utility-Based Agents
These agents are similar to goal-based agents but add a utility measurement, which provides a measure of success in a given state. A utility-based agent acts based not only on its goals but also on the best way to achieve them. It is useful when there are multiple possible alternatives and the agent has to choose the best action. The utility function maps each state to a real number that indicates how well each action achieves the goals.
Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. Goals provide only a crude binary distinction between "happy" and "unhappy" states.
The action taken by these agents depends on the end objective, so they are called utility agents. They are used when a problem has multiple solutions and the best alternative has to be chosen; the choice is based on each state's utility. In effect, they perform a cost-benefit analysis of each solution and select the one that achieves the goal at minimum cost.
Example: a military planning robot that proposes a particular plan of action.
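A hedged sketch of utility-based selection (all names and numbers are illustrative): each candidate action is scored by the utility of its predicted outcome, and the best-scoring action is chosen.

```python
# Utility-based action selection (illustrative sketch).

def utility_based_agent(state, actions, predict, utility):
    """Pick the action whose predicted resulting outcome has highest utility."""
    return max(actions, key=lambda a: utility(predict(state, a)))

# Hypothetical example: a taxi choosing a route. Faster, safer routes
# score higher; the weights are made up for illustration.
routes = ["highway", "back_roads"]
predict = lambda s, a: ({"time": 20, "risk": 0.2} if a == "highway"
                        else {"time": 35, "risk": 0.1})
utility = lambda outcome: -outcome["time"] - 100 * outcome["risk"]

print(utility_based_agent({}, routes, predict, utility))  # -> highway
```

Note how the utility function turns the crude binary goal ("reach the destination") into a graded score that trades off time against risk.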
Learning Agents
A learning agent can learn from its past experiences. It starts with basic knowledge and then adapts automatically through learning. A learning agent has four major components, wired together in the sketch that follows this list:
• Learning element: responsible for making improvements by learning from the environment.
• Critic: gives the learning element feedback on how well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
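A minimal sketch, assuming plug-in callables for each of the four components (all names are illustrative placeholders, not a fixed API), of how they might fit together in one decision step:

```python
# Learning agent: wiring the four components together (illustrative sketch).

class LearningAgent:
    def __init__(self, performance_element, critic,
                 learning_element, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.critic = critic                            # scores behaviour vs. a standard
        self.learning_element = learning_element        # improves the performance element
        self.problem_generator = problem_generator      # proposes exploratory actions

    def step(self, percept):
        # The critic evaluates the current percept against the standard.
        feedback = self.critic(percept)
        # The learning element uses the feedback to improve future behaviour.
        self.learning_element(self.performance_element, feedback)
        # Occasionally try something new and informative; otherwise act normally.
        exploratory = self.problem_generator(percept)
        return exploratory or self.performance_element(percept)
```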