Lecture 3 - Artificial Intelligence (Mzuzu)


About This Presentation

Artificial intelligence


Slide Content

AI Lecture 3

Structure of Agents
The job of AI is to design an agent program that implements the agent function, the mapping from percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators; we call this the architecture: agent = architecture + program. If the program is going to recommend actions like Walk, the architecture had better have legs.

Agent Program
The agent program takes the current percept as input from the sensors and returns an action to the actuators. To build a rational agent in this way, we as designers must construct a table that contains the appropriate action for every possible percept sequence.
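The table-driven scheme above can be sketched in a few lines of Python. This is only an illustration of the idea, not code from the lecture; the percept names, actions, and the tiny vacuum-world table are made up for the example.

```python
# A minimal sketch of a table-driven agent program.
# The table, percepts, and actions below are illustrative only.

def make_table_driven_agent(table):
    """Return an agent program that looks up the whole percept sequence in a table."""
    percepts = []  # the percept history seen so far

    def agent_program(percept):
        percepts.append(percept)
        # The table maps complete percept sequences to actions.
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# Example: a tiny table for a two-square vacuum world (locations A and B).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # -> Suck
```

Even for this toy world the table grows with every possible percept sequence, which is why the agent types below replace the explicit table with more compact decision rules.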

Types of Agent Programs
We outline four basic kinds of agent programs that embody the principles underlying almost all intelligent systems, plus a fifth kind that can improve any of them:
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents
- Learning agents

Simple reflex agents
The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history. The vacuum agent is an example, because its decision is based only on the current location and on whether that location contains dirt.

Simple reflex agents
The agent acts according to a condition-action rule, a rule whose condition matches the current state as defined by the percept, written for example as: if car-in-front-is-braking then initiate-braking. The same idea applies to human beings, e.g. blinking when something approaches the eye.
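As an illustration only (not from the slides), a simple reflex agent for the two-square vacuum world can be written as a handful of condition-action rules over the current percept:

```python
# A minimal sketch of a simple reflex agent for a two-square vacuum world.
# It looks only at the current percept (location, status); names are illustrative.

def reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action rules, checked against the current percept only.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left
```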

Schematic diagram of a simple reflex agent

Simple reflex agents

Simple reflex agents
This will work only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable. Even a little bit of unobservability can cause serious trouble. Simple reflex agents have the admirable property of being simple, but they turn out to be of limited intelligence. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments.

Model-based reflex agents
The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program.

Model-based reflex agents
First, we need some information about how the world evolves independently of the agent, for example, that an overtaking car generally will be closer behind than it was a moment ago. Second, we need some information about how the agent's own actions affect the world, for example, that when the agent turns the steering wheel clockwise, the car turns to the right. This knowledge about "how the world works" is called a model of the world, and an agent that uses such a model is called a model-based agent.

A model-based reflex agent

A model-based reflex agent
Here is a breakdown of how model-based reflex agents work:
- Perception: The agent perceives its environment through sensors.
- State representation: Using the percept, the agent updates its internal state to reflect the current state of the environment. This internal state is a model that includes not just the current percept but also some history of previous percepts or actions.
- Model: The agent has a model of how the world works. This model helps in predicting the effects of its actions on the environment; it includes knowledge about how the world changes and the consequences of the agent's actions.
- Decision making: The agent uses this model to choose actions. It considers its current state and the model to decide the best course of action.
- Action: The agent executes the chosen action through its actuators.

A model-based reflex agent
Advantages of model-based reflex agents:
- Better decision making: Because they maintain an internal model of the world, these agents can make more informed and sophisticated decisions.
- Learning and adaptation: They can adapt their behavior based on past experiences and learning.
- Handling complex environments: They are better suited for complex and dynamic environments where the state of the world changes frequently.

Example of a model-based reflex agent
Consider a robot vacuum cleaner as a model-based reflex agent. Instead of just moving randomly or following a simple rule like "turn when you hit an obstacle", a model-based robot vacuum could:
- Perceive: Detect dirt, obstacles, and the layout of the room.
- State representation: Keep an internal map of the room and track where it has already cleaned.
- Model: Understand that certain areas are more likely to accumulate dirt and that moving in certain patterns will cover the room more efficiently.
- Decision making: Decide to clean high-traffic areas more frequently or optimize its path to cover the entire room without redundancy.
- Action: Move in a calculated manner to clean the room efficiently.
By maintaining a model of the environment, the robot vacuum can clean more effectively and efficiently than a simple reflex agent.
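A rough Python sketch of the perceive / update-state / match-rule loop described above is given below. The update_state function, the rules, and the state layout are placeholders that a designer would supply; they are not part of the lecture.

```python
# A hedged sketch of a model-based reflex agent program.
# update_state encodes "how the world evolves" and "what my actions do";
# rules is a list of (condition, action) pairs. All are designer-supplied placeholders.

def make_model_based_reflex_agent(update_state, rules, initial_state):
    state = {"world": initial_state, "last_action": None}

    def agent_program(percept):
        # 1. Fold the new percept into the internal model of the world.
        state["world"] = update_state(state["world"], state["last_action"], percept)
        # 2. Pick the first rule whose condition matches the modelled state.
        for condition, action in rules:
            if condition(state["world"]):
                state["last_action"] = action
                return action
        state["last_action"] = "NoOp"
        return "NoOp"

    return agent_program
```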

Goal-Based Agents
Knowing something about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on; the correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable, for example, being at the passenger's destination.

Goal-Based Agents
The agent program can combine this with the model (the same information as was used in the model-based reflex agent) to choose actions that achieve the goal. Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
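As an illustrative sketch (assuming a simple transition model result(state, action) and a goal test, neither of which is specified in the slides), a goal-based choice among single actions might look like this:

```python
# A minimal sketch of a goal-based choice: simulate each action with the model
# and keep one whose predicted result satisfies the goal. Names are illustrative.

def goal_based_action(state, actions, result, goal_test):
    """result(state, action) is the model's prediction; goal_test checks desirability."""
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None  # no single action reaches the goal; a search/planner is needed

# Example: at a junction, only "turn_left" leads toward the destination.
result = lambda s, a: {"turn_left": "at_destination"}.get(a, "elsewhere")
print(goal_based_action("at_junction", ["straight", "turn_left", "turn_right"],
                        result, lambda s: s == "at_destination"))  # -> turn_left
```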

Utility-based agents
Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. Utility-based agents are a type of intelligent agent in artificial intelligence that make decisions based on a utility function. The primary goal of utility-based agents is to maximize their utility over time.

Utility-based agents
The utility function assigns a numerical value to each possible state, representing the agent's preferences: the higher the value, the more desirable the state is for the agent. Utility-based agents aim to select actions that maximize their expected utility. This involves evaluating the potential outcomes of various actions and choosing the one with the highest expected utility.
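A minimal sketch of this idea, assuming an outcome model outcomes(state, action) that returns (probability, next state) pairs and a designer-supplied utility function; both are placeholders, not from the lecture:

```python
# A hedged sketch of choosing the action with the highest expected utility.

def expected_utility(state, action, outcomes, utility):
    return sum(p * utility(s) for p, s in outcomes(state, action))

def utility_based_action(state, actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(state, a, outcomes, utility))

# Example: two routes reach the destination; the faster, safer one has higher utility.
outcomes = lambda s, a: [(1.0, a)]                       # deterministic toy model
utility = {"fast_route": 0.9, "slow_route": 0.4}.get
print(utility_based_action("start", ["fast_route", "slow_route"], outcomes, utility))
# -> fast_route
```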

Learning Agents
Learning agents are a type of intelligent agent in artificial intelligence that have the ability to learn from their experiences and improve their performance over time. Unlike traditional agents that operate based on predefined rules or models, learning agents adapt to new situations and improve their decision-making abilities through learning processes.

Components of a Learning Agent
- Learning element: This component is responsible for improving the agent's performance based on feedback from the environment. It modifies the agent's behavior using various learning techniques, such as supervised learning, unsupervised learning, reinforcement learning, or other methods.
- Performance element: This component decides the actions to take based on the current percepts and knowledge. It is the part of the agent that interacts with the environment.

Components of a Learning Agent
- Critic: The critic evaluates the actions taken by the performance element and provides feedback. This feedback is used by the learning element to improve future actions. The critic helps the agent understand how well it is performing and identifies areas for improvement.
- Problem generator: This component suggests actions that lead to new and informative experiences. By exploring different actions, the agent can discover better strategies and improve its performance.
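Purely as a schematic sketch of how the four components could fit together in one decision cycle (the slides do not prescribe any particular interface, so all four callables are placeholders):

```python
# A schematic sketch of one step of a learning agent.
# performance_element, critic, learning_element, problem_generator are placeholders.

def learning_agent_step(percept, performance_element, critic,
                        learning_element, problem_generator):
    action = performance_element(percept)      # decide using current knowledge
    feedback = critic(percept, action)         # judge the action against a performance standard
    learning_element(feedback)                 # adjust the performance element for the future
    suggestion = problem_generator()           # optionally propose an exploratory action
    return suggestion if suggestion is not None else action
```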

Solving Problems by Searching
A problem-solving agent is a kind of goal-based agent: it is goal-driven and focuses on satisfying the goal. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. We will consider a goal to be a set of world states, exactly those states in which the goal is satisfied.

Well-defined problems and solutions
A problem can be defined formally by five components:
- The initial state that the agent starts in. For example, the initial state for our agent might be In(Arad).
- A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these actions is applicable in s.
- A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s.

Well-defined problems and solutions
We also use the term successor to refer to any state reachable from a given state by a single action, e.g. RESULT(In(Arad), Go(Zerind)) = In(Zerind). Together, the initial state, actions, and transition model implicitly define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions.

Well-defined problems and solutions
The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions.
- The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.

Well-defined problems and solutions
- A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
A solution to a problem is an action sequence that leads from the initial state to a goal state.
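These five components can be collected into a small Python class. The sketch below is illustrative; the road-map fragment uses the Arad distances from the standard Romania example, and the lambda-based interface is an assumption, not something fixed by the lecture.

```python
# A minimal sketch of the five-component problem definition.

class Problem:
    def __init__(self, initial, actions, result, goal_test, step_cost):
        self.initial = initial          # initial state, e.g. "Arad"
        self.actions = actions          # actions(s) -> actions applicable in s
        self.result = result            # result(s, a) -> successor state (transition model)
        self.goal_test = goal_test      # goal_test(s) -> True if s is a goal state
        self.step_cost = step_cost      # step_cost(s, a, s2) -> cost of one step

    def path_cost(self, path):
        """Sum of step costs along a path of (state, action, next_state) triples."""
        return sum(self.step_cost(s, a, s2) for s, a, s2 in path)

# Illustrative fragment of the Romania road map.
roads = {"Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118}}
problem = Problem(
    initial="Arad",
    actions=lambda s: list(roads.get(s, {})),
    result=lambda s, a: a,                      # Go(X) lands the agent in X
    goal_test=lambda s: s == "Bucharest",
    step_cost=lambda s, a, s2: roads[s][s2],
)
```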

Vacuum Cleaner Example

Measuring problem-solving performance
We evaluate an algorithm's performance in four ways:
- Completeness: Is the algorithm guaranteed to find a solution when there is one?
- Optimality: Does the strategy find the optimal solution?
- Time complexity: How long does it take to find a solution?
- Space complexity: How much memory is needed to perform the search?

Uninformed Search Strategies (Blind Search)
The term means that the strategies have no additional information about states beyond that provided in the problem definition. All they can do is generate successors and distinguish a goal state from a non-goal state. All search strategies are distinguished by the order in which nodes are expanded.

Breadth-first search
Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded. This is achieved very simply by using a FIFO queue for the frontier. Thus, new nodes (which are always deeper than their parents) go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first.

Breadth-first search
Note also that the algorithm, following the general template for graph search, discards any new path to a state already in the frontier or explored set; it is easy to see that any such path must be at least as deep as the one already found. Thus, breadth-first search always has the shallowest path to every node on the frontier. Note that as soon as a goal node is generated, we know it is the shallowest goal node, because all shallower nodes must have been generated already and failed the goal test.
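A compact breadth-first graph search in Python, assuming the problem interface sketched earlier (initial, actions, result, goal_test), with the frontier kept as a FIFO queue and the goal test applied when a node is generated; this is an illustrative sketch, not code from the lecture.

```python
from collections import deque

def breadth_first_search(problem):
    """Return a list of actions from the initial state to a goal, or None."""
    if problem.goal_test(problem.initial):
        return []
    frontier = deque([(problem.initial, [])])   # FIFO queue of (state, actions so far)
    explored = {problem.initial}
    while frontier:
        state, path = frontier.popleft()
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:
                if problem.goal_test(child):    # goal test on generation
                    return path + [action]
                explored.add(child)
                frontier.append((child, path + [action]))
    return None  # failure
```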

Breadth-first search

Breadth-first search
Breadth-first search is complete: if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after generating all shallower nodes (provided the branching factor b is finite). Breadth-first search is optimal if the path cost is a nondecreasing function of the depth of the node.

Uniform Cost Search
When all step costs are equal, breadth-first search is optimal because it always expands the shallowest unexpanded node. Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n). It performs a search based on the lowest path cost. UCS helps us find the path from the starting node to the goal node with the minimum path cost.

Uniform Cost Search
Consider the scenario where we need to move from point A to point B: which path would you choose, A->C->B or A->B?

Uniform Cost Search
The path cost of going directly from A to B is 5, and the path cost of going from A to C to B is 4 (2+2). Because UCS considers the least path cost, which is 4, the path A to C to B would be selected by uniform cost search. The UCS algorithm uses a function f(n) to decide which node of the tree should be expanded in order to find the optimal path to the goal node, where f(n) = g(n) and g(n) is the path cost function. It is easy to see that uniform-cost search is optimal in general.
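A corresponding uniform-cost search sketch, using a priority queue ordered by g(n) and applying the goal test at expansion time (which is what makes it optimal). The tiny A/B/C example reproduces the cost-4 route from the slide and reuses the illustrative Problem class above; both are assumptions for illustration.

```python
import heapq

def uniform_cost_search(problem):
    """Return (actions, cost) for a cheapest path to a goal, or None."""
    frontier = [(0, problem.initial, [])]       # priority queue of (g, state, actions)
    best_g = {problem.initial: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):            # goal test on expansion, for optimality
            return path, g
        for action in problem.actions(state):
            child = problem.result(state, action)
            new_g = g + problem.step_cost(state, action, child)
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [action]))
    return None

# The A/B/C example from the slide: A->B costs 5, A->C->B costs 2+2=4.
toy_roads = {"A": {"B": 5, "C": 2}, "C": {"B": 2}}
toy = Problem("A", lambda s: list(toy_roads.get(s, {})), lambda s, a: a,
              lambda s: s == "B", lambda s, a, s2: toy_roads[s][s2])
print(uniform_cost_search(toy))   # -> (['C', 'B'], 4)
```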

Depth-first search
Depth-first search always expands the deepest node in the current frontier of the search tree. The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the frontier, so the search "backs up" to the next deepest node that still has unexplored successors. Depth-first search uses a LIFO queue: a LIFO queue means that the most recently generated node is chosen for expansion.

Depth-first search
This must be the deepest unexpanded node because it is one deeper than its parent, which, in turn, was the deepest unexpanded node when it was selected.
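And a depth-first counterpart, with the frontier kept as a LIFO stack (most recently generated node expanded first). Like the others, this is an illustrative sketch under the same assumed problem interface, not code from the lecture; note that depth-first search is neither complete nor optimal in general.

```python
def depth_first_search(problem):
    """Return a list of actions to a goal found depth-first, or None."""
    frontier = [(problem.initial, [])]          # LIFO stack of (state, actions so far)
    explored = set()
    while frontier:
        state, path = frontier.pop()            # most recently generated node first
        if problem.goal_test(state):
            return path
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:
                frontier.append((child, path + [action]))
    return None  # failure
```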

Depth-first search (sequence of diagram slides illustrating the expansion step by step)

Depth First Search
Expansion order: (S, d, b, a, c, a, e, h, p, q, q, r, f, c, a, G)