The greedy best-first search algorithm always selects the path that appears best at the moment. Best-first search combines ideas from depth-first search and breadth-first search.

abihanaqvi8, 38 slides, Sep 01, 2025

About This Presentation

The greedy best-first search algorithm always selects the path that appears best at the moment. Best-first search combines ideas from depth-first search and breadth-first search: it uses a heuristic function to guide the search, letting us take advantage of both algorithms.
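The description above can be sketched in a few lines of Python. This is a minimal illustration, not code from the slides; the toy graph and the heuristic values are made up for the example.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier node whose heuristic h(n) looks best."""
    frontier = [(h[start], start, [start])]   # (heuristic value, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None  # goal unreachable

# Toy graph and illustrative heuristic values (assumptions for this sketch).
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C', 'G'], 'C': ['G'], 'G': []}
h = {'S': 7, 'A': 6, 'B': 2, 'C': 1, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'B', 'G']
```

Note that greedy best-first search follows the heuristic alone, so the path it returns is found quickly but is not guaranteed to be the cheapest one.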


Slide Content

Intelligent Systems Course No. CS 6109 / CE 7205

Education: PhD in Computer Science, University of Karachi, Karachi, Pakistan, 2019; MS in Computer Engineering (Computer Networking), Sir Syed University of Engineering & Technology, Karachi, Pakistan, 2003; Bachelor of Science in Computer Engineering, Sir Syed University of Engineering & Technology, Karachi, Pakistan, 1999.

Publication

CS 6109 / CE 7205 : Intelligent Systems. Class hours: 3 per week. Class time: Saturday 06:00 PM - 09:00 PM. Email: [email protected]. Grading: Mid Term 25%, Final 50%, Assignment 10%, Quiz 5%, Presentation 10%, Total 100%.

CS 6109 / CE 7205 : Intelligent Systems. Text book: Building Intelligent Systems: A Guide to Machine Learning Engineering, Geoff Hulten, Apress, 2018. Reference books: Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow: Concepts, Tools and Techniques to Build Intelligent Systems, 2nd Edition, Aurelien Geron, O'Reilly, 2019; Artificial Intelligence: A Modern Approach, 3rd Edition, S. Russell & P. Norvig.

Objective: Introduction; Knowing When to Use IS; Types of Problems That Need IS; Agents; Agents and Environments; Agent Terminologies; A Vacuum-Cleaner World with Just Two Locations; Intelligent Agents; Rational Agents; Task Environment; Environment Types; Agent Types.

Introduction. Intelligent Systems are all around us, in everyday devices like bulbs, watches, and thermostats.

Knowing When to Use IS. So you have a problem you want to solve: a new idea to create a business, something you think your customers will love. Sometimes an IS is the right approach, and sometimes it is not. This section discusses when an IS might be the right approach and provides guidance on when other approaches might be better. It begins by describing the types of problems that can benefit from an IS.

Types of Problems That Need IS. It is always best to do things the easy way: if you can solve your problem without an IS, maybe you should. One key factor in knowing whether you'll need an IS is how often you think you'll need to update the system before you have it right. If that number is small, then an IS is probably not right, e.g. NewBal = OldBal - Withdrawal.

Types of Problems That Need IS There are four situations that clearly require that level of iteration:

Big Problems. Some problems are big: they have so many variables and conditions that need to be addressed that they can't really be completed in a single shot, like web pages, books, television programs, etc.

Open-Ended Problems. Some problems are more than big. That is, they don't have a single fixed solution at all; they go on and on, requiring more work without end, like web pages, books, television programs, etc.

Time-Changing Problems. Things change; sometimes the right answer today is wrong tomorrow. Imagine a system for identifying human faces, a system for predicting stock prices, or a system for moving spam email to a junk folder.

Intrinsically Hard Problems: understanding human speech; identifying objects in pictures; predicting the weather more than a few minutes into the future; competing with humans in complex, open-ended games; understanding human expressions of emotion in text and video.

Agents An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Human Agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators. Robotic Agent: cameras and infrared range finders for sensors; various motors for actuators.

Agents and Environments. The agent function maps from percept histories to actions: f : P* → A. The agent program runs on the physical architecture to produce f, so agent = architecture + program.
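The mapping f : P* → A can be realized literally as a lookup table keyed on the whole percept sequence (a table-driven agent program). The sketch below is an illustration only; the percepts and table entries are hypothetical, borrowed from a two-square vacuum world.

```python
def table_driven_agent(table):
    """Return an agent program that looks up the full percept history in a table."""
    percepts = []                         # the percept sequence P* seen so far
    def program(percept):
        percepts.append(percept)
        # Unlisted percept sequences fall back to doing nothing (assumed default).
        return table.get(tuple(percepts), 'NoOp')
    return program

# Hypothetical table covering a few short percept sequences.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
agent = table_driven_agent(table)
print(agent(('A', 'Clean')))   # Right
print(agent(('B', 'Dirty')))   # Suck
```

The table grows exponentially with the length of the percept history, which is why real agent programs compute f compactly instead of tabulating it.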

Agents and Environments. An intelligent agent follows a Sense → Think → Act loop, also called the perception-action cycle.

Agents and Environments. Example: a robotic agent. Sensors: cameras, microphone, touch screen, balance sensor, distance sensor, etc. Actuators: motors, speakers, etc.

Agent Terminologies. Percept → the agent's input at any given instant. Percept sequence → the complete history of everything the agent has ever perceived. Agent function → defines the agent's behavior by mapping any given percept sequence to an action. Agent program → the implementation of the agent function.

A vacuum-cleaner world with just two locations. Locations: A & B. Percepts: location & content, e.g. [A, Dirty]. Actions: Left, Right, Suck, NoOp. Simple function: if the current location is dirty then Suck; else move to the other location.
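The simple function above translates directly into code. A minimal sketch, assuming percepts arrive as (location, status) pairs:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex rule from the slide: suck if dirty, otherwise switch square."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    # Only two locations, so "move to the other location" is Right from A, Left from B.
    return 'Right' if location == 'A' else 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(reflex_vacuum_agent(('A', 'Clean')))  # Right
print(reflex_vacuum_agent(('B', 'Clean')))  # Left
```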

Intelligent Agents: must sense; must act; must be autonomous (to some extent); must be rational.

Rational Agents. Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, based on the evidence provided by the percept sequence and whatever built-in knowledge the agent has. Performance measure: an objective criterion for success of an agent's behavior, e.g. the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.

Rational Agents. Rationality is distinct from omniscience (all-knowing with infinite knowledge). Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration). An agent is autonomous if its behavior is determined by its own percepts & experience (with the ability to learn and adapt) without depending solely on built-in knowledge.
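As a rough illustration of a performance measure, the sketch below scores the simple vacuum rule over a fixed number of steps, rewarding dirt cleaned and penalizing movement. The step count and the weights (+10 per square cleaned, -1 per move) are assumptions for this example, not values from the slides.

```python
def run_vacuum(world, start, steps=4):
    """Simulate the reflex rule; performance = dirt reward minus movement cost."""
    location, score = start, 0
    for _ in range(steps):
        if world[location] == 'Dirty':
            world[location] = 'Clean'
            score += 10                      # reward for cleaning (assumed weight)
        else:
            location = 'B' if location == 'A' else 'A'
            score -= 1                       # small cost per move (assumed weight)
    return score

print(run_vacuum({'A': 'Dirty', 'B': 'Dirty'}, 'A'))  # 18
```

Changing the weights changes which behavior counts as rational, which is why the performance measure must be specified before judging the agent.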

Task Environment. Before we design an intelligent agent, we must specify its "task environment" (PEAS): Performance measure, Environment, Actuators, Sensors.

PEAS for an automated taxi driver. Performance measure: getting to the correct destination; safe, fast, comfortable trip; maximize profits. Environment: roads, traffic, pedestrians, customers. Actuators: accelerator, brake, steering wheel. Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.

PEAS for a part-picking robot. Performance measure: percentage of parts in correct bins. Environment: conveyor belt with parts, bins. Actuators: jointed arm and hand. Sensors: camera, joint angle sensors.

PEAS for an interactive English tutor. Performance measure: maximize student's score on test. Environment: set of students. Actuators: screen display (exercises, suggestions, corrections). Sensors: keyboard.

Environment Types. Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time. Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.) Episodic (vs. sequential): the agent's experience is divided into atomic episodes; decisions do not depend on previous decisions/actions.

Environment Types. Static (vs. dynamic): the environment is unchanged while the agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.) Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions; how do we represent, abstract, or model the world? Single agent (vs. multi-agent): an agent operating by itself in an environment; does the other agent interfere with my performance measure?

Examples of task environments and their characteristics. Each task environment is classified along six dimensions (observable, deterministic, episodic, static, discrete, agents): crossword puzzle; chess with a clock; taxi driving; medical diagnosis; image analysis; part-picking robot; interactive English tutor.

Agent Types Five basic types in order of increasing generality: Simple reflex agents Model-based reflex agents Goal-based agents Utility-based agents Learning-based agents

Simple reflex agents. Example: the vacuum-cleaner world. No memory; fails if the environment is partially observable.

Model-based reflex agents. Model the state of the world by: modeling how the world changes; modeling how the agent's actions change the world; keeping a description of the current world state. This can work even with partial information, but it is unclear what to do without a clear goal.
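A minimal sketch of a model-based reflex agent for the two-square vacuum world: it keeps internal state (a model of which squares it believes are clean) so it can stop once everything is clean, something a memoryless reflex agent cannot do. The NoOp convention and the model details are assumptions for illustration.

```python
def model_based_vacuum_agent():
    """Agent program with internal state: a belief about each square's status."""
    model = {'A': None, 'B': None}     # None = status not yet observed
    def program(percept):
        location, status = percept
        model[location] = status       # update the model from the current percept
        if status == 'Dirty':
            model[location] = 'Clean'  # sucking will leave this square clean
            return 'Suck'
        if all(v == 'Clean' for v in model.values()):
            return 'NoOp'              # model says everything is clean: stop
        return 'Right' if location == 'A' else 'Left'
    return program

agent = model_based_vacuum_agent()
print(agent(('A', 'Clean')))  # Right (B's status is still unknown)
print(agent(('B', 'Clean')))  # NoOp  (model now says both squares are clean)
```

The closure over `model` is what distinguishes this from the simple reflex agent: the action depends on the percept history, not only on the current percept.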

Goal-based agents. Goals provide a reason to prefer one action over another. We need to predict the future: we need to plan and search.

Utility-based agents. Some solutions to goal states are better than others; which one is best is given by a utility function. Which combination of goals is preferred?
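A utility-based choice can be sketched as picking the action whose predicted resulting state maximizes a utility function. The states, actions, and utility below are toy assumptions made up for this sketch.

```python
def utility_based_choice(actions, result, utility, state):
    """Pick the action whose predicted resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: states are numbers, and the (assumed) utility prefers being near 10.
actions = ['add3', 'add5', 'sub1']
result = lambda s, a: {'add3': s + 3, 'add5': s + 5, 'sub1': s - 1}[a]
utility = lambda s: -abs(10 - s)

print(utility_based_choice(actions, result, utility, 5))   # add5 (reaches 10 exactly)
print(utility_based_choice(actions, result, utility, 12))  # sub1 (moves back toward 10)
```

Unlike a goal test, which only says whether a state is acceptable, the utility function ranks all outcomes, so the agent can trade off conflicting goals.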

Learning Agents. How does an agent improve over time? By monitoring its performance and suggesting better modeling, new action rules, etc. The learning element evaluates the current world state, changes action rules, and suggests explorations; the "old agent" (the performance element) models the world and decides on the actions to be taken.

Query Regarding Lecture Email: [email protected]

Thanks Email: [email protected]