AIML Unit 1 ppt for the betterment and help of students
About This Presentation
AIML unit1
Size: 4.02 MB
Language: en
Added: Oct 20, 2024
Slides: 35 pages
Slide Content
School of Computer Science & Artificial Intelligence. Dr Mohammed Ali Shaik, Assistant Professor. Email: [email protected] Phone: 9000498496. 30 July 2024
Artificial Intelligence
Introduction to Artificial Intelligence: how do we think?
Acting humanly: the Turing Test approach. The Turing Test was proposed by Alan Turing (1950). The computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.
The computer would need to possess the following capabilities:
• Natural language processing to enable it to communicate successfully in English.
• Knowledge representation to store what it knows or hears.
• Automated reasoning to use the stored information to answer questions and to draw new conclusions.
• Machine learning to adapt to new circumstances and to detect and extrapolate patterns.
The total Turing Test includes a video signal, so the computer will also need:
• Computer vision to perceive objects.
• Robotics to manipulate objects and move about.
Thinking humanly: the cognitive modeling approach. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind. Real cognitive science is necessarily based on experimental investigation of actual humans or animals. In the early days of AI, people thought that if an algorithm performed well on a task, it must therefore be a good model of human performance.
Thinking rationally: the "laws of thought" approach. The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking"; for example: "Socrates is a man; all men are mortal; therefore Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic. The so-called logicist tradition within artificial intelligence hopes to build on such programs to create intelligent systems.
Acting rationally: the rational agent approach. Making correct inferences is sometimes part of being a rational agent, but not all of it. An agent is just something that acts (agent comes from the Latin agere, to do). A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. This approach has two advantages:
• It is more general than the "laws of thought" approach, because correct inference is just one of several possible mechanisms for achieving rationality.
• It is more amenable to scientific development than approaches based on human behavior or human thought.
Good Behavior: The Concept of Rationality. A rational agent is one that does the right thing. Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing? Performance measures (consequentialism): we evaluate an agent's behavior by its consequences. When an agent is placed in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well. This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.
Rationality (contd.): What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
Rational agent: "For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has."
AI Definition. What is artificial intelligence? "It is the science and engineering of making intelligent machines, especially intelligent computer programs." And what is intelligence? "Intelligence is the computational part of the ability to achieve goals in the world." (John McCarthy)
Proposing and evaluating AI applications:
• Autonomous planning and scheduling
• Game playing
• Autonomous control
• Diagnosis
• Logistics planning (e.g., traffic)
• Robotics
• Language understanding and problem solving
AI challenges:
• Building trust.
• The AI–human interface: a huge shortage of workers with data analytics and data science skills.
• Investment: AI demands substantial computing power.
• Software malfunction: many AI systems are black-box tools.
• Not invincible: AI simply cannot replace all tasks.
• High expectations.
• Data security.
• Algorithm bias.
• Data scarcity: AI applications depend directly on the accuracy and relevancy of the supervised, labeled datasets used for training and learning.
Intelligent Agents. Agents and environments: an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Examples: a human agent, a robot agent. An agent's choice of action at any given instant can depend on the entire percept sequence observed to date.
E.g.: the vacuum-cleaner world
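The two-square vacuum-cleaner world can be sketched in a few lines of Python. This is an illustrative toy: the percept format, action names, and the two-square layout are assumptions for the sketch, not a fixed API.

```python
# A minimal sketch of the two-square vacuum-cleaner world (squares A and B).
# Percepts are (location, status) pairs; actions are "Suck", "Left", "Right".

def reflex_vacuum_agent(percept):
    """Decide an action from the current percept alone."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

def run(world, location, steps=4):
    """Run the agent in the world, returning the sequence of actions taken."""
    actions = []
    for _ in range(steps):
        action = reflex_vacuum_agent((location, world[location]))
        actions.append(action)
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
    return actions

print(run({"A": "Dirty", "B": "Dirty"}, "A"))  # ['Suck', 'Right', 'Suck', 'Left']
```

Starting in a fully dirty world, the agent sucks, moves, sucks, and then shuttles between the clean squares.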
Omniscience, learning, and autonomy. Omniscience: an omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality. Rationality maximizes expected performance, while perfection maximizes actual performance. An agent can perform actions in order to modify future percepts so as to obtain useful information; this is called information gathering.
Learning: a rational agent should not only gather information but also learn as much as possible from what it perceives. An agent is autonomous if its behavior is determined by its own experience. A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge. For example, a vacuum-cleaning agent that learns to predict where and when additional dirt will appear will do better than one that does not.
The Nature of Environments. The nature of the task environment directly affects the appropriate design for the agent program. The task environment groups together the performance measure, the environment, and the agent's actuators and sensors: PEAS (Performance, Environment, Actuators, Sensors). In designing an agent, the first step must always be to specify the task environment as fully as possible.
E.g.: an automated taxi driver
Exercise. Task: a medical diagnosis system. Specify: agent name | performance measure | environment | actuators | sensors.
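One possible way to fill in this exercise, written as a small Python dictionary. The entries below are illustrative assumptions, not the only valid answer; students should justify their own choices.

```python
# A possible PEAS description for a medical diagnosis agent (illustrative, not definitive).
medical_diagnosis_peas = {
    "Agent": "medical diagnosis system",
    "Performance": ["healthy patient", "minimized costs"],
    "Environment": ["patient", "hospital", "medical staff"],
    "Actuators": ["display of questions", "tests", "diagnoses", "treatments"],
    "Sensors": ["keyboard entry of symptoms", "patient's answers"],
}

for key, value in medical_diagnosis_peas.items():
    print(f"{key}: {value}")
```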
Properties of task environments
1. Fully observable vs. partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, then we say the task environment is fully observable; the agent need not maintain any internal state to keep track of the world. In a partially observable environment, parts of the state are simply missing from the sensor data; for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares.
2. Single-agent vs. multiagent: a multiagent environment may be competitive (e.g., chess) or cooperative (e.g., taxis avoiding collisions).
3. Deterministic vs. nondeterministic: If the next state of the environment is completely determined by the current state and the action executed by the agent(s), then we say the environment is deterministic; otherwise, it is nondeterministic.
4. Episodic vs. sequential: In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and taxi driving are sequential.
5. Static vs. dynamic: If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static. E.g., taxi driving is clearly dynamic.
6. Discrete vs. continuous: The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states (excluding the clock), and chess also has a discrete set of percepts and actions. Taxi driving is a continuous-state and continuous-time problem.
7. Known vs. unknown: In a known environment, the outcomes (or outcome probabilities, if the environment is nondeterministic) for all actions are given. If the environment is unknown, the agent will have to learn how it works in order to make good decisions.
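As a compact summary, the two running examples (chess and taxi driving) can be classified along the dimensions above. The sketch below encodes the usual textbook judgments as plain data; treat the individual entries as conventional assumptions rather than hard rules.

```python
# Illustrative classification of two task environments along the dimensions above.
environments = {
    "chess (with a clock)": {
        "fully_observable": True, "multiagent": True, "deterministic": True,
        "episodic": False, "static": "semi", "discrete": True,
    },
    "taxi driving": {
        "fully_observable": False, "multiagent": True, "deterministic": False,
        "episodic": False, "static": False, "discrete": False,
    },
}

for name, props in environments.items():
    print(name, "->", props)
```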
The Structure of Agents. An agent program implements the agent function: the mapping from percepts to actions.
Four basic kinds of agent programs:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple reflex agents. These agents select actions on the basis of the current percept, ignoring the rest of the percept history, using condition–action rules.
They are of limited intelligence. The agent will work only if the correct decision can be made on the basis of just the current percept, that is, only if the environment is fully observable. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments.
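A condition–action rule agent can be sketched as a rule table scanned in order, firing the first rule whose condition matches the current percept. The rule format and names below are assumptions for illustration.

```python
# A minimal sketch of a condition-action rule interpreter for the vacuum world.
# Each rule pairs a condition (a predicate on the percept) with an action.

RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: p["location"] == "B", "Left"),
]

def simple_reflex_agent(percept, rules=RULES):
    """Return the action of the first rule whose condition matches the percept."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "NoOp"  # no rule matched

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # Suck
```

Because the rules consult only the current percept, this agent has no memory: that is exactly why it needs full observability to behave well.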
Model-based reflex agents. A model-based reflex agent also works by finding a rule whose condition matches the current situation, but it handles partial observability by keeping track of the part of the world it can't see now. To do this it maintains a model of how the world evolves independently of the agent, and of how the agent's actions affect the world.
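The internal state can be sketched as a simple belief map over the two squares. This is a toy under the same assumed vacuum world as before; the model update rule (percepts overwrite beliefs, and "Suck" is predicted to leave a square clean) is an illustrative assumption.

```python
# A minimal sketch of a model-based reflex agent for the two-square vacuum world.
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model: what the agent believes about each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update belief from the percept
        if status == "Dirty":
            self.model[location] = "Clean"     # predicted effect of Suck
            return "Suck"
        if self.model["A"] == "Clean" and self.model["B"] == "Clean":
            return "NoOp"                      # believes the whole world is clean
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, this one can stop once its model says both squares are clean, even though its dirt sensor is purely local.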
Goal-based agents. The agent needs some sort of goal information that describes situations that are desirable. Every action is intended to reduce its distance from the goal. This agent is more flexible: it develops decision-making skill by choosing the right path from various options.
Utility-based agents. These agents are similar to goal-based agents but add an extra component, a utility measure, which provides a measure of success in a given state. A utility-based agent acts based not only on goals but also on the best way to achieve them. It is useful when there are multiple possible alternatives and the agent has to choose the best action. The utility function maps each state to a real number describing how efficiently each action achieves the goals.
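The core of a utility-based agent is an argmax over actions by the utility of their predicted outcomes. The taxi routes and the utility weights below are invented for illustration; only the "utility maps states to real numbers" idea comes from the slide.

```python
# A minimal sketch of utility-based action selection.

def best_action(state, actions, result, utility):
    """Pick the action whose predicted resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Illustrative taxi example: prefer routes that are fast but safe.
routes = {"highway":   {"time": 20, "risk": 0.3},
          "back_road": {"time": 35, "risk": 0.1}}

def result(state, action):
    """Predicted outcome of taking a route (ignores the current state here)."""
    return routes[action]

def utility(outcome):
    """Map an outcome state to a real number; penalize time and risk."""
    return -outcome["time"] - 100 * outcome["risk"]

print(best_action(None, ["highway", "back_road"], result, utility))  # back_road
```

With these weights the slower but safer route wins (-45 vs. -50); changing the risk weight flips the choice, which is the sense in which utilities trade off conflicting goals.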
Learning agents. A learning agent can learn from its past experience: it starts acting with basic knowledge and then adapts automatically through learning. A learning agent has four main conceptual components:
• Learning element: responsible for making improvements by learning from the environment.
• Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
Hence learning agents are able to learn, analyze their performance, and look for new ways to improve it.
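The four components can be sketched as methods on one class, again in the toy vacuum world. All names and the simple counting-based learning rule are illustrative assumptions; the point is only how the components divide the work.

```python
import random

# A minimal sketch of the four learning-agent components in the vacuum world.
class LearningVacuumAgent:
    def __init__(self):
        # Knowledge maintained by the learning element: how often
        # each square has been observed dirty.
        self.dirt_counts = {"A": 1, "B": 1}

    def learning_element(self, location, was_dirty):
        """Improve the knowledge the performance element relies on."""
        if was_dirty:
            self.dirt_counts[location] += 1

    def performance_element(self):
        """Select an external action: head for the square believed dirtier."""
        return max(self.dirt_counts, key=self.dirt_counts.get)

    def critic(self, squares_cleaned, steps_taken):
        """Score behavior against a fixed performance standard (cleaning rate)."""
        return squares_cleaned / max(steps_taken, 1)

    def problem_generator(self):
        """Suggest an exploratory square to gain new, informative experience."""
        return random.choice(list(self.dirt_counts))
```

Feedback from the critic would normally drive the learning element; here the split is shown structurally rather than as a full training loop.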