Program Outcomes (POs):
PO1: Engineering knowledge
PO2: Problem analysis
PO3: Design/development of solutions
PO4: Conduct investigations of complex problems
PO5: Modern tool usage
PO6: The engineer and society
PO7: Environment and sustainability
PO8: Ethics
PO9: Individual and team work
PO10: Communication
PO11: Project management and finance
PO12: Life-long learning
Program Specific Outcomes (PSOs):
PSO1: Employ modern tools to model, simulate, experiment with, and analyze the performance of Electronics and Telecommunication systems.
PSO2: Address economic, social, environmental, ethical, health and safety issues, keeping pace with the latest technological concepts.
PSO3: Drive need-based innovations in Electronics and Telecommunication Engineering, fostering Make in India through an understanding of finance management, entrepreneurship, and product development.
Approaches of AI
There are four approaches to AI, as follows:
Acting humanly (The Turing Test approach): This approach was proposed by Alan Turing. The idea behind it is that a computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a human or from a computer.
Thinking humanly (The cognitive modeling approach): The idea behind this approach is to determine whether the computer thinks like a human.
Thinking rationally (The “laws of thought” approach): The idea behind this approach is to determine whether the computer thinks rationally, i.e., with correct logical reasoning.
Acting rationally (The rational agent approach): The idea behind this approach is to determine whether the computer acts rationally, i.e., acts so as to achieve the best outcome (or, under uncertainty, the best expected outcome).
An AI system is composed of an agent and its environment.
An agent (e.g., a human or a robot) is anything that can perceive its environment through sensors and act upon that environment through effectors. Intelligent agents must be able to set goals and achieve them.
In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions. However, if the agent is not the only actor, then it must be able to reason under uncertainty.
This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.
Natural language processing gives machines the ability to read and understand
human language. Some straightforward applications of natural language
processing include information retrieval, text mining, question answering, and
machine translation.
Machine perception is the ability to use input from sensors (such as cameras and microphones) to deduce aspects of the world, e.g., computer vision. Concepts such as game theory and decision theory require that an agent be able to detect and model human emotions.
Students often confuse Machine Learning with Artificial Intelligence, but machine learning, a fundamental concept of AI research since the field’s inception, is the study of computer algorithms that improve automatically through experience.
The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
Stuart Shapiro divides AI research into three approaches, which
he calls computational psychology, computational philosophy, and
computer science.
Computational psychology is used to make computer programs
that mimic human behavior.
Computational philosophy is used to develop an adaptive, free-
flowing computer mind.
The computer science approach serves the goal of creating computers that can perform tasks that previously only people could accomplish.
Q. Define AI. Describe how AI definitions can be organized.
John McCarthy coined the term “Artificial Intelligence” in the mid-1950s, defining it as “the science and engineering of making intelligent machines.”
AI is about teaching machines to learn, act, and think as humans would. We can organize AI definitions into 4 categories:
The definitions on top are concerned with thought processes and reasoning,
whereas the ones on the bottom address behavior.
The definitions on the left measure success in terms of conformity to human
performance whereas the ones on the right measure against an ideal
performance measure called rationality.
A system is rational if it does the "right thing," given what it knows.
Historically, all four approaches to AI have been followed, each by different
people with different methods.
A human-centered approach must be in part an empirical science, involving
observations and hypotheses about human behavior.
A rationalist’s approach involves a combination of mathematics and
engineering. The various groups have both disparaged and helped each other.
Let us look at the four approaches in more detail.
Thinking humanly: The cognitive modeling approach
If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds.
There are three ways to do this:
A) through introspection – trying to catch our own thoughts as they go by;
B) through psychological experiments – observing a person in action; and
C) through brain imaging – observing the brain in action.
•Thinking humanly means making a system or program think like a human. But to achieve that, we need to know how a human thinks.
•Suppose we ask a person to explain how his brain connects different things during the thinking process; he/she will probably close both eyes and try to notice how he/she thinks, but he/she cannot explain or interpret the process.
•Ask the same question of yourself, and most likely you will follow the same pattern and end up saying “I do not know,” or perhaps “I am thinking through my mind,” but you cannot express more than that.
•For example, if we want to model the thinking of Roger Federer and make the model system compete against him in a tennis game, it may not be possible to replicate Roger Federer’s exact thinking; however, a well-built intelligent system (robot) may play and win the game against him.
•To understand the exact process of how we think, we need to go inside the human mind to see how this giant machine works.
•In theory, we can investigate how the human mind thinks in three ways:
1. Introspection method – catch our own thoughts and see how they flow.
2. Psychological experiments method – observe a person in action.
3. Brain imaging method (MRI (magnetic resonance imaging) or fMRI (functional magnetic resonance imaging) scanning) – observe a person’s brain in action.
Using the above methods, if we are able to capture the human brain’s workings and state them as a theory, then we can convert that theory into a computer program. If the program’s input/output behavior matches human behavior, then it may be that part of the program is behaving like a human brain.
So far we have seen “Thinking Humanly: The cognitive modeling approach”, “Acting Humanly: The Turing Test approach”, the four main approaches to Artificial Intelligence, and how to make an AI model think and act like a human.
We will now take a closer look at “Thinking Rationally: The ‘laws of thought’ approach”.
The Greek philosopher Aristotle was the first to codify “right-thinking” reasoning processes.
Aristotle provided patterns for argument structures that always yield correct conclusions when given correct premises.
A famous example: “Sachin is a man; all men are mortal; therefore, Sachin is mortal.”
Another example: “All TVs use energy; energy always generates heat; therefore, all TVs generate heat.”
These arguments initiated the field called logic. Notations for statements about all kinds of objects, and for the relations between them, were developed.
By 1965, programs existed that could, in principle, solve any solvable problem described in logical notation.
The logical tradition in Artificial Intelligence hopes to build on such programs to create intelligent systems, programs, or computational models.
There are two limitations to this approach:
1. First, it is not easy to take informal knowledge and state it in formal logical notation, particularly when the knowledge is less than fully certain.
2. Second, there is a big difference between solving a problem in principle and solving it in practice.
Let’s see a couple of examples of syllogistic argument statements in logical, mathematical, and programming notation (Prolog is a logic programming language).
Logical notation:
Sachin is a man;
all men are mortal;
therefore, Sachin is mortal.
Mathematical predicate calculus notation (with s standing for Sachin):
P(s) is the statement “Sachin is a man.”
Q(s) is the statement “Sachin is mortal.”
∀x [P(x) → Q(x)]
P(s)
∴ Q(s)
You can see that the premises and the conclusion of the syllogism share a common term in predicate calculus.
Prolog programming notation:
Fact statement: Sachin is a man.
man(sachin).
Rule (headed Horn clause) statement: All men are mortal.
mortal(X) :- man(X).
Goal or query statement: Is Sachin mortal?
?- mortal(sachin).
Logical notation:
All TVs use energy;
energy always generates heat;
therefore, all TVs generate heat.
Mathematical predicate calculus notation:
P(x) is the statement “x uses energy.”
Q(x) is the statement “x generates heat.”
∀x [P(x) → Q(x)]
P(tv)
∴ Q(tv)
Again, you can see that the premises and the conclusion of the syllogism share a common term in predicate calculus.
Prolog programming notation:
Fact statement: The TV uses energy.
energy(tv).
Rule (headed Horn clause) statement: Energy always generates heat.
heat(X) :- energy(X).
Goal or query statement: Does the TV generate heat?
?- heat(tv).
Most of the excerpts and content here are drawn from the book “Artificial Intelligence: A Modern Approach”, one of the best books on Artificial Intelligence.
Acting Rationally: The rational agent
approach
•A traditional computer program blindly executes the code that we write. It neither acts on its own nor adapts based on the outcome.
•The so-called agent program that we refer to here is expected to do more than a traditional computer program.
•It is expected to create and pursue goals, change state, and operate autonomously.
•A rational agent is an agent that acts to achieve its best performance for a given
task.
•The “laws of thought” (logical) approach to AI emphasizes correct inference, and making correct inferences is part of being a rational agent. Being able to give a logical reason for an action is one way of acting rationally. But correct inference is not all of rationality, because there are situations where there is no provably correct thing to do. It is also possible to act rationally without involving inference; our reflex actions are good examples of acting rationally without inference.
•The rational agent approach to AI has a couple of advantages over the other approaches:
1. Correct inference is one possible way to achieve rationality, but it is not always required for rationality.
2. Rationality is more amenable to a precise scientific definition than approaches based on human behavior or human thought.
For the Turing Test scenario, the computer would need to possess the following capabilities:
natural language processing to enable it to communicate successfully in English;
knowledge representation to store what it knows or hears;
automated reasoning to use the stored information to answer questions and to draw new conclusions;
machine learning to adapt to new circumstances and to detect and extrapolate patterns.
The Total Turing Test includes a video signal so that the interrogator can test the subject’s perceptual abilities, as well as the opportunity for the interrogator to pass physical objects “through the hatch.”
To pass the Total Turing Test, the computer will also need
computer vision to perceive objects, and
robotics to manipulate objects and move about.
These six disciplines compose most of AI.
Foundations of AI
A brief history of the disciplines that contributed ideas, viewpoints, and techniques to AI is as follows:
•Philosophy (the study of the fundamental nature of knowledge):
1.Can formal rules be used to draw valid conclusions?
2.How does the mind arise from a physical brain?
3.Where does knowledge come from?
4.How does knowledge lead to action?
•Mathematics
•What are the formal rules to draw valid conclusions?
•What can be computed?
Formal science required a level of mathematical formalization in three
fundamental areas: logic, computation, and probability.
Logic:
George Boole (1815–1864) worked out the details of propositional, or Boolean, logic.
In 1879, Gottlob Frege (1848–1925) extended Boole’s logic to include objects and relations, creating the first-order logic that is used today.
First-order logic contains predicates, quantifiers, and variables.
E.g. Philosopher(a) ⇒ Scholar(a)
∀x, effect_corona(x) ⇒ quarantine(x)
∀x, King(x) ∧ Greedy(x) ⇒ Evil(x)
Alfred Tarski (1902–1983) introduced a theory of reference that shows how to relate the objects in a logic to objects in the real world.
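As a small illustration (not from the original notes), a rule such as ∀x, King(x) ∧ Greedy(x) ⇒ Evil(x) can be checked over a finite set of individuals. The facts and names in this Python sketch are invented for the example:

# Illustrative only: grounding the rule  forall x: King(x) ^ Greedy(x) => Evil(x)
# over a tiny, made-up set of individuals.
kings = {"john", "richard"}     # King(x) facts
greedy = {"john"}               # Greedy(x) facts

def evil(x):
    # The rule: anyone who is both a king and greedy is evil.
    return x in kings and x in greedy

for person in ["john", "richard", "robin"]:
    print(person, "->", "Evil" if evil(person) else "not provably Evil")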
Neuroscience:
•How do brains process information?
•Neuroscience is the study of the nervous system, particularly the brain.
•335 B.C. Aristotle wrote, "Of all the animals, man has the largest brain
in proportion to his size."
•Nicolas Rashevsky(1936, 1938) was the first to apply mathematical
models to the study of the nervous system.
[Figure: a neuron cell of the human brain]
•The measurement of intact brain activity began in 1929 with the invention
by Hans Berger of the electroencephalograph (EEG).
•The recent development of functional magnetic resonance imaging (fMRI)
(Ogawa et al., 1990; Cabeza and Nyberg, 2001) is giving neuroscientists
unprecedentedly detailed images of brain activity, enabling measurements
that correspond in interesting ways to ongoing cognitive processes.
AI becomes an industry (1980--present)
The first successful commercial expert system, R1, began operation at the Digital
Equipment Corporation (McDermott, 1982).
The program helped configure orders for new computer systems; by 1986, it was
saving the company an estimated $40 million a year.
By 1988, DEC’s AI group had 40 expert systems deployed, with more on the way.
DuPont had 100 in use and 500 in development, saving an estimated $10 million a
year.
Nearly every major U.S. corporation had its own AI group and was either using or
investigating expert systems.
In 1981, the Japanese announced the “Fifth Generation” project, a 10-year
plan to build intelligent computers running Prolog.
In response, the United States formed the Microelectronics and Computer
Technology Corporation (MCC) as a research consortium designed to assure
national competitiveness.
In both cases, AI was part of a broad effort, including chip design and
human-interface research.
In Britain, the Alvey report reinstated the funding that was cut by the Lighthill report. In all three countries, however, the projects never met their ambitious goals.
Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988, including hundreds of companies building expert systems, vision systems, robots, and software and hardware specialized for these purposes.
Soon after that came a period called the “AI Winter,” in which many companies fell by the wayside as they failed to deliver on extravagant promises.
An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.
A human agent has sensory organs such as eyes, ears, nose, tongue, and skin as its sensors, and other organs such as hands, legs, and mouth as effectors.
A robotic agent has cameras and infrared range finders as its sensors, and various motors and actuators as its effectors.
A software agent has encoded bit strings as its programs and actions.
Agents and Environments
Agent Terminology
Performance Measure of Agent − It is the criterion which determines how successful an agent is.
Behavior of Agent − It is the action that the agent performs after any given sequence of percepts.
Percept − It is the agent’s perceptual input at a given instant.
Percept Sequence − It is the history of all that the agent has perceived till date.
Agent Function − It is a map from the percept sequence to an action.
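To make the idea of an agent function concrete, here is a minimal, illustrative Python sketch of a table-driven agent; the table entries, percepts, and actions are all invented for the example.

# Illustrative sketch: the agent function as a lookup table from the
# percept sequence (the full history of percepts) to an action.
table = {
    ("hungry",): "eat",
    ("hungry", "full"): "rest",
}
percept_sequence = []   # history of everything perceived so far

def agent_function(percept):
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence), "do_nothing")

print(agent_function("hungry"))   # -> eat
print(agent_function("full"))     # -> rest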
Rationality
Rationality is nothing but the status of being reasonable, sensible, and having a good sense of judgment.
Rationality is concerned with expected actions and results depending upon what
the agent has perceived. Performing actions with the aim of obtaining useful
information is an important part of rationality.
What is Ideal Rational Agent?
An ideal rational agent is one which is capable of doing the expected actions to maximize its performance measure, on the basis of −
Its percept sequence
Its built-in knowledge base
The rationality of an agent depends on the following −
The performance measures, which determine the degree of success.
The agent’s percept sequence till now.
The agent’s prior knowledge about the environment.
The actions that the agent can carry out.
A rational agent always performs the right action, where the right action is the one that causes the agent to be most successful given the percept sequence. The problem the agent solves is characterized by a Performance measure, Environment, Actuators, and Sensors (PEAS).
The Structure of Intelligent Agents
Agent’s structure can be viewed as −
Agent = Architecture + Agent Program
Architecture = the machinery that an agent executes on.
Agent Program = an implementation of an agent function.
Simple Reflex Agents
•They choose actions only on the basis of the current percept.
•They are rational only if a correct decision can be made on the basis of the current percept alone.
•Their environment must be completely observable.
Condition-Action Rule − It is a rule that maps a state (condition) to an action.
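A minimal sketch of a simple reflex agent for the two-area vacuum world (an assumed example; the function name and action strings are ours):

# Simple reflex agent: the action depends only on the current percept,
# via condition-action rules (illustrative sketch).
def simple_reflex_vacuum_agent(percept):
    location, status = percept        # the current percept only, no history
    if status == "Dirty":             # condition -> action
        return "Suck"
    if location == "Area1":
        return "Move to Area2"
    return "Move to Area1"

print(simple_reflex_vacuum_agent(("Area1", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("Area1", "Clean")))   # -> Move to Area2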
Model Based Reflex Agents
They use a model of the world to choose their actions. They maintain an internal
state.
Model − knowledge about “how things happen in the world”.
Internal State− It is a representation of unobserved aspects of current state
depending on percept history.
Updating the state requires the information about −
How the world evolves.
How the agent’s actions affect the world.
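As an illustrative sketch (not a definitive implementation), a model-based reflex agent can keep an internal state recording which areas it still believes to be dirty:

# Model-based reflex agent: keeps an internal state (a simple "model" of
# the world) and updates it from each percept (illustrative sketch).
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: what the agent believes about areas it cannot see.
        self.believed_dirty = {"Area1": True, "Area2": True}

    def act(self, percept):
        location, status = percept
        self.believed_dirty[location] = (status == "Dirty")  # update the model
        if status == "Dirty":
            return "Suck"
        other = "Area2" if location == "Area1" else "Area1"
        return "Move to " + other if self.believed_dirty[other] else "NoOp"

agent = ModelBasedVacuumAgent()
print(agent.act(("Area1", "Dirty")))   # -> Suck
print(agent.act(("Area1", "Clean")))   # -> Move to Area2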
Goal Based Agents
They choose their actions in order to achieve goals. The goal-based approach is more flexible than the reflex agent approach, since the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications.
Goal− It is the description of desirable situations.
Utility Based Agents
They choose actions based on a preference (utility) for each state.
Goals are inadequate when −
There are conflicting goals, out of which only a few can be achieved.
Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of a goal.
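A small, illustrative sketch of a utility-based choice: the agent picks the action with the highest expected utility. The probabilities and utility values below are invented for the example.

# Utility-based agent (sketch): expected utility of an action is the
# probability-weighted sum of the utilities of its possible outcomes.
actions = {
    "fast_route": [(0.7, 10), (0.3, -20)],   # (probability, utility) pairs
    "safe_route": [(1.0, 6)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)   # -> safe_route (6.0 beats 0.7*10 + 0.3*(-20) = 1.0)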
The Nature of Environments
Some programs operate in an entirely artificial environment confined to keyboard input, a database, computer file systems, and character output on a screen.
In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains. The simulator has a very detailed, complex environment, and the software agent needs to choose from a long array of actions in real time.
A softbot designed to scan the online preferences of a customer and show interesting items to the customer works in the real as well as an artificial environment.
The most famous artificial environment is the Turing Test environment, in which one real and one artificial agent are tested on equal ground.
This is a very challenging environment as it is highly difficult for a software
agent to perform as well as a human.
Turing Test
The success of the intelligent behavior of a system can be measured with the Turing Test.
Two persons and a machine to be evaluated participate in the test. Of the two persons, one plays the role of the tester.
Each of them sits in a different room. The tester does not know who is the machine and who is the human.
The tester poses questions by typing and sending them to both intelligences, and receives typed responses from both.
The test aims to see whether the machine can fool the tester. If the tester fails to distinguish the machine’s response from the human’s response, then the machine is said to be intelligent.
Properties of Environment
The environment has multifold properties −
Discrete / Continuous − If there are a limited number of distinct, clearly defined states of the environment, the environment is discrete (for example, chess); otherwise it is continuous (for example, driving).
Observable / Partially Observable− If it is possible to determine the complete state of the
environment at each time point from the percepts it is observable; otherwise it is only
partially observable.
Static / Dynamic − If the environment does not change while an agent is acting, then it is static; otherwise it is dynamic.
Single agent / Multiple agents− The environment may contain other agents which may be
of the same or different kind as that of the agent.
Accessible / Inaccessible− If the agent’s sensory apparatus can have access to the
complete state of the environment, then the environment is accessible to that agent.
Deterministic / Non-deterministic− If the next state of the environment is completely
determined by the current state and the actions of the agent, then the environment is
deterministic; otherwise it is non-deterministic.
Episodic / Non-episodic− In an episodic environment, each episode consists of
the agent perceiving and then acting. The quality of its action depends just on
the episode itself. Subsequent episodes do not depend on the actions in the
previous episodes. Episodic environments are much simpler because the agent
does not need to think ahead.
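As an illustration, two classic task environments can be classified along these dimensions (the values follow the usual textbook classification; the dictionary layout is just for presentation):

# Classifying two task environments along the properties listed above.
environments = {
    "chess with a clock": {
        "observable": "fully", "deterministic": True, "episodic": False,
        "static": "semi", "discrete": True, "agents": "multi",
    },
    "taxi driving": {
        "observable": "partially", "deterministic": False, "episodic": False,
        "static": False, "discrete": False, "agents": "multi",
    },
}
for name, properties in environments.items():
    print(name, "->", properties)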
Good Behavior: The Concept of Rationality
What is Rational Behavior?
Rational behavior is used to describe a decision-making process that results in the
optimal level of benefit, or alternatively, the maximum amount of utility.
Individuals who exhibit rational behavior make decisions that provide them with
the highest amount of personal satisfaction.
Agent in an Environment
A Rational agent is any piece of software, hardware, or a combination of the two which can interact with the environment through actuators after perceiving the environment with sensors.
A rational agent is an agent that acts so as to achieve the best outcome or, where there is uncertainty, the best expected outcome.
Conceptually speaking, it does the “right thing”.
When an agent is plunked down in an environment, it
generates a sequence of actions according to the percepts
it receives.
This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well.
This notion of desirability is captured by a performance
measure that evaluates any given sequence of environment
states.
Notice that we said environment states, not agent states.
If we define success in terms of the agent’s opinion of its own performance, an agent could achieve perfect rationality simply by deluding itself that its performance was perfect.
As a general rule, it is better to design performance
measures according to what one actually wants in the
environment, rather than according to how one thinks the
agent should behave
For example, consider a vacuum cleaner as a Rational agent. Its environment is the floor which it is trying to clean. It has sensors such as cameras or dirt sensors which sense the environment. It has brushes and suction pumps as actuators which take action. A Percept is the agent’s perceptual input at any given point of time. The action that the agent takes on the basis of the perceptual input is defined by the agent function.
Hence, before an agent is put into the environment, a Percept sequence and the corresponding actions are fed into the agent. This allows it to take action on the basis of the inputs.
An example would be a table like the following:
Percept Sequence → Action
(Area1, Dirty) → Clean
(Area1, Clean) → Move to Area2
(Area2, Clean) → Move to Area1
(Area2, Dirty) → Clean
Based on the input (percept), the vacuum cleaner would either keep moving
between Area1 and Area2 or perform a clean operation. This is a simplistic
example but more complexity could be built in with the environmental
factors.
For example, depending on the amount of dirt, the cleaning could be a
power clean or a regular clean. This would further result in introducing a
sensor which could calculate the amount of dirt and so on.
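A short sketch of that extension: the percept carries a dirt-level reading from a (hypothetical) dirt sensor, and the rule chooses a power clean or a regular clean. The threshold value is arbitrary and only for illustration.

# Sketch: vacuum agent whose percept includes a dirt-level reading.
def vacuum_agent(percept):
    area, dirt_level = percept
    if dirt_level == 0:                       # nothing to clean here
        return "Move to Area2" if area == "Area1" else "Move to Area1"
    return "Power clean" if dirt_level > 5 else "Clean"   # arbitrary threshold

print(vacuum_agent(("Area1", 0)))    # -> Move to Area2
print(vacuum_agent(("Area2", 8)))    # -> Power clean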
This percept sequence is not only fed into the agent before it starts; it can also be learned as the agent encounters new percepts. The agent’s initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented. This is achieved through reinforcement learning or other learning techniques.
The environment is the Task Environment (problem) for which the Rational Agent is the solution. Any task environment is characterized in terms of PEAS.
Performance – The performance characteristics which determine whether the agent is successful or not. For example, a clean floor and optimal energy consumption might be performance measures.
Environment – The physical characteristics and constraints expected. For example, wood floors, furniture in the way, etc.
Actuators – The physical or logical constructs which take action. For the vacuum cleaner, these are the suction pumps.
Sensors – Again, physical or logical constructs which sense the environment. For example, cameras and dirt sensors.
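A compact way to record a PEAS description is as a simple record; the vacuum-cleaner values below just restate the examples above (sketch only):

# PEAS description of a task environment held in a small record (sketch).
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: str
    environment: str
    actuators: str
    sensors: str

vacuum_peas = PEAS(
    performance="clean floor, optimal energy consumption",
    environment="wood floors, furniture in the way",
    actuators="suction pumps, brushes",
    sensors="cameras, dirt sensors",
)
print(vacuum_peas)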
The Nature of Environments
Rational Agents could be physical agents like the one described above, or they could be programs that operate in a non-physical environment such as an operating system.
Imagine a bot website operator designed to scan Internet news sources and show the interesting items to its users, while selling advertising space to generate revenue.
Example (PEAS for an e-learning agent):
Agent: Math e-learning system
Performance measure: SLA-defined score on the test
Environment: Students, teachers, parents
Actuators: Computer display system for exercises, corrections, feedback
Sensors: Keyboard, mouse
Environments can further be classified into various buckets. This helps determine the intelligence which needs to be built into the agent. These are:
Observable – Full or partial? If the agent’s sensors get full access to the state, then it does not need to pre-store any information. Partial observability may be due to inaccurate sensors or incomplete information about an environment, such as limited access to enemy territory.
Number of Agents – The vacuum cleaner works in a single-agent environment, but for driverless taxis every driverless taxi is a separate agent, hence a multi-agent environment.
Deterministic – The number of unknowns in the environment which affect its predictability. For example, floor space for cleaning is mostly deterministic (the furniture is where it usually is), but taxi driving on a road is non-deterministic.
Discrete – Does the agent respond only at distinct points, or does it have to continuously scan the environment? Driverless driving is continuous; an online tutor is discrete.
Static – How often does the environment change? Can the agent learn about the environment once and always do the same thing?
Episodic – If the response to a certain percept does not depend on the previous one, i.e., it is stateless (like static methods in Java), then the environment is episodic. If the decision taken now influences future decisions, then it is a sequential environment.
Hence to summarise
An agent is something that perceives and acts in an environment.
The performance measure evaluates the behaviour of the agent in an environment.
A rational agent acts so as to maximise the expected value of the performance measure.
A task environment specification includes the PEAS i.e.
performance measure, the external environment, the actuators,
and the sensors.
Task environments vary along several significant dimensions. In
designing an agent, the first step must always be to specify the
task environment as fully as possible.
The Structure of Agents
Agent’s structure can be viewed as:
Agent = Architecture + Agent Program
Architecture = the machinery that an agent executes on.
Agent Program = an implementation of an agent function.
Different forms of Agent
Based on their degree of perceived intelligence and capability, agents can be framed into four categories:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
1. Simple Reflex Agents
They choose actions only on the basis of the current percept.
They are rational only if a correct decision can be made on the basis of the current percept alone.
Their environment must be completely observable.
Condition-Action Rule − It is a rule that maps a state (condition) to an action.
Example: An ATM system: if the PIN matches the given account number, then the customer gets money.
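The ATM example is a pure condition-action rule; a minimal sketch (the account data and function name are invented for illustration):

# Simple reflex rule for the ATM example: the action depends only on the
# current percept (account number and entered PIN). Example data only.
STORED_PINS = {"ACC-1001": "4321"}

def atm_agent(account_number, entered_pin):
    if STORED_PINS.get(account_number) == entered_pin:   # condition
        return "Dispense cash"                            # action
    return "Reject card"

print(atm_agent("ACC-1001", "4321"))   # -> Dispense cash
print(atm_agent("ACC-1001", "0000"))   # -> Reject card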
2. Model Based Reflex Agents
They use a model of the world to choose their actions. They maintain an internal
state.
Model − The knowledge about how things happen in the world.
Internal State − It is a representation of unobserved aspects of the current state, depending on the percept history.
Updating the state requires information about −
How the world evolves.
How the agent’s actions affect the world.
Example: A car-driving agent which maintains its own internal state and then takes action as the environment appears to it.
3. Goal Based Agents
They choose their actions in order to achieve goals. Goal-based
approach is more flexible than reflex agent since the knowledge
supporting a decision is explicitly modeled, thereby allowing for
modifications.
Goal− It is the description of desirable situations.
Example: Searching for a solution to the 8-queens puzzle.
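For the 8-queens example, the goal describes the desirable situation: no two queens attack each other. A minimal goal-test sketch (board[i] is the column of the queen in row i):

# Goal test for the 8-queens puzzle (illustrative sketch).
def is_goal(board):
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_column = board[i] == board[j]
            same_diagonal = abs(board[i] - board[j]) == j - i
            if same_column or same_diagonal:
                return False                # two queens attack each other
    return True

print(is_goal([0, 4, 7, 5, 2, 6, 1, 3]))   # True: a valid solution
print(is_goal([0, 1, 2, 3, 4, 5, 6, 7]))   # False: queens share diagonals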
4. Utility Based Agents
They choose actions based on a preference (utility) for each state. Goals are inadequate when −
There are conflicting goals, out of which only a few can be achieved.
Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of a goal.
Example: A military planning robot which provides a certain plan of action to be taken.