Module 1
ARTIFICIAL INTELLIGENCE
COURSE OUTCOMES
On completion of this course, students should be able to:
1. Evaluate Artificial Intelligence (AI) methods and describe their foundations.
2. Apply basic principles of AI in solutions that require problem-solving, inference, perception, knowledge representation and learning.
3. Demonstrate knowledge of reasoning, uncertainty, and knowledge representation for solving real-world problems.
4. Analyze and illustrate how search algorithms play a vital role in problem-solving.
MODULE 1
TOPICS
• Introduction: Evolution of AI, State of the Art, Different Types of Artificial Intelligence, Applications of AI, Subfields of AI, Intelligent Agents, Structure of Intelligent Agents, Environments
INTRODUCTION
• What is Artificial?
• Made or produced by human beings rather than occurring naturally, especially
as a copy of something natural.
• What is Intelligence?
• The ability to acquire and apply knowledge and skills.
ARTIFICIAL INTELLIGENCE
Artificial Intelligence is the ability of a computer to act like a human being.
WHERE ARE WE?
EVOLUTION OF AI
THE TURING TEST
1950 – Alan Turing devised a test for
intelligence called the Imitation Game
• Ask questions of two entities and receive answers from both
• If you can't tell which entity is the human and which is the computer program, then you are fooled, and we should therefore consider the computer to be intelligent
• 1952-1969
• GPS – Newell and Simon
• Geometry theorem prover – Gelernter (1959)
• Samuel's checkers program that learns (1952)
• McCarthy – Lisp (1958), Advice Taker; Robinson's resolution
• Microworlds: integration, blocks world
• 1962 – perceptron convergence theorem (Rosenblatt)
History of Artificial Intelligence
• 1950: Alan Turing publishes Computing Machinery and Intelligence.
• 1956: John McCarthy coins the term 'artificial intelligence' at the first-ever AI conference at
Dartmouth College.
• 1958: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural
network that 'learned' through trial and error.
• 1980s: Neural networks, which use a backpropagation algorithm to train themselves, become
widely used in AI applications.
• 1997: IBM's Deep Blue beats then world chess champion Garry Kasparov, in a chess match
(and rematch).
• 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
• 2015: Baidu's Minwa supercomputer uses a special kind of deep neural network called a
convolutional neural network to identify and categorize images with a higher rate of
accuracy than the average human.
• 2016: DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol,
the world champion Go player, in a five-game match.
Shakey: the first AI robot (1960s)
4 Categories of Definition for AI
• Systems that act like humans
• Systems that think like humans
• Systems that think rationally
• Systems that act rationally
1. Acting Humanly
• Also called Turing Test Approach
• The art of creating machines that perform functions that require intelligence when performed by people
• i.e., making computers act like humans.
• Example : Turing Test
Turing Test Approach
• Provides a satisfactory operational definition of intelligence.
Qualities required to pass Turing Test
• The system requires these abilities to pass the test:
• Natural language processing: to communicate successfully in a human language
• Knowledge representation: to store what it knows or hears
• Automated reasoning: to use the stored information to answer questions and draw new conclusions
• Machine learning: to adapt to new circumstances and to detect and extrapolate patterns
Total Turing Test
• Includes a video signal so the interrogator can test the subject's perceptual abilities
• To pass, the computer also needs computer vision (to perceive objects) and robotics (to manipulate objects and move about)
2. Thinking Humanly
• Making computers think like humans
• Goal is to build systems that function internally in some way
similar to human mind
• Also called cognitive modeling approach
Cognitive Modeling Approach
• If we are going to say that a given program thinks like a human, we must have some way of determining how humans think;
• we need to get inside the actual workings of human minds.
• Given a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program.
Cognitive Science
• Combines computer models from AI with experimental techniques from psychology.
• Cognitive science is an interdisciplinary field.
3. Thinking Rationally
• Also called Laws of Thought approach
• Making computers think the "right thing", i.e., draw correct conclusions
• Relies on logic (to make inferences) rather than on humans to measure correctness.
• Logic: provides a precise notation for statements about all kinds of things in the world and the relations between them.
• Syllogism: provides patterns for argument structures that always give correct conclusions given correct premises.
• For example,
• Premise: John is a human and all humans are mortal
• Conclusion: John is mortal
• This can be done using logics, e.g., propositional and predicate logic (a small sketch follows below).
Two obstacles: Laws of Thought Approach
• It’s not easy to take informal knowledge and state it in the formal
terms required by logical notation, particularly when the knowledge
is less than 100% certain.
• Being able to solve a problem “in principle” and doing so “in
practice” are very different.
• i.e.,
1. Informal knowledge is not precise.
2. It is difficult to model uncertainty.
3. Theory ("in principle") and practice cannot always be put together.
4. Acting Rationally
• Also called Rational Agent Approach.
• Doing the "right thing"
• Rational agent: acts to achieve the best outcome.
• An agent acts rationally if it selects the action that maximizes its performance measure.
Rational Agent Approach
• Design of rational agent
• Advantages
• More general than laws of thought approach
• Concentrates on scientific development
Limited Rationality
• acting appropriately when there is not enough time to do all the computations
STATE OF THE ART
Robotic vehicles
• A driverless robotic car named STANLEY sped through the rough terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge
[Figure: the 2004 course, Barstow, CA, to Primm, NV]
ROBOTIC VEHICLES: the Stanley Robot
Stanford Racing Team, www.stanfordracing.org
[Figure: Stanley software architecture block diagram. Sensor interfaces (lasers, camera, radar, GPS, wheel velocity) feed a perception layer (laser/vision/radar mappers and UKF pose estimation) that produces an obstacle map and the vehicle state; a planning-and-control layer generates the trajectory and issues steering and throttle/brake commands through the Touareg vehicle interface; a user interface handles pause/disable and wireless emergency-stop commands, while a process controller and health monitor supervise the Linux processes.]
Planning = Rolling out Trajectories
Game playing
IBM's DEEP BLUE became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a score of 3.5
to 2.5 in an exhibition match (1997).
• Some of the applications are given below:
• Business: financial strategies, giving advice
• Engineering: checking designs, offering suggestions to create new products
• Manufacturing: assembly, inspection and maintenance
• Mining: used where conditions are dangerous
• Hospitals: monitoring, diagnosing and prescribing
• Education: teaching, e-tutoring
• Household: advice on cooking, shopping, etc.
• Farming: pruning trees and selectively harvesting mixed crops
Applications of AI
• Robots
• Chess-playing program
• Voice recognition system
• Speech recognition system
• Grammar checker
• Pattern recognition
• Medical diagnosis
• Game Playing
• Machine Translation
• Resource Scheduling
• Expert systems (diagnosis, advisory, planning, etc.)
• Machine learning
Subfields of AI
What are Agent and Environment?
• An agent is anything that can perceive its environment through sensors and act upon that environment through effectors (actuators).
• A human agent has sensory organs such as eyes, ears, nose, tongue and skin as the sensors, and other organs such as hands, legs and mouth as effectors.
• A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.
• A software agent has encoded bit strings as its percepts and actions.
Agents and environments
• Percept: the agent’s perceptual inputs
• Percept sequence: the complete history of everything the agent has
perceived
• Agent function maps any given percept sequence to an action (f: P* → A)
• The agent program runs on the physical architecture to produce f
• Agent = architecture + program
Agent Function and Agent Program
• The agent function maps any given percept sequence (history) to an action:
f: P* → A
• The ideal mapping specifies which action an agent ought to take at any point in time.
• The agent program runs on the physical architecture to produce the agent function f.
• Agent = Architecture + Program
Vacuum-cleaner world
A vacuum-cleaner agent
• Partial tabulation of the agent function (percept sequence → action):
[A, Clean] → Right
[A, Dirty] → Suck
[B, Clean] → Left
[B, Dirty] → Suck
[A, Clean], [A, Clean] → Right
[A, Clean], [A, Dirty] → Suck
...
Rational agents: the concept of rationality
• A rational agent is an agent that acts so as to maximize some performance measure.
• An agent should strive to "do the right thing", based on
what it can perceive and the actions it can perform.
• The right action is the one that will cause the agent to be
most successful
• Performance measure: An objective criterion for success of
an agent's behavior
• E.g., performance measure of a vacuum-cleaner agent could
be amount of dirt cleaned up, amount of time taken,
amount of electricity consumed, amount of noise
generated, etc.
Rational agents
• Rational Agent: For each possible percept sequence, a
rational agent should select an action that is expected to
maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has.
Rational agents
• Rationality is distinct from omniscience (all-knowing
with infinite knowledge)
• The rationality of an agent depends on four things:
• the performance measure defining the agent's
degree of success
• the percept sequence, the sequence of all the
things perceived by the agent
• the agent's prior knowledge of the
environment
• the actions that the agent can perform
Task environment
• To design a rational agent, we need to specify a task environment
• a problem specification for which the agent is the solution
PEAS: Specifying an automated taxi driver
Performance measure:
• safe, fast, legal, comfortable, maximize profits
Environment:
• roads, other traffic, pedestrians, customers
Actuators:
• steering, accelerator, brake, signal, horn
Sensors:
• cameras, sonar, speedometer, GPS
Properties of task environments
• 1. Fully observable vs. partially observable
• If an agent's sensors give it access to the complete state of the environment at each point in time, then the environment is fully observable
• An environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action
• Partially observable
• An environment may be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
• Example:
• a local dirt sensor on the vacuum cleaner cannot tell whether other squares are clean or not
Properties of task environments
• 2. Deterministic vs. stochastic
• If the next state of the environment is completely determined by the current state and the agent's action, the environment is deterministic; otherwise it is stochastic
• Ex. the vacuum world is deterministic; taxi driving is stochastic
Properties of task environments
• 3. Episodic vs. sequential
• An episode = a single pair of perception and action
• The quality of the agent's action does not depend on other episodes
• every episode is independent of the others
• Episodic environment is simpler
• The agent does not need to think ahead
• Sequential
• The current action may affect all future decisions
• Ex. taxi driving and chess
Properties of task environments
• 4. Static vs. dynamic
• A dynamic environment changes over time
• E.g., the number of people in the street
• A static environment does not change while the agent is deliberating
• E.g., the destination
• Semidynamic
• the environment itself does not change over time
• but the agent's performance score does
Properties of task environments
• 5. Discrete vs. continuous
• If there are a limited number of distinct, clearly defined states, percepts and actions, the environment is discrete; otherwise it is continuous
• Ex. chess is discrete; taxi driving is continuous
Properties of task environments
• 6. Single-agent vs. multiagent
• Playing a crossword puzzle – single agent
• Chess playing – two agents
• Competitive multiagent environment
• Chess playing
• Cooperative multiagent environment
• Automated taxi driver
• Avoiding collision
Properties of task environments
• 7. Known vs. unknown
• This distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the environment.
• In a known environment, the outcomes for all actions are given (example: solitaire card games).
• If the environment is unknown, the agent will have to learn how it works in order to make good decisions (example: a new video game).
Examples of task environments
Structure of agents
• Agent = architecture + program
▪ Architecture = some sort of computing device (sensors + actuators)
▪ (Agent) Program = some function that implements the agent mapping
▪ Writing the agent program is the job of AI
Agent programs
• Input for Agent Program
• Only the current percept
• Input for Agent Function
• The entire percept sequence
• The agent must remember all of them
• One way to implement the agent program:
• a lookup table over percept sequences (the agent function itself)
Agent programs
• Skeleton design of an agent program (a sketch follows below)
Agent programs
• Despite its huge size, the lookup table does what we want.
• The key challenge of AI:
• find out how to write programs that, to the extent possible, produce rational behavior
• from a small amount of code
• rather than from a large number of table entries
• E.g., a five-line program implementing Newton's method (sketched below)
• vs. huge tables of square roots, sines, cosines, …
Types of agent programs
• Four types
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Reflex Agent Diagram
[Figure: sensors perceive "what the world is like now"; condition-action rules determine "what should I do now"; the agent acts on the environment through actuators.]
Reflex Agent Program
• Application of simple rules to situations:

function SIMPLE-REFLEX-AGENT(percept) returns action
    condition := INTERPRET-INPUT(percept)
    rule := RULE-MATCH(condition, rules)
    action := RULE-ACTION(rule)
    return action
Simple reflex agents
A Simple Reflex Agent in Nature
• Percepts: (size, motion)
• Rules:
(1) If small moving object, then activate SNAP
(2) If large moving object, then activate AVOID and inhibit SNAP
(3) ELSE (not moving) then NOOP (needed for completeness)
• Action: SNAP, AVOID or NOOP
Model-based reflex agents
Example Table Agent with Internal State
[Figure: IF-THEN table mapping internal state and percept to action.]

Example Reflex Agent with Internal State: Wall-Following
[Figure: robot begins at "start" and follows the wall around the room.]
Goal-based agents
• Conclusion
• Goal-based agents are less efficient
• but more flexible
• Agent → different goals → different tasks
• Search and planning
• two other subfields of AI
• devoted to finding the action sequences that achieve the agent's goals (a search sketch follows below)
Utility-based agents
• State A is said to have higher utility than state B if A is preferred to B.
• Utility is therefore a function
• that maps a state onto a real number
• describing the degree of success
Utility-based agents
• Utility has several advantages:
• When there are conflicting goals,
• only some of the goals (but not all) can be achieved;
• utility describes the appropriate trade-off.
• When there are several goals,
• none of which can be achieved with certainty,
• utility provides a way of weighing the likelihood of success against the importance of the goals (a sketch follows below).
Agents
Some general features characterizing agents:
• Autonomy
• Goal-orientedness
• Collaboration
• Flexibility
• Ability to be self-starting
• Temporal continuity
• Character
• Adaptiveness
• Mobility
• Capacity to learn
Classification of agents
🔾 Interface Agents
AI techniques to provide assistance to the user
🔾 Mobile agents
capable of moving around networks gathering information
🔾 Co-operative agents
communicate with, and react to, other agents in a multi-agent
system within a common environment
🔾 Reactive agents
“reacts” to a stimulus or input that is governed by some state or
event in its environment
Distributed Computing Agents
🔾 Common learning goal (strong sense)
🔾 Separate goals but information sharing
(weak sense)
Learning Agents
• After an agent is programmed, can it work immediately?
• No, it still needs teaching.
• In AI,
• once an agent is built,
• we teach it by giving it a set of examples
• and test it using another set of examples.
• We then say the agent learns
• it is a learning agent.
Learning Agents
• Four conceptual components
• Learning element
• Making improvement
• Performance element
• Selecting external actions
• Critic
• Tells the Learning element how well the agent is doing with
respect to fixed performance standard.
(Feedback from user or examples, good or not?)
• Problem generator
• Suggests actions that will lead to new and informative experiences (a sketch of the four components wired together follows below).
Automated taxi example
• Performance element: whatever collection of knowledge and procedures the taxi has for selecting its driving actions.
• The taxi goes out on the road and drives, using this
performance element
Automated taxi example
• Critic: observes the world and passes information along to the learning element.
• For example, after the taxi makes a quick left turn
across three lanes of traffic, the critic observes the
shocking language used by other drivers.
• From this experience, the learning element is able to
formulate a rule saying this was a bad action, and the
performance element is modified by installation of
the new rule.
Automated taxi example
• Problem generator: might identify certain areas of behavior in need of improvement and suggest experiments, such as trying out the brakes on different road surfaces under different conditions.
• Learning element: if the taxi exerts a certain braking pressure when driving on a wet road, then it will soon find out how much deceleration is actually achieved.
• Clearly, these two learning tasks are more difficult if
the environment is only partially observable.
Automated taxi example
• The situation is slightly more complex for a utility-based agent that wishes to learn utility information.
• For example, suppose the taxi-driving agent receives no tips from passengers who have been thoroughly shaken up during the trip.
• The external performance standard must inform the
agent that the loss of tips is a negative contribution to
its overall performance; then the agent might be able
to learn that violent maneuvers do not contribute to
its own utility.
Automated taxi example
• In a sense, the performance standard distinguishes
part of the incoming percept as a reward (or penalty)
that provides direct feedback on the quality of the
agent’s behavior.
• Hard-wired performance standards such as pain and
hunger in animals can be understood in this way.
How do the components of agent programs work?
• The question for a student of AI is, “How on earth do these components work?”
How do the components of agent programs work?
• Agent’s organization
a) Atomic Representation: each state of the world is a black box that has no internal structure; e.g., in route finding, each state is simply a city. AI algorithms: search, games, Markov decision processes, hidden Markov models, etc.
b) Factored Representation: In this representation, each state has
some attribute value properties. E.g., GPS location, amount of gas in
the tank. AI algorithms: constraint satisfaction, and Bayesian networks.
c) Structured Representation: relationships between the objects of a state can be explicitly expressed. AI algorithms: first-order logic, knowledge-based learning, natural language understanding. (A sketch contrasting the three follows below.)
The major points to recall are as follows:
• An agent is something that perceives and acts in an environment.
• The agent function for an agent specifies the action taken by the
agent in response to any percept sequence.
• The performance measure evaluates the behavior of the agent in an
environment.
• A rational agent acts so as to maximize the expected value of the
performance measure, given the percept sequence it has seen so far.
• A task environment specification includes the performance measure,
the external environment, the actuators, and the sensors. In
designing an agent, the first step must always be to specify the task
environment as fully as possible.
The major points to recall are as follows:
• The agent program implements the agent function. There exists a
variety of basic agent-program designs reflecting the kind of
information made explicit and used in the decision process.
• The designs vary in efficiency, compactness, and flexibility. The
appropriate design of the agent program depends on the nature of
the environment.
• Simple reflex agents respond directly to percepts, whereas model-
based reflex agents maintain internal state to track aspects of the
world that are not evident in the current percept.
• Goal-based agents act to achieve their goals, and utility-based
agents try to maximize their own expected “happiness.”
• All agents can improve their performance through learning.