Module-1
INTRODUCTION
• We call ourselves Homo sapiens—man the wise—because our
intelligence is so important to us. For thousands of years, we have tried
to understand how we think; that is, how a mere handful of matter can
perceive, understand, predict, and manipulate a world far larger and
more complicated than itself.
• The field of artificial intelligence, or AI, goes further still: it attempts
not just to understand but also to build intelligent entities.
INTRODUCTION => WHAT IS AI?
• In Figure 1.1 we see eight definitions
of AI, laid out along two dimensions.
• The definitions on top are concerned
with THOUGHT PROCESSES and
REASONING, whereas the ones on
the bottom address BEHAVIOR.
• The definitions on the left measure
success in terms of fidelity to
HUMAN performance, whereas the
ones on the right measure against an
ideal performance measure, called
RATIONALITY.
• A system is rational if it does the
“right thing,” given what it knows.
INTRODUCTION => WHAT IS AI? => Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A
computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses
come from a person or from a computer.
The computer would need to possess the following capabilities:
• Natural Language Processing to enable it to communicate
successfully in English;
• Knowledge Representation to store what it knows or hears;
• Automated Reasoning to use the stored information to answer
questions and to draw new conclusions;
• Machine Learning to adapt to new circumstances and to detect and
extrapolate patterns.
The total Turing Test includes a video signal so that the interrogator
can test the subject’s perceptual abilities, as well as the
opportunity for the interrogator to pass physical objects “through
the hatch.” To pass the total Turing Test, the computer will need
• Computer Vision to perceive objects, and
• Robotics to manipulate objects and move about.
INTRODUCTION => WHAT IS AI? => Thinking humanly: The cognitive modeling approach
If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds. There
are three ways to do this:
1. Through introspection—trying to catch our own thoughts as they go by;
2. Through psychological experiments—observing a person in action; and
3. Through brain imaging—observing the brain in action.
The interdisciplinary field of cognitive science brings
together computer models from AI and experimental
techniques from psychology to construct precise and
testable theories of the human mind.
INTRODUCTION => WHAT IS AI? => Thinking rationally: The “laws of thought” approach
• The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,” that
is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures
that always yielded correct conclusions when given correct premises.
• For example, “Socrates is a man; all men are mortal; therefore, Socrates is mortal.”
• These laws of thought were supposed to govern the operation of the mind; their study initiated
the field called logic.
• The so-called logicist tradition within artificial intelligence hopes to build on such programs to
create intelligent systems.
• There are two main obstacles to this approach:
1. First, it is not easy to take informal knowledge and state it in the formal terms required by
logical notation, particularly when the knowledge is less than 100% certain.
2. Second, there is a big difference between solving a problem “in principle” and solving it
in practice.
INTRODUCTION => WHAT IS AI? => Acting rationally: The rational agent approach
• An agent is just something that acts (agent comes from the Latin agere, to do).
• Computer agents are expected to:
– operate autonomously,
– perceive their environment,
– persist over a prolonged time period,
– adapt to change, and
– create and pursue goals.
• A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected
outcome.
• In the “laws of thought” approach to AI, the emphasis was on correct inferences (conclusions).
• Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to
reason logically to the conclusion that a given action will achieve one’s goals and then to act on that conclusion.
• On the other hand, correct inference is not all of rationality; in some situations, there is no provably correct thing
to do, but something must still be done.
• There are also ways of acting rationally that cannot be said to involve inference.
THE STATE OF THE ART
• What can AI do today? A concise answer is difficult because there are
so many activities in so many subfields. Here we sample a few
applications:
• Robotic vehicles
• Speech recognition
• Autonomous planning and scheduling
• Game playing
• Spam fighting
• Logistics planning
• Robotics
• Machine Translation
Robotic Vehicles
• A driverless robotic car named STANLEY sped through the rough terrain of
the Mojave desert at 22 mph, finishing the 132-mile course first to win the
2005 DARPA Grand Challenge. STANLEY is a Volkswagen Touareg outfitted
with cameras, radar, and laser rangefinders to sense the environment and
onboard software to command the steering, braking, and acceleration
(Thrun, 2006). In 2007, CMU’s BOSS won the Urban Challenge,
safely driving in traffic through the streets of a closed Air Force base, obeying
traffic rules and avoiding pedestrians and other vehicles.
Speech Recognition
• A traveler calling United Airlines to book a flight can have the entire
conversation guided by an automated speech recognition and dialog
management system.
Autonomous planning and scheduling
• A hundred million miles from Earth, NASA’s Remote Agent program
became the first on-board autonomous planning program to control
the scheduling of operations for a spacecraft (Jonsson et al., 2000).
REMOTE AGENT generated plans from high-level goals specified from
the ground and monitored the execution of those plans—detecting,
diagnosing, and recovering from problems as they occurred.
Successor program MAPGEN (Ai-Chang et al., 2004) plans the daily
operations for NASA’s Mars Exploration Rovers, and MEXAR2 (Cesta et
al., 2007) did mission planning—both logistics and science planning—
for the European Space Agency’s Mars Express mission in 2008.
Game Playing
• IBM’s DEEP BLUE became the first computer
program to defeat the world champion in a chess
match when it bested Garry Kasparov by a score of
3.5 to 2.5 in an exhibition match (Goodman and
Keene, 1997). Kasparov said that he felt a “new
kind of intelligence” across the board from him.
Newsweek magazine described the match as “The
brain’s last stand.” The value of IBM’s stock
increased by $18 billion. Human champions
studied Kasparov’s loss and were able to draw a
few matches in subsequent years, but the most
recent human-computer matches have been won
convincingly by the computer.
Spam fighting
• Each day, learning algorithms classify over a billion messages as spam,
saving the recipient from having to waste time deleting what, for
many users, could comprise 80% or 90% of all messages, if not
classified away by algorithms. Because the spammers are continually
updating their tactics, it is difficult for a static programmed approach
to keep up, and learning algorithms work best (Sahami et al., 1998;
Goodman and Heckerman, 2004).
Logistics planning
• During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic
Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do
automated logistics planning and scheduling for transportation. This
involved up to 50,000 vehicles, cargo, and people at a time, and had
to account for starting points, destinations, routes, and conflict
resolution among all parameters. The AI planning techniques
generated in hours a plan that would have taken weeks with older
methods. The Defense Advanced Research Projects Agency (DARPA)
stated that this single application more than paid back DARPA’s 30-
year investment in AI.
Robotics
• The iRobot Corporation has sold over two million Roomba robotic
vacuum cleaners for home use. The company also deploys the more
rugged PackBot to Iraq and Afghanistan, where it is used to handle
hazardous materials, clear explosives, and identify the location of
snipers.
Machine Translation
• A computer program automatically translates from Arabic to English,
allowing an English speaker to see the headline “Ardogan Confirms
That Turkey Would Not Accept Any Pressure, Urging Them to
Recognize Cyprus.” The program uses a statistical model built from
examples of Arabic-to-English translations and from examples of
English text totaling two trillion words (Brants et al., 2007). None of
the computer scientists on the team speak Arabic, but they do
understand statistics and machine learning algorithms.
Chapter 2
Intelligent Agents
Agents and Environment
An agent is anything that can be viewed as
perceiving its environment through SENSORS and
acting upon that environment through
ACTUATORS. This simple idea is illustrated in
Figure 2.1. A human agent has eyes, ears, and
other organs for sensors and hands, legs, vocal
tract, and so on for actuators.
We use the term PERCEPT to refer to the agent’s
perceptual inputs at any given instant. An
agent’s PERCEPT SEQUENCE is the complete
history of everything the agent has ever
perceived.
Agents and Environment
• Mathematically speaking, we say that an agent’s behavior is described
by the AGENT FUNCTION that maps any given percept sequence to an
action.
• We can imagine tabulating the agent function that describes any given agent.
• Given an agent to experiment with, we can, in principle, construct this table
by trying out all possible percept sequences and recording which actions the
agent does in response. The table is, of course, an external characterization of
the agent. Internally, the agent function for an artificial agent will be
implemented by an AGENT PROGRAM.
• It is important to keep these two ideas distinct. The agent function is an
abstract mathematical description; the agent program is a concrete
implementation, running within some physical system.
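For concreteness, part of such a table can be written down; a sketch in Python, anticipating the two-square vacuum world introduced next (the dict encoding is our own illustration, not from the slides):

# Agent function: a (partial) table from percept sequences to actions.
agent_function = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... one entry for every possible percept sequence
}

The agent program, by contrast, is the finite piece of code that produces these entries on demand.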
The Vacuum-Cleaner World
• This world is so simple that we can describe everything that happens;
it’s also a made-up world, so we can invent many variations. This
particular world has just two locations: squares A and B. The vacuum
agent perceives which square it is in and whether there is dirt in the
square. It can choose to move left, move right, suck up the dirt, or do
nothing. One very simple agent function is the following: if the
current square is dirty, then suck; otherwise, move to the other
square.
The Vacuum-Cleaner World
The program below implements the agent function
tabulated in Fig. 2.3:
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
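A runnable Python rendering of this agent program; a direct sketch (the function and value names are ours):

def reflex_vacuum_agent(percept):
    # Reflex vacuum agent for the two-square world of Fig. 2.3.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

For example, reflex_vacuum_agent(("A", "Dirty")) returns "Suck".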
Concept of Rationality
Rational agent
– one that does the right thing
– i.e., every entry in the table for the agent function is filled in correctly
What is correct?
– the actions that cause the agent to be most successful
– so we need ways to measure success.
Performance measure
Performance measure
– an objective criterion that measures how successfully the agent behaves
– e.g., 90% or 30%?
An agent, based on its percepts, produces an action sequence:
– if the sequence is desirable, the agent is said to be performing well.
– there is no universal performance measure for all agents.
Performance measure
A general rule:
– design performance measures according to
– what one actually wants in the environment,
– rather than how one thinks the agent should behave.
E.g., in the vacuum-cleaner world:
– we want the floor clean, no matter how the agent behaves
– we don’t restrict how the agent behaves
Rationality
What is rational at any given time depends on four
things:
– the performance measure defining the criterion of success
– the agent’s prior knowledge of the environment
– the actions that the agent can perform
– the agent’s percept sequence up to now
Rational agent
For each possible percept sequence,
– a rational agent should select
– an action expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
E.g., in an exam:
– maximize marks, based on
– the questions on the paper and your knowledge
Example of a rational agent
Performance measure
– awards one point for each clean square
– at each time step, over 10,000 time steps
Prior knowledge about the environment
– the geography of the environment
– only two squares
– the effects of the actions
Actions the agent can perform
– Left, Right, Suck, and NoOp
Percept sequence
– where the agent is
– whether the location contains dirt
Under these circumstances, the agent is
rational.
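To make this performance measure concrete, the world can be simulated; a sketch, using the reflex_vacuum_agent from earlier (the dirt-arrival model is our own assumption, not from the slides):

import random

def run_vacuum_world(agent, steps=10000, dirt_prob=0.05):
    # Performance measure: one point per clean square per time step.
    status = {"A": "Dirty", "B": "Dirty"}
    location, score = "A", 0
    for _ in range(steps):
        score += sum(s == "Clean" for s in status.values())
        action = agent((location, status[location]))
        if action == "Suck":
            status[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        for square in status:  # assumed: dirt occasionally reappears
            if random.random() < dirt_prob:
                status[square] = "Dirty"
    return score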
Omniscience
An omniscient agent
– knows the actual outcome of its actions in advance
– there are no other possible outcomes for it
– however, omniscience is impossible in the real world
An example:
– you check the traffic before crossing a street, but are killed by a
cargo door that falls off a plane at 33,000 ft → was crossing irrational?
Given the circumstances, crossing was rational:
– rationality maximizes expected performance,
– while perfection maximizes actual performance.
Hence rational agents are not omniscient.
Learning
Does a rational agent depend only on the current
percept?
– No, the past percept sequence should also be used
– this is called learning
– after experiencing an episode, the agent should adjust its
behavior to perform better at the same job next time.
Autonomy
If an agent just relies on the prior knowledge of its
designer rather than on its own percepts, then the
agent lacks autonomy.
A rational agent should be autonomous: it should
learn what it can to compensate for partial or
incorrect prior knowledge.
E.g., a clock
– no input (percepts)
– runs only its own algorithm (prior knowledge)
– no learning, no experience, etc.
Software Agents
Sometimes the environment may not be the
real world
– e.g., a flight simulator, video games, the Internet
– these are all artificial but very complex environments
Agents working in these environments are called
– software agents (softbots),
– because all parts of the agent are software.
Task environments
Task environments are the problems,
– while the rational agents are the solutions.
Specifying the task environment:
– give a PEAS description, as fully as possible:
– Performance measure
– Environment
– Actuators
– Sensors
In designing an agent, the first step must always be to specify
the task environment as fully as possible.
We use an automated taxi driver as an example.
Task environments
Performance measure
– How can we judge the automated driver? Which factors are considered?
– getting to the correct destination
– minimizing fuel consumption
– minimizing the trip time and/or cost
– minimizing violations of traffic laws
– maximizing safety and comfort, etc.
Environment
A taxi must deal with a variety of roads
– traffic lights, other vehicles, pedestrians, stray
animals, road works, police cars, etc.
– It must also interact with the customer
Task environments
Actuators (for outputs)
– control over the accelerator, steering, gear
shifting, and braking
– a display to communicate with the customers
Sensors (for inputs)
– detect other vehicles and road situations
– GPS (Global Positioning System) to know where
the taxi is
– many more devices are necessary
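A PEAS description fits naturally in a small data structure; a sketch in Python (the field names are ours, and the entries just restate the taxi example above):

from dataclasses import dataclass

@dataclass
class PEAS:
    # PEAS description of a task environment (a sketch).
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi_peas = PEAS(
    performance=["correct destination", "minimize fuel", "minimize time/cost",
                 "minimize traffic violations", "maximize safety and comfort"],
    environment=["roads", "traffic lights", "other vehicles", "pedestrians",
                 "stray animals", "road works", "police", "customers"],
    actuators=["accelerator", "steering", "gear shifting", "brakes", "display"],
    sensors=["vehicle detectors", "road sensors", "GPS"],
)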
Task environments
A sketch of the automated taxi driver (figure)
Properties of task environments
Fully observable vs. partially observable
– If an agent’s sensors give it access to the complete state
of the environment at each point in time, then the
environment is fully observable.
– An environment is effectively fully observable if the sensors
detect all aspects that are relevant to the choice of action.
Partially observable
• An environment might be partially observable
because of noisy and inaccurate sensors, or
because parts of the state are simply missing from
the sensor data.
• Example:
– the local dirt sensor of the vacuum cleaner cannot tell
whether other squares are clean or not.
Properties of task environments
Deterministic vs. stochastic
– If the next state of the environment is completely determined
by the current state and the actions executed by the agent,
then the environment is deterministic; otherwise, it is stochastic.
– Strategic environment: deterministic except for the actions
of other agents
• The vacuum cleaner and the taxi driver are:
– stochastic, because of some unobservable aspects → noise or
unknowns
Properties of task environments
Episodic vs. sequential
– An episode = the agent’s single pair of percept and action
– The quality of the agent’s action does not depend on other
episodes: every episode is independent of the others
– An episodic environment is simpler: the agent does not
need to think ahead
Sequential:
– the current action may affect all future decisions
– e.g., taxi driving and chess.
Properties of task environments
Static vs. dynamic
– A dynamic environment is always changing over time
– e.g., the number of people in the street
– A static environment does not change
– e.g., the destination
Semidynamic:
– the environment does not change over time,
but the agent’s performance score does
Properties of task environments
Discrete vs. continuous
– If there are a limited number of distinct states, and
clearly defined percepts and actions, then the
environment is discrete
– e.g., a chess game
– Continuous: taxi driving
Properties of task environments
Single agent vs. multiagent
– playing a crossword puzzle: a single agent
– chess playing: two agents
– Competitive multiagent environment
– e.g., chess playing
– Cooperative multiagent environment
– e.g., automated taxi drivers avoiding collisions
Properties of task environments
Known vs. unknown
– This distinction refers not to the environment itself but to the
agent’s (or designer’s) state of knowledge about the
environment.
– In a known environment, the outcomes for all actions are
given (example: solitaire card games).
– If the environment is unknown, the agent will have to learn
how it works in order to make good decisions (example: a
new video game).
Examples of task environments
Structure of agents
Agent = architecture + program
– Architecture = some sort of computing device (sensors
+ actuators)
– (Agent) Program = some function that implements the
agent mapping (the “?”)
– Agent Program = the job of AI
Agent programs
Input for the agent program:
– only the current percept
Input for the agent function:
– the entire percept sequence
– the agent must remember all of it
One way to implement the agent program:
– as a lookup table (tabulating the agent function)
Agent Programs
Skeleton design of an agent program
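The skeleton referred to here is the table-driven agent program; a minimal Python sketch under that assumption (the names are ours):

percepts = []  # the percept sequence remembered so far

def table_driven_agent(percept, table):
    # Append the new percept, then look the whole sequence up in the table.
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

# Example with a one-entry table: first percept (A, Dirty) -> Suck
print(table_driven_agent(("A", "Dirty"), {(("A", "Dirty"),): "Suck"}))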
Agent Programs
P = the set of possible percepts
T = the lifetime of the agent
– the total number of percepts it receives
Size of the lookup table: $\sum_{t=1}^{T} |P|^t$ entries
Consider playing chess:
– |P| = 10, T = 150
– will require a table of at least 10^150 entries
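For intuition, the sum can be evaluated directly; a tiny sketch:

def lookup_table_size(num_percepts, lifetime):
    # Sum_{t=1}^{T} |P|**t : one entry per possible percept sequence.
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

print(lookup_table_size(10, 150))  # about 1.1e150 for the chess estimate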
Agent Programs
Despite its huge size, the lookup table does what we
want.
The key challenge of AI:
– find out how to write programs that, to the extent
possible, produce rational behavior
– from a small amount of code
– rather than from a large number of table entries
– e.g., a five-line program implementing Newton’s method
– vs. huge tables of square roots, sines, cosines, …
Types of agent programs
Four types:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
Simple reflex agents
It uses just condition–action rules
– the rules are of the form “if … then …”
– efficient, but with a narrow range of applicability
– because knowledge sometimes cannot be stated explicitly
– works only if the environment is fully observable
Simple reflex agents (agent structure figure)
A Simple Reflex Agent in Nature
Percepts: (size, motion)
Rules:
(1) If small moving object, then activate SNAP
(2) If large moving object, then activate AVOID and inhibit SNAP
(3) Else (not moving), NOOP (needed for completeness)
Action: SNAP or AVOID or NOOP
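These rules translate directly into code; a minimal sketch (the function and argument names are ours):

def frog_reflex_agent(size, moving):
    # Condition-action rules of the snapping agent above.
    if moving and size == "small":
        return "SNAP"
    if moving and size == "large":
        return "AVOID"  # AVOID also inhibits SNAP
    return "NOOP"       # not moving: the rule needed for completeness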
Model-based Reflex Agents
For a world that is partially observable,
– the agent has to keep track of an internal state
– that depends on the percept history
– and reflects at least some of the unobserved aspects
– e.g., driving a car and changing lanes
This requires two types of knowledge:
– how the world evolves independently of the agent
– how the agent’s actions affect the world
Example Table Agent with Internal State

IF                                              THEN
Saw an object ahead, and turned right,          Go straight
and it's now clear ahead
Saw an object ahead, turned right,              Halt
and object ahead again
See no objects ahead                            Go straight
See an object ahead                             Turn randomly
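A sketch of this table agent with its internal state made explicit (the class and attribute names are ours):

import random

class TableAgentWithState:
    # Model-based reflex agent for the object-avoidance table above.
    def __init__(self):
        self.saw_object = False  # internal state: object ahead last step?
        self.turned = False      # internal state: did we just turn?

    def act(self, object_ahead):
        if self.saw_object and self.turned:
            action = "halt" if object_ahead else "go-straight"
        elif object_ahead:
            action = random.choice(["turn-left", "turn-right"])  # turn randomly
        else:
            action = "go-straight"
        self.saw_object = object_ahead
        self.turned = action.startswith("turn")
        return action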
Example Reflex Agent With Internal State:
Wall-Following
Actions: left, right, straight, open-door
Rules:
1. If open(left) and open(right) and open(straight), then
choose randomly between right and left
2. If wall(left) and open(right) and open(straight), then straight
3. If wall(right) and open(left) and open(straight), then straight
4. If wall(right) and open(left) and wall(straight), then left
5. If wall(left) and open(right) and wall(straight), then right
6. If wall(left) and door(right) and wall(straight), then open-door
7. If wall(right) and wall(left) and open(straight), then straight
8. (Default) Move randomly
Model-based Reflex Agents
The agent has memory (internal state).
Goal-based agents
The current state of the environment alone is not always
enough
– the goal is another thing to achieve
– it provides the judgment of rationality / correctness
Actions are chosen to achieve goals, based on
– the current state
– the current percept
Goal-based agents
Conclusion
– Goal-based agents are less efficient,
– but more flexible
– one agent → different goals → different tasks
– Search and planning
– two other subfields of AI
– find the action sequences that achieve the agent’s goal
Goal-based agents
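Search, mentioned above, finds such action sequences; a minimal breadth-first sketch (the problem encoding is our own, reusing the two-square vacuum world):

from collections import deque

def bfs_plan(start, goal, successors):
    # Breadth-first search for an action sequence reaching the goal.
    # successors(state) -> iterable of (action, next_state) pairs.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Example: state = (location, dirt in A?, dirt in B?)
def vacuum_successors(state):
    loc, a, b = state
    yield "Left",  ("A", a, b)
    yield "Right", ("B", a, b)
    yield "Suck",  (loc, False if loc == "A" else a, False if loc == "B" else b)

print(bfs_plan(("A", True, True), ("B", False, False), vacuum_successors))
# -> ['Suck', 'Right', 'Suck']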
Utility-based agents
Goals alone are not enough
– to generate high-quality behavior
– e.g., meals in a canteen: good or not?
Many action sequences can achieve the goal
– some are better and some worse
– if goal means success,
– then utility means the degree of success (how
successful it is)
Utility-based agents
State A is said to have higher utility
– if state A is preferred over others
Utility is therefore a function
– that maps a state onto a real number
– describing the degree of success
Utility-based agents
Utility has several advantages:
– when there are conflicting goals,
– only some of the goals (not all) can be achieved,
– utility describes the appropriate trade-off
– when there are several goals,
– none of which is achieved with certainty,
– utility provides a way to weigh them for decision-making
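A sketch of a utility function resolving such a trade-off, and of an agent choosing the action with the highest-utility outcome (the weights and state encoding are illustrative assumptions):

def utility(state):
    # Weighted trade-off between conflicting goals,
    # e.g. trip progress vs. risk for the taxi.
    return 0.7 * state["progress"] - 0.3 * state["risk"]

def best_action(state, actions, result):
    # result(state, action) -> predicted next state (from the agent's model).
    return max(actions, key=lambda a: utility(result(state, a)))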
Learning Agents
After an agent is programmed, can it work
immediately?
– No, it still needs teaching
In AI,
– once an agent is built,
– we teach it by giving it a set of training examples
– and test it using another set of examples
We then say the agent learns
– a learning agent
Learning Agents
Four conceptual components:
– Learning element
– makes improvements
– Performance element
– selects external actions
– Critic
– tells the learning element how well the agent is doing with
respect to a fixed performance standard
(feedback from the user or from examples: good or not?)
– Problem generator
– suggests actions that will lead to new and informative experiences.
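These four components can be wired together as below; a structural sketch only (all names are ours, and the component callables are assumptions):

class LearningAgent:
    # Skeleton of the four components described above (a sketch).
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance = performance_element  # selects external actions
        self.learner = learning_element         # makes improvements
        self.critic = critic                    # scores behavior vs. standard
        self.explorer = problem_generator       # suggests informative actions

    def step(self, percept):
        feedback = self.critic(percept)           # how well are we doing?
        self.learner(feedback, self.performance)  # improve the performance element
        return self.explorer() or self.performance(percept)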
Learning Agents
How the components of agent programs work
THANK YOU