Artificial Intelligence (Week 1-l2)

The document provides an introduction to artificial intelligence, covering its history, goals, applications, and various approaches including the Turing Test and rational agents. It discusses the concept of agents, their types, and the importance of task environments defined by performance measures, environments, actuators, and sensors. Additionally, it categorizes environments and agents based on observability, determinism, and other characteristics, while outlining different types of agents such as simple reflex, model-based, goal-based, and utility-based agents.


ARTIFICIAL INTELLIGENCE (WEEK 1 – LEC 1)

BSCS (6A + 6B)

INTRODUCTION TO ARTIFICIAL INTELLIGENCE:


HISTORY, GOALS, APPLICATIONS, DIFFERENT AI
APPROACHES, TURING TEST, COGNITIVE MODELS,
SYLLOGISM, RATIONAL AGENTS
INTELLIGENT AGENTS, THE CONCEPT OF RATIONALITY, THE
NATURE OF ENVIRONMENTS, THE STRUCTURE OF AGENTS
RECOMMENDED BOOK

Artificial Intelligence: A Modern Approach
Stuart J. Russell and Peter Norvig
AGENT

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors/actuators.
AGENT WITH AN ENVIRONMENT
AGENT

 Abstractly, an agent is a function that maps percept histories to actions:
 f: P* → A
 Internally, the agent function is implemented by an agent program, which runs on the physical architecture to produce f.
 Agent = architecture + program
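The distinction between the abstract agent function f: P* → A and the agent program the architecture runs can be sketched in code. This is a minimal illustration with invented percepts and actions ("Dirty"/"Clean" and "Suck"/"Move"), not any particular implementation:

```python
from typing import List

Percept = str
Action = str

def agent_function(percept_history: List[Percept]) -> Action:
    """Abstract agent function f: P* -> A.

    Maps the entire percept history seen so far to an action.
    This toy policy only looks at the latest percept.
    """
    if not percept_history:
        return "NoOp"
    return "Suck" if percept_history[-1] == "Dirty" else "Move"

def run(percepts: List[Percept]) -> List[Action]:
    """The agent program: the concrete loop the architecture executes,
    accumulating the percept history and producing f."""
    history: List[Percept] = []
    actions: List[Action] = []
    for p in percepts:
        history.append(p)
        actions.append(agent_function(history))
    return actions

print(run(["Dirty", "Clean", "Dirty"]))  # ['Suck', 'Move', 'Suck']
```

Here `run` plays the role of the agent program; in a real agent the architecture would feed it live sensor readings instead of a prepared list.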
AGENT EXAMPLES
Human agent:
 Sensors  eyes, ears, and other organs
 Actuators  hands, legs, mouth, and other body parts
Robotic agent:
 Sensors  cameras and infrared range finders
 Actuators  various motors
Software agent:
 Sensors  keystrokes, file contents, received network packets
 Actuators  screen display, written files, sent network packets
AGENT EXAMPLES
A vacuum-cleaner world with just two locations. Each location can be clean or dirty, and the agent can move left or right and can clean the square that it occupies. Different versions of the vacuum world allow for different rules about what the agent can perceive, whether its actions always succeed, and so on.
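The two-location vacuum world above can be simulated in a few lines. This is a minimal sketch, assuming a simple performance measure of +1 for every clean square at every time step (one common choice); the `step`/`run` names and the random placeholder agent are invented for illustration:

```python
import random

# Two-location vacuum world: squares "A" and "B", each "Clean" or "Dirty".

def step(world, location, action):
    """Apply one agent action to the environment and return its new state."""
    if action == "Suck":
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
    return world, location

def run(agent, world, location="A", steps=8):
    """Run the agent, awarding +1 per clean square per time step."""
    score = 0
    for _ in range(steps):
        action = agent(location, world[location])  # percept -> action
        world, location = step(world, location, action)
        score += sum(1 for s in world.values() if s == "Clean")
    return score

# A random agent, just to exercise the environment:
random_agent = lambda loc, status: random.choice(["Suck", "Left", "Right"])
```

Swapping in different agents (random, reflex, model-based) and comparing scores under this measure is a useful exercise.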
AGENT EXAMPLES
RATIONALITY

What is rational at any given time depends on four things:
1. The performance measure that defines the criterion of success.
2. The agent's prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent's percept sequence to date.
TASK ENVIRONMENT

In designing an agent, the first step must always be to specify the task environment (PEAS) as fully as possible.
PEAS:
 Performance measure
 Environment
 Actuators
 Sensors
TASK ENVIRONMENT EXAMPLE

PEAS for a Medical Diagnosis System
 Performance measure: Healthy patient, minimized costs, etc.
 Environment: Patient, hospital, staff, etc.
 Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
 Sensors: Keyboard (entry of symptoms, findings, patient's answers)
TASK ENVIRONMENT EXAMPLE

PEAS for Automated Taxi Driving
 Performance measure: Safe, fast, legal, comfortable trip, maximized profits, etc.
 Environment: Roads, other traffic, pedestrians, customers, etc.
 Actuators: Steering wheel, accelerator, brakes, signal, horn, etc.
 Sensors: Cameras, speedometer, GPS, odometer, engine sensors, keyboard, etc.
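A PEAS description is just a structured record, so it can be written down as a small data type. This is a sketch with a hypothetical `TaskEnvironment` class; the field names mirror the four PEAS components, and the values restate the taxi example above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TaskEnvironment:
    """A PEAS description of a task environment."""
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

taxi = TaskEnvironment(
    performance_measure=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brakes", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer", "engine sensors"],
)
```

Writing the description down this explicitly forces each of the four components to be spelled out before any agent design begins.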
TASK ENVIRONMENT EXAMPLE

PEAS for a Satellite Image Analysis System
 Performance measure: Correct image categorization
 Environment: Downlink from the orbiting satellite
 Actuators: Display of scene categorization
 Sensors: Color pixel arrays
ENVIRONMENT TYPES

 Fully observable vs. partially observable
 Deterministic vs. stochastic
 Episodic vs. sequential
 Static vs. dynamic
 Discrete vs. continuous
 Single agent vs. multi-agent
FULLY OBSERVABLE VS. PARTIALLY OBSERVABLE

Fully Observable
 An agent's sensors give it access to the complete state of the environment at each point in time.
 Convenient, because the agent need not maintain any internal state to keep track of the world.
 Chess game – the agent sees the entire board.

Partially Observable
 Parts of the state are simply missing from the sensor data; sensors may be noisy and inaccurate.
 A vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares.
 An automated taxi cannot see what other drivers are thinking.
DETERMINISTIC VS. STOCHASTIC

Deterministic
 The next state of the environment is completely determined by the current state and the action executed by the agent.
 The vacuum-cleaner world is deterministic.

Stochastic
 If the environment is partially observable, it may appear stochastic to the agent.
 Taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly.
 Rolling a die in a Ludo game.
STATIC VS. DYNAMIC

Static
 Static environments are unchanged while the agent deliberates, and are easy to deal with because the agent need not keep looking at the world while it is deciding on an action.
 Crossword puzzle

Dynamic
 If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent.
 Real-time traffic while using Google Maps
DISCRETE VS. CONTINUOUS

The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

Discrete
 A limited number of possible states or actions.
 Chess (finite moves)

Continuous
 An infinite or very large number of possible states or actions.
 Robot arm movement in physical space
EPISODIC VS. SEQUENTIAL

Episodic
 In episodic environments, the agent's experience is divided into atomic episodes.
 Each episode consists of the agent perceiving and then performing a single action.
 The next episode does not depend on the actions taken in previous episodes.
 E.g. classification tasks

Sequential
 In sequential environments, the current decision could affect all future decisions.
 E.g. chess and taxi driving
SINGLE AGENT VS. MULTI-AGENT

Single Agent
 Only one agent operates in the environment.
 Examples: vacuum cleaner robot, crossword puzzle

Multi-Agent
 Multiple agents interact or compete in the same environment.
 Examples: an online multiplayer game, autonomous cars on a road, chess
TYPES OF AGENT

There are four basic kinds of agents:
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
SIMPLE REFLEX AGENTS

Simple reflex agents select actions on the basis of the current percept, ignoring the rest of the percept history.
 Condition-action rules
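A condition-action rule table for the two-location vacuum world makes this concrete. This is a minimal sketch: the agent's choice depends only on the current percept (location, status), and the percept history plays no role:

```python
# Condition-action rules keyed on the CURRENT percept only.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Look up the action for the current percept; no memory is kept."""
    location, status = percept
    return RULES[(location, status)]

print(simple_reflex_agent(("A", "Dirty")))  # Suck
```

Because the table has no access to past percepts, such an agent works only when the correct action is fully determined by what it senses right now.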
SIMPLE REFLEX AGENTS

Schematic diagram of a simple reflex agent. We use rectangles to denote the current internal state of the agent's decision process, and ovals to represent the background information used in the process.
MODEL BASED REFLEX AGENTS

A model-based reflex agent handles partial observability more effectively.
It maintains some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
Updating this internal state requires two kinds of knowledge:
1. How the world evolves independently of the agent.
2. How the agent's own actions affect the world.
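The two kinds of knowledge can be seen in a small vacuum-world sketch. The class below is an illustration, not a standard implementation: the agent remembers the believed status of each square (internal state), updates it from percepts, and updates it again to reflect the effect of its own Suck action:

```python
class ModelBasedVacuumAgent:
    """Model-based reflex agent for the two-location vacuum world."""

    def __init__(self):
        # Internal model: believed status of each square.
        self.model = {"A": "Unknown", "B": "Unknown"}

    def act(self, percept):
        location, status = percept
        # Knowledge 1: update the model from what the sensors report.
        self.model[location] = status
        if status == "Dirty":
            # Knowledge 2: our own Suck action will leave this square clean.
            self.model[location] = "Clean"
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"  # the model says the whole world is clean
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, this one can stop once its model says both squares are clean, even though its dirt sensor is purely local.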
MODEL BASED REFLEX AGENTS
GOAL BASED AGENTS

 Information about the current state of the environment is not always enough to decide what to do (e.g. a decision at a road junction).
 The agent needs some sort of goal information that describes situations that are desirable.
 The agent program can combine this with information about the results of possible actions in order to choose actions that achieve the goal.
 This usually requires some search and planning.
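The road-junction idea can be sketched with a tiny search. The road map, place names, and `plan`/`goal_based_agent` helpers below are all invented for illustration; the point is that the agent chooses its action by searching for a path to the goal rather than reacting to the current state alone:

```python
from collections import deque

# Hypothetical toy road map: each place maps to the places reachable from it.
ROADS = {
    "Junction": ["North", "East"],
    "North": ["Town"],
    "East": ["Town", "City"],
    "Town": [],
    "City": [],
}

def plan(start, goal, roads=ROADS):
    """Breadth-first search; returns a list of states from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

def goal_based_agent(state, goal):
    """Act by taking the first step of a planned path to the goal."""
    path = plan(state, goal)
    return path[1] if path and len(path) > 1 else None
```

At the junction, the same current state yields different actions for different goals, which is exactly what a reflex agent cannot express.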
GOAL BASED AGENTS
UTILITY BASED AGENTS

 Goals provide only a binary distinction between happy and unhappy states.
 The agents discussed so far have had a single goal.
 Agents may have to juggle conflicting goals.
 They need to optimize utility over a range of goals.
 Utility: a measure of happiness; the quality of being useful.
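Utility turns the binary goal test into a number to maximize. The sketch below uses invented actions and probabilities (a route choice where each action has several possible outcomes): the agent computes each action's expected utility and picks the best:

```python
# Hypothetical outcome model: action -> list of (probability, utility) pairs.
OUTCOMES = {
    "highway": [(0.8, 10.0), (0.2, -5.0)],  # usually fast, sometimes jammed
    "backroad": [(1.0, 6.0)],               # reliably mediocre
}

def expected_utility(action):
    """Sum of utility weighted by probability over the action's outcomes."""
    return sum(p * u for p, u in OUTCOMES[action])

def utility_based_agent():
    """Choose the action with the highest expected utility."""
    return max(OUTCOMES, key=expected_utility)
```

With these numbers the highway wins (0.8·10 + 0.2·(−5) = 7 versus 6), but shifting the jam probability flips the choice; a goal-based agent, which only asks "does this reach the goal?", cannot make that trade-off.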
UTILITY BASED AGENTS
