
Intelligent Agents

CHAPTER 2
Oliver Schulte
Summer 2011
https://www2.cs.sfu.ca/CourseCentral/310/oschulte/chapter2.ppt
Outline
2

 Agents and environments
 Rationality
 PEAS (Performance measure, Environment, Actuators, Sensors)
 Environment types
 Agent types


Agents
3

• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

• Human agent:
– eyes, ears, and other organs for sensors
– hands, legs, mouth, and other body parts for actuators

• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators


Agents and environments
4

• The agent function maps from percept histories to actions:
[f: P* → A]
• The agent program runs on the physical architecture to produce f
• agent = architecture + program
(A minimal code sketch follows.)
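
A minimal sketch of this view in Python (illustrative only; the class and names are my own, not from the slides): the Agent object holds the agent program, the environment feeds it percepts, and the agent function f: P* → A is realized by the program acting on the stored percept history.

```python
# Minimal agent skeleton (illustrative sketch, not from the slides).
class Agent:
    def __init__(self, program):
        self.program = program        # the agent program
        self.percept_history = []     # P*: all percepts seen so far

    def act(self, percept):
        """Realize the agent function f: P* -> A."""
        self.percept_history.append(percept)
        return self.program(self.percept_history)

# Example program: respond to the latest percept with a dummy action.
agent = Agent(lambda history: f"respond-to-{history[-1]}")
print(agent.act("ping"))   # respond-to-ping
```
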
Vacuum-cleaner world
5

Demo:
http://www.ai.sri.com/~oreilly/aima3ejava/aima3ejavademos.html

 Percepts: location and contents, e.g., [A,Dirty]
 Actions: Left, Right, Suck, NoOp
 Agent’s function can be written as a look-up table (see the sketch below)
 For many agents this is a very large table
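
As a concrete illustration of the look-up-table idea, here is a small sketch (my own, assuming the two-square vacuum world). The table is keyed on the full percept history, which is exactly why this approach explodes for richer environments.

```python
# Table-driven vacuum agent for the two-square world (illustrative sketch).
# Keys are tuples of percepts seen so far; values are actions.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    # ... one entry per possible percept history: the table grows without bound.
}

percept_history = []

def table_driven_agent(percept):
    percept_history.append(percept)
    return table.get(tuple(percept_history), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # Suck
```
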


Rational agents
6

• What is rational depends on:
– The performance measure that defines success
– The agent’s prior knowledge of the environment
– The actions the agent can perform
– The agent’s percept sequence to date

• Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
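
Stated compactly (my own notation, not from the slides; it assumes the performance measure U can be treated as an expected value over outcomes):

```latex
% Rational action choice given percept sequence p_1..p_t and built-in knowledge K
% (notation is mine; U denotes the performance measure):
a^{*} = \operatorname*{arg\,max}_{a \in A} \; \mathbb{E}\left[\, U \mid p_1, \ldots, p_t, K, a \,\right]
```
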


Examples of Rational Choice
7

See File: intro-choice.doc



Rationality
8

Rationality is different from omniscience
 Percepts may not supply all relevant information
 E.g., in a card game, you don’t know the other players’ cards.

Rationality is different from perfection
 Rationality maximizes expected outcome, while perfection maximizes actual outcome.


Autonomy in Agents

The autonomy of an agent is the extent to which its behaviour is determined by its own experience, rather than by the knowledge built in by its designer.

Extremes
 No autonomy – ignores environment/data
 Complete autonomy – no built-in program; must act randomly
Example: a baby learning to crawl
Ideal: design agents to have some autonomy
 Possibly become more autonomous with experience
PEAS
10
• PEAS: Performance measure, Environment, Actuators, Sensors

• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
– Performance measure: safe, fast, legal, comfortable trip, maximize profits
– Environment: roads, other traffic, pedestrians, customers
– Actuators: steering wheel, accelerator, brake, signal, horn
– Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
(An illustrative PEAS record in code follows.)
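
Purely as an illustration (not part of the slides), a PEAS description can be captured as a small data record; the field names below are my own choice.

```python
from dataclasses import dataclass, field

# Illustrative PEAS record (field names are hypothetical).
@dataclass
class PEAS:
    performance_measure: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable", "profitable"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
```
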
PEAS
11

Agent: Part-picking robot
 Performance measure: percentage of parts in correct bins
 Environment: conveyor belt with parts, bins
 Actuators: jointed arm and hand
 Sensors: camera, joint angle sensors


PEAS
12

Agent: Interactive English tutor
 Performance measure: maximize student’s score on test
 Environment: set of students
 Actuators: screen display (exercises, suggestions, corrections)
 Sensors: keyboard


Environment types
13

• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multiagent)


Fully observable (vs. partially observable)
14

Is everything an agent requires to choose its actions available to it via its sensors? Perfect or full information.
 If so, the environment is fully accessible
 If not, parts of the environment are inaccessible
 Agent must make informed guesses about the world.
In decision theory: perfect information vs. imperfect information.

Crossword: Fully | Poker: Partially | Backgammon: Partially | Taxi driver: Partially | Part-picking robot: Fully | Image analysis: Fully


Deterministic (vs. stochastic)
15

Does the change in world state depend only on the current state and the agent’s action?
Non-deterministic environments
 Have aspects beyond the control of the agent
 Utility functions have to guess at changes in the world

Crossword: Deterministic | Poker: Stochastic | Backgammon: Stochastic | Taxi driver: Stochastic | Part-picking robot: Stochastic | Image analysis: Deterministic


Episodic (vs. sequential):
16

 Is the choice of the current action dependent on previous actions?
 If not, the environment is episodic
 In non-episodic (sequential) environments:
 The agent has to plan ahead: the current choice will affect future actions

Crossword: Sequential | Poker: Sequential | Backgammon: Sequential | Taxi driver: Sequential | Part-picking robot: Episodic | Image analysis: Episodic


Static (vs. dynamic):
17
Static environments don’t change
 While the agent is deliberating over what to do

Dynamic environments do change
 So the agent should/could consult the world when choosing actions
 Alternatively: anticipate the change during deliberation OR make the decision very fast

Semidynamic: the environment itself does not change with the passage of time, but the agent’s performance score does.

Crossword: Static | Poker: Static | Backgammon: Static | Taxi driver: Dynamic | Part-picking robot: Dynamic | Image analysis: Semidynamic

Another example: off-line route planning vs. an on-board navigation system
Discrete (vs. continuous)
18

 A limited number of distinct, clearly defined percepts and actions vs. a range of values (continuous)

Crossword: Discrete | Poker: Discrete | Backgammon: Discrete | Taxi driver: Continuous | Part-picking robot: Continuous | Image analysis: Continuous


Single agent (vs. multiagent):
19

 Is the agent operating by itself in the environment, or are there many agents interacting?

Crossword: Single | Poker: Multi | Backgammon: Multi | Taxi driver: Multi | Part-picking robot: Single | Image analysis: Single


Summary

Task | Observable | Deterministic | Episodic | Static | Discrete | Agents
Crossword | Fully | Deterministic | Sequential | Static | Discrete | Single
Poker | Fully | Stochastic | Sequential | Static | Discrete | Multi
Backgammon | Partially | Stochastic | Sequential | Static | Discrete | Multi
Taxi driver | Partially | Stochastic | Sequential | Dynamic | Continuous | Multi
Part-picking robot | Partially | Stochastic | Episodic | Dynamic | Continuous | Single
Image analysis | Fully | Deterministic | Episodic | Semidynamic | Continuous | Single


Choice under (Un)certainty
21

[Decision diagram] Fully observable? — yes → Deterministic? — yes → choice under certainty, handled by search. A "no" at either step leads to choice under uncertainty.


Agent types
22

Four basic types, in order of increasing generality:
 Simple reflex agents
 Reflex agents with state/model
 Goal-based agents
 Utility-based agents
 All of these can be turned into learning agents
 Demos: http://www.ai.sri.com/~oreilly/aima3ejava/aima3ejavademos.html


Simple reflex agents
23



Simple reflex agents
24

 Simple but very limited intelligence.
 Action does not depend on percept history, only on the current percept.
 Therefore no memory requirements.
 Infinite loops
 Suppose the vacuum cleaner does not observe its location. What do you do given percept = Clean? Left in A or Right in B -> infinite loop.
 Fly buzzing around a window or light.
 Possible solution: randomize the action (see the sketch below).
 Thermostat.
 Chess – openings, endings
 Lookup table (not a good idea in general)
 35^100 entries required for the entire game
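
A minimal sketch (my own, assuming the two-square vacuum world) of a simple reflex agent with a randomized move to escape the infinite-loop problem when location is not observed:

```python
import random

# Simple reflex vacuum agent that cannot observe its location (sketch).
# Rule: if the square is dirty, suck; otherwise move in a random direction,
# which breaks the infinite loop that a fixed "always Left" (or "always Right")
# rule would cause at the edge squares.
def reflex_vacuum_agent(percept):
    status = percept            # the percept is just "Dirty" or "Clean" here
    if status == "Dirty":
        return "Suck"
    return random.choice(["Left", "Right"])

for percept in ["Dirty", "Clean", "Clean"]:
    print(percept, "->", reflex_vacuum_agent(percept))
```
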


States: Beyond Reflexes
25

• Recall the agent function that maps from percept histories to actions:
[f: P* → A]
 An agent program can implement an agent function by maintaining an internal state.
 The internal state can contain information about the state of the external environment.
 The state depends on the history of percepts and on the history of actions taken:
[f: P* × A* → S], where S is the set of states; the action is then chosen based on the state (S → A).
 If each internal state includes all information relevant to decision making, the state space is Markovian.
States and Memory: Game Theory
26

If each state includes the information about the percepts and actions that led to it, the state space has perfect recall.
Perfect Information = Perfect Recall + Full Observability + Deterministic Actions.


Model-based reflex agents
27
 Know how the world evolves
 An overtaking car gets closer from behind
 Know how the agent’s actions affect the world
 Turning the wheel clockwise takes you right
 Model-based agents update their state (a minimal sketch follows)
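
A minimal model-based reflex agent sketch (illustrative; the function names are my own, and the "model" here is just a state-update function plus condition–action rules):

```python
# Model-based reflex agent skeleton (illustrative sketch).
def model_based_reflex_agent(update_state, rules):
    state, last_action = None, None

    def agent(percept):
        nonlocal state, last_action
        # The model: how the world evolves and how our actions affect it.
        state = update_state(state, last_action, percept)
        # Pick the first rule whose condition matches the current state.
        action = next(act for cond, act in rules if cond(state))
        last_action = action
        return action

    return agent

# Toy usage: track only the latest percept; brake if the car ahead is stopping.
agent = model_based_reflex_agent(
    update_state=lambda s, a, p: p,
    rules=[(lambda s: s == "brake-lights-ahead", "Brake"),
           (lambda s: True, "Drive")],
)
print(agent("clear-road"), agent("brake-lights-ahead"))   # Drive Brake
```
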


Goal-based agents
28

• Is knowing the state of the environment enough?
– The taxi can go left, right, or straight

• Have a goal
 A destination to get to

• The agent uses knowledge about the goal to guide its actions
 E.g., search, planning (see the sketch below)
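
As one hedged illustration of "using a goal to guide actions": a goal-based agent can search for an action sequence that reaches the goal, e.g. with breadth-first search over a state graph. The road map and names below are toy assumptions of mine; real planners are far more elaborate.

```python
from collections import deque

# Toy goal-based agent: breadth-first search for a plan that reaches the goal.
def plan(start, goal, successors):
    """successors(state) -> iterable of (action, next_state) pairs."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no plan found

# Tiny road map: intersections and the turn that leads between them.
roads = {"A": [("straight", "B"), ("left", "C")],
         "B": [("right", "D")], "C": [("straight", "D")], "D": []}
print(plan("A", "D", lambda s: roads[s]))   # ['straight', 'right']
```
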


Goal-based agents
29

• A reflex agent brakes when it sees brake lights. A goal-based agent reasons:
– Brake light -> the car in front is stopping -> I should stop -> I should apply the brake


Utility-based agents
30

Goals are not always enough
 Many action sequences get the taxi to its destination
 Consider other things: how fast, how safe, ...

A utility function maps a state onto a real number which describes the associated degree of “happiness”, “goodness”, or “success” (see the sketch after this slide).
Where does the utility measure come from?
 Economics: money.
 Biology: number of offspring.
 Your life?
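
A minimal sketch of utility-based action selection (illustrative; it assumes a stochastic transition model given as outcome probabilities, and the numbers below are made up):

```python
# Utility-based agent: pick the action with the highest expected utility (sketch).
def expected_utility(outcomes, utility):
    """outcomes: list of (probability, resulting_state) pairs."""
    return sum(p * utility(s) for p, s in outcomes)

def choose_action(state, actions, transition, utility):
    """transition(state, action) -> list of (probability, next_state) pairs."""
    return max(actions, key=lambda a: expected_utility(transition(state, a), utility))

# Toy numbers: the fast route risks a delay, the safe route is slower but sure.
transition = {"fast": [(0.8, "arrive-early"), (0.2, "stuck-in-traffic")],
              "safe": [(1.0, "arrive-on-time")]}
utility = {"arrive-early": 10, "arrive-on-time": 7, "stuck-in-traffic": 0}.get
print(choose_action("start", ["fast", "safe"],
                    lambda s, a: transition[a], utility))   # fast (EU 8 vs. 7)
```
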
Utility-based agents
31



Learning agents
32

 Performance element: what was previously the whole agent
 Input: percepts from sensors
 Output: actions
 Learning element
 Modifies the performance element.


Learning agents
33

 Critic: tells how the agent is doing
 Input: e.g., checkmate?
 Fixed performance standard
 Problem generator
 Tries to solve the problem differently instead of only optimizing.
 Suggests exploring new actions -> new problems.
(A minimal learning-loop sketch follows.)
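
One way to picture how these pieces fit together, as a rough sketch under my own assumptions (the slides give no code, and the env/agent interfaces below are hypothetical):

```python
# Learning agent loop: performance element acts, critic scores, learner adjusts.
# Assumed interfaces (hypothetical): env.reset()/env.step(action) return percepts;
# performance_element.act(percept) returns an action; critic.evaluate(percept)
# returns feedback; learning_element.update(...) adjusts the performance element;
# problem_generator.maybe_explore(action) occasionally substitutes a new action.
def run_learning_agent(env, performance_element, critic, learning_element,
                       problem_generator, steps=100):
    percept = env.reset()
    for _ in range(steps):
        action = performance_element.act(percept)
        action = problem_generator.maybe_explore(action)   # exploration
        percept = env.step(action)
        feedback = critic.evaluate(percept)                # fixed standard
        learning_element.update(performance_element, feedback)
```
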
Learning agents (taxi driver)
34

 Performance element
 How it currently drives
 The taxi driver makes a quick left turn across 3 lanes
 The critic observes the shocking language from the passenger and other drivers and flags the action as bad
 The learning element tries to modify the performance element for the future
 The problem generator suggests experimenting with the brakes on different road conditions
 Exploration vs. exploitation
 Learning experience can be costly in the short run
 Shocking language from other drivers
 Less tip
 Fewer passengers
The Big Picture: AI for Model-Based Agents
35

[Concept map of the subfields involved: Action, Planning, Decision Theory, Game Theory, Reinforcement Learning, Knowledge, Logic, Probability, Statistics, Machine Learning, Heuristics, Inference, Learning]
The Picture for Reflex-Based Agents
36

[Diagram: Action, Learning, Reinforcement Learning]

• Studied in AI, Cybernetics, Control Theory, Biology, Psychology.


Discussion Question
37

Model-based behaviour has a large overhead.
 Our large brains are very expensive from an evolutionary point of view.
Why would it be worthwhile to base behaviour on a model rather than “hard-code” it?
For what types of organisms, in what types of environments?


Summary
38

 Agents can be described by their PEAS.
 Environments can be described by several key properties: 2^6 = 64 environment types.
 A rational agent maximizes the performance measure for its PEAS.
 The performance measure depends on the agent function.
 The agent program implements the agent function.
 There are 3 main architectures for agent programs.
 In this course we will look at some of the common and useful combinations of environment/agent architecture.
