Artificial Intelligence

Intelligent Agents
Agents and environments
• Agent: An agent is anything that can be viewed as:
– perceiving its environment through sensors and
– acting upon that environment through actuators.

• An agent program runs in cycles of: (1) perceive, (2) think, and (3) act.

• Agent = Architecture + Program
Agents and environments
• Human agent:
– Sensors: eyes, ears, and other organs.
– Actuators: hands, legs, mouth, and other body parts.
• Robotic agent:
– Sensors: Cameras and infrared range finders.
– Actuators: Various motors.
• Agents everywhere!
– Thermostat
– Cell phone
– Vacuum cleaner
– Robot
– Alexa Echo
– Self-driving car
– Human
– etc.
Vacuum cleaner
[Figure: two-location vacuum world, squares A and B]

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
• Agent function: mapping from percepts to actions.

Percept Action
[A, Clean] Right
[A, Dirty] Suck
[B, Clean] Left
[B, Dirty] Suck
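A minimal sketch (Python, illustrative names): the agent function above is small enough to implement as a literal lookup table.

# The agent function as a literal percept-to-action table.
AGENT_TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_driven_vacuum_agent(location, status):
    """Look up the action for the current percept [location, status]."""
    return AGENT_TABLE[(location, status)]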
Well-behaved agents
Rational Agent:

“For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.”
Rationality
• Rationality is relative to a performance measure.

• Judge rationality based on:
– The performance measure that defines the criterion of success.
– The agent’s prior knowledge of the environment.
– The possible actions that the agent can perform.
– The agent’s percept sequence to date.
PEAS
• When we define a rational agent, we group these properties under PEAS, the problem specification for the task environment.
• The rational agent we want to design for this task environment is the solution.

• PEAS stands for:

– Performance

– Environment

– Actuators

– Sensors
PEAS
What is PEAS for a self-driving car?

• Performance: Safety, time, legal driving, comfort.
• Environment: Roads, other cars, pedestrians, road signs.
• Actuators: Steering, accelerator, brake, signal, horn.
• Sensors: Camera, sonar, GPS, speedometer, odometer, accelerometer, engine sensors, keyboard.
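A minimal sketch (Python, names are illustrative): a PEAS specification is just four lists, so it can be recorded as a simple data structure.

from dataclasses import dataclass

@dataclass
class PEAS:
    """Task-environment specification: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The self-driving car example from this slide:
self_driving_car = PEAS(
    performance=["safety", "time", "legal driving", "comfort"],
    environment=["roads", "other cars", "pedestrians", "road signs"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "sonar", "GPS", "speedometer", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)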
PEAS
How about a vacuum cleaner?

iRobot Roomba series

The iRobot Roomba 860 is ranked among the best robotic vacuum cleaners for
hardwood floors.
PEAS
How about a vacuum cleaner?

iRobot Roomba series

• Performance: cleanness, efficiency (distance traveled to clean), battery life, security.
• Environment: room, table, wood floor, carpet, different obstacles.
• Actuators: wheels, different brushes, vacuum extractor.
• Sensors: camera, dirt detection sensor, cliff sensor, bump sensors, infrared wall sensors.
Environment types
• Fully observable (vs. partially observable): An agent’s sensors give it access to the complete state of the environment at each point in time.

• Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)

• Episodic (vs. sequential): The agent’s experience is divided into atomic “episodes” (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
Environment types
• Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent’s performance score does.)

• Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions. E.g., checkers is an example of a discrete environment, while a self-driving car evolves in a continuous one.

• Single agent (vs. multi-agent): An agent operating by itself in an environment.

• Known (vs. unknown): The designer of the agent may or may not have knowledge about the environment’s makeup. If the environment is unknown, the agent will need to learn how it works in order to decide.
Environment types
[Figure: table classifying example task environments along the dimensions above]
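As a stand-in for the table, a minimal sketch (illustrative values, following the standard textbook analysis of the taxi-driving task) classifying the self-driving car environment along the dimensions above:

# Illustrative classification of the self-driving car task environment.
self_driving_car_env = {
    "observable": "partially",  # sensors never capture the complete road state
    "deterministic": False,     # stochastic: other drivers, weather
    "episodic": False,          # sequential: actions affect future states
    "static": False,            # dynamic: the world changes while deliberating
    "discrete": False,          # continuous percepts and actions
    "single_agent": False,      # multi-agent: other cars, pedestrians
}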
Agent types
• Four basic types in order of increasing generality:

– Simple reflex agents

– Model-based reflex agents

– Goal-based agents

– Utility-based agents

• All of which can be generalized into learning agents that can improve
their performance and generate better actions.
Simple reflex agents
• Simple reflex agents select an action based on the current state only, ignoring the percept history.
• Simple but limited.
• Can only work if the environment is fully observable, that is, the correct action can be chosen based on the current percept only.

[Diagram: simple reflex agent — sensors report “what the world is like now”; condition-action rules pick “what action I should do now”; actuators act on the environment]
Vacuum (reflex) agent
[Figure: two-location vacuum world, squares A and B]

• Let’s write the algorithm for the Vacuum cleaner...


• Percepts: location and content (location sensor, dirt sensor).
• Actions: Left, Right, Suck, NoOp

Percept Action
[A, Clean] Right
[A, Dirty] Suck
[B, Clean] Left
[B, Dirty] Suck

def vacuum_agent(location, status):
    # Condition-action rules: clean the current square first,
    # otherwise move toward the other square.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"
What if the vacuum agent is deprived of its location sensor?
Model-based reflex agents
• Handle partial observability by keeping track of the part of the world the agent can’t see now.
• Internal state depending on the percept history (best guess).
• Model of the world based on (1) how the world evolves independently from the agent, and (2) how the agent’s actions affect the world (sketched below the diagram).

[Diagram: model-based reflex agent — internal state, “how the world evolves”, and “what my actions do” combine with sensor input to track “what the world is like now”; condition-action rules then pick “what action I should do now”]
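Returning to the earlier question (a vacuum agent deprived of its location sensor): a minimal sketch, assuming the two-square world above, where internal state substitutes for the missing percept.

class ModelBasedVacuumAgent:
    """Reflex vacuum agent with internal state: it tracks its location
    by modeling what its own actions do, so it needs only a dirt sensor."""

    def __init__(self, start="A"):
        self.location = start  # best guess, updated by the action model

    def step(self, status):
        # Condition-action rules using the believed location.
        if status == "Dirty":
            return "Suck"
        action = "Right" if self.location == "A" else "Left"
        # Model of "what my actions do": moving flips the location.
        self.location = "B" if action == "Right" else "A"
        return action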
Goal-based agents
• Knowing the current state of the environment is not enough. The agent needs some goal information.
• The agent program combines the goal information with the environment model to choose the actions that achieve that goal.
• Considers the future with “What will happen if I do A?” (sketched below the diagram).
• Flexible, as the knowledge supporting the decisions is explicitly represented and can be modified.

[Diagram: goal-based agent — the internal state and world model predict “what it will be like if I do action A”; goals determine “what action I should do now”]
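A minimal sketch (hypothetical names; a one-step lookahead rather than the full search algorithms mentioned later):

def goal_based_action(state, actions, transition_model, goal_test):
    """Pick an action by asking 'what will happen if I do A?' and
    checking whether the predicted state satisfies the goal."""
    for action in actions:
        predicted = transition_model(state, action)  # what it will be like
        if goal_test(predicted):
            return action
    return None  # no single action reaches the goal; a real agent would search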
Utility-based agents
• Sometimes achieving the desired goal is not enough. We may look for a quicker, safer, cheaper trip to reach a destination.
• Agent happiness should be taken into consideration. We call it utility.
• A utility function is the agent’s performance measure.
• Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility (sketched below the diagram).
[Diagram: utility-based agent — the world model predicts “what it will be like if I do action A”; the utility function scores “how happy I will be in such a state”; the agent then picks “what action I should do now”]
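A minimal sketch (hypothetical names): expected utility weights each possible outcome of an action by its probability.

def expected_utility(action, state, outcomes, utility):
    """Sum of utility(next_state) weighted by its probability.
    outcomes(state, action) yields (probability, next_state) pairs."""
    return sum(p * utility(s) for p, s in outcomes(state, action))

def utility_based_action(state, actions, outcomes, utility):
    # Choose the action that maximizes expected utility.
    return max(actions, key=lambda a: expected_utility(a, state, outcomes, utility))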
Learning agents
• Programming agents by hand can be very tedious. “Some more expeditious method seems desirable.” (Alan Turing, 1950)
• Four conceptual components (sketched after the diagram):
– Learning element: responsible for making improvements.
– Performance element: responsible for selecting external actions. It is what we have considered as the agent so far.
– Critic: evaluates how well the agent is doing w.r.t. a fixed performance standard.
– Problem generator: allows the agent to explore.
[Diagram: learning agent — the critic compares sensor feedback against a performance standard and sends feedback to the learning element, which makes changes to the performance element; the learning element passes learning goals to the problem generator, which proposes exploratory actions; the performance element drives the actuators]
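A minimal structural sketch (hypothetical interfaces, not a specific learning algorithm) of how the four components fit together:

class LearningAgent:
    """Skeleton wiring of the four components; each piece is a stand-in callable."""

    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # makes improvements
        self.critic = critic                            # scores behavior vs. a standard
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                 # how well are we doing?
        self.learning_element(self.performance_element, feedback)  # improve the rules
        exploratory = self.problem_generator(percept)   # maybe try something new
        return exploratory or self.performance_element(percept)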
Agent’s organization
a) Atomic representation: Each state of the world is a black box that has no internal structure. E.g., finding a driving route, where each state is a city. AI algorithms: search, games, Markov decision processes, hidden Markov models, etc.
Agent’s organization
b) Factored representation: Each state has some attribute-value properties. E.g., GPS location, amount of gas in the tank. AI algorithms: constraint satisfaction and Bayesian networks.
Agent’s organization
c) Structured representation: Relationships between the objects of a state can be explicitly expressed. AI algorithms: first-order logic, knowledge-based learning, natural language understanding.
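A minimal sketch (illustrative, reusing the driving examples above) of the same state in the three representations:

# Atomic: the state is an indivisible label.
state_atomic = "Boston"

# Factored: the state is a set of attribute-value pairs.
state_factored = {"gps": (42.36, -71.06), "gas_tank": 0.7}

# Structured: objects and explicit relations between them.
state_structured = [("city", "Boston"), ("connected", "Boston", "Providence")]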
Intelligent agents
• The concept of intelligent agent is central in AI.

• AI aims to design intelligent agents that are useful, reactive, autonomous, and even social and pro-active.

• An agent perceives its environment through percepts and acts through actuators.

• A performance measure evaluates the behavior of the agent.

• An agent that acts to maximize its expected performance measure is called a rational agent.

• PEAS: A task environment specification that includes Performance measure, Environment, Actuators and Sensors.

Agent = Architecture + Program


Intelligent agents
• Four types of agents: reflex agents, model-based agents, goal-based agents, and utility-based agents.
• Agents can improve their performance through learning.
• This is a high-level presentation of agent programs.
• State representations: atomic, factored, structured, in increasing expressive power.

Credit: Courtesy Percy Liang
