Artificial Intelligence Slide 2
Intelligent Agents
Agents and environments
• Agent: An agent is anything that can be viewed as:
– perceiving its environment through sensors and
– acting upon that environment through actuators.
Example — the agent function for a two-square vacuum world (squares A and B):
Percept Action
[A, clean] Right
[A, dirty] Suck
[B, clean] Left
[B, dirty] Suck
Well-behaved agents
A rational agent is specified by its PEAS description:
– Performance measure
– Environment
– Actuators
– Sensors
PEAS
What is PEAS for a self-driving car?
• Performance:
• Environment:
• Actuators:
• Sensors:
[Image: The iRobot Roomba 860, ranked among the best robotic vacuum cleaners for hardwood floors.]
PEAS
How about a vacuum cleaner?
• Actuators:
• Sensors:
Agent types
• Four basic kinds of agent programs, in increasing generality of how they act
in an environment:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
• All of which can be generalized into learning agents that can improve
their performance and generate better actions.
Simple reflex agents
• Simple reflex agents select an action based on the current percept
only, ignoring the rest of the percept history.
• Simple but limited.
• Can only work if the environment is fully observable, that is, the
correct action can be chosen based on the current percept alone.
[Figure: Simple reflex agent — sensors perceive the environment; condition-action rules determine what action to do now; actuators act on the environment.]
Vacuum (reflex) agent
[Figure: the two-square vacuum world, squares A and B]
Percept Action
[A, clean] Right
[A, dirty] Suck
[B, clean] Left
[B, dirty] Suck
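The table above can be written directly as a reflex agent program. A minimal sketch (the percept encoding and names are illustrative assumptions):

```python
def reflex_vacuum_agent(percept):
    """Select an action from the current percept only, ignoring history."""
    location, status = percept
    if status == "dirty":
        return "Suck"
    # Clean square: move to the other one.
    return "Right" if location == "A" else "Left"
```

Note that the agent keeps no state at all: every decision is a pure function of the current percept.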
What if the vacuum agent is deprived of its location sensor?
Model-based reflex agents
• Handle partial observability by keeping track of the part of the world the
agent can’t see now.
• Internal state that depends on the percept history (the agent’s best guess).
• A model of the world based on (1) how the world evolves independently
of the agent, and (2) how the agent’s actions affect the world.
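One way to sketch this structure in code — internal state updated from the percept history, then condition-action rules applied to that state rather than to the raw percept (class and rule names here are illustrative assumptions, not a fixed API):

```python
class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.state = {}      # internal state: the agent's best guess
        self.rules = rules   # list of (condition, action) pairs

    def update_state(self, percept):
        # The "model": remember the status of each square we have seen,
        # plus the current location.
        location, status = percept
        self.state[location] = status
        self.state["location"] = location

    def __call__(self, percept):
        self.update_state(percept)
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return "NoOp"

# Example rules for the two-square vacuum world.
rules = [
    (lambda s: s[s["location"]] == "dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = ModelBasedReflexAgent(rules)
```

Because the state persists across calls, the agent can remember which squares it has already cleaned even when it cannot observe them now.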
[Figure: Model-based reflex agent — the internal state, the model of how the world evolves, and what my actions do combine with sensor input to estimate what the world is like now; actuators act on the environment.]
Goal-based agents
• Knowing the current state of the environment is not always enough. The agent
needs some goal information.
• The agent program combines the goal information with the environment
model to choose the actions that achieve that goal.
• Considers the future with “What will happen if I do A?”
• Flexible, as the knowledge supporting the decisions is explicitly
represented and can be modified.
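The “What will happen if I do A?” question can be sketched as a one-step lookahead using a transition model. This toy version for the two-square vacuum world is an illustrative assumption, not the only way to structure a goal-based agent:

```python
GOAL = {"A": "clean", "B": "clean"}   # goal: both squares clean

def result(state, action):
    """Transition model: what the world will be like if I do this action."""
    state = dict(state)
    if action == "Suck":
        state[state["location"]] = "clean"
    elif action == "Right":
        state["location"] = "B"
    elif action == "Left":
        state["location"] = "A"
    return state

def goal_based_agent(state):
    """Prefer any action whose predicted outcome achieves the goal."""
    for action in ("Suck", "Right", "Left"):
        predicted = result(state, action)
        if all(predicted[sq] == GOAL[sq] for sq in GOAL):
            return action
    # Otherwise head toward a square that is still dirty.
    dirty = next(sq for sq in GOAL if state[sq] == "dirty")
    return "Right" if dirty == "B" else "Left"
```

The decision logic never mentions specific percepts; changing the goal dictionary changes the behavior without rewriting the rules.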
[Figure: Goal-based agent — from the state, how the world evolves, and what my actions do, the agent predicts what it will be like if I do action A; the goals then determine what action to do now.]
Utility-based agents
• Sometimes achieving the desired goal is not enough. We may look for a
quicker, safer, or cheaper trip to reach a destination.
• Agent happiness should be taken into consideration. We call it utility.
• A utility function is essentially an internalization of the agent’s
performance measure.
• Because of the uncertainty in the world, a utility-based agent chooses the
action that maximizes the expected utility.
[Figure: Utility-based agent — from the state and what my actions do, the agent predicts what it will be like if I do action A, evaluates the utility of that outcome, and decides what action to do now; sensors and actuators connect it to the environment.]
Learning agents
• Programming agents by hand can be very tedious. “Some
more expeditious method seems desirable.” (Alan Turing, 1950)
• Four conceptual components:
– Learning element: responsible for making improvements.
– Performance element: responsible for selecting external actions.
It is what we have considered as the agent so far.
– Critic: tells how well the agent is doing w.r.t. a fixed performance
standard.
– Problem generator: suggests exploratory actions, allowing the agent to explore.
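The four components above can be sketched as plain functions wired together; all of the concrete behavior here (score-keeping, random exploration) is an illustrative assumption:

```python
import random

def critic(measure, standard):
    """Feedback: how well the agent is doing w.r.t. a fixed standard."""
    return measure - standard          # positive means above standard

def learning_element(knowledge, action, feedback):
    """Make improvements: update the estimated value of the action taken."""
    knowledge[action] = knowledge.get(action, 0) + feedback
    return knowledge

def performance_element(knowledge, actions):
    """Select the external action currently believed to be best."""
    return max(actions, key=lambda a: knowledge.get(a, 0))

def problem_generator(actions):
    """Suggest an exploratory action the agent might not otherwise try."""
    return random.choice(actions)
```

In a full loop, the critic scores each percept, the learning element folds that feedback into the knowledge used by the performance element, and the problem generator occasionally overrides the greedy choice to explore.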
[Figure: Learning agent — the critic compares sensor input against the performance standard and sends feedback to the learning element; the learning element makes changes to the performance element and sets learning goals for the problem generator; the performance element selects the actions sent to the actuators.]
Agent’s organization
a) Atomic representation: each state of the world is a black box that has
no internal structure. E.g., in finding a driving route, each state is a city. AI
algorithms: search, games, Markov decision processes, hidden Markov
models, etc.
Agent’s organization
b) Factored representation: each state has some attribute-value
properties. E.g., GPS location, amount of gas in the tank. AI algorithms:
constraint satisfaction and Bayesian networks.
Agent’s organization
c) Structured representation: relationships between the objects of a
state can be explicitly expressed. AI algorithms: first-order logic,
knowledge-based learning, natural language understanding.
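The same world state can be written down under all three representations; the concrete encodings below are illustrative assumptions, not a standard format:

```python
# a) Atomic: the state is an indivisible label with no internal structure.
atomic_state = "in_Bucharest"

# b) Factored: a fixed set of attribute-value pairs.
factored_state = {"gps": (44.43, 26.10), "fuel": 0.7, "city": "Bucharest"}

# c) Structured: objects plus explicit relations between them.
structured_state = {
    "objects": ["truck1", "crate7", "Bucharest"],
    "relations": [("at", "truck1", "Bucharest"),
                  ("in", "crate7", "truck1")],
}
```

Moving from (a) to (c), the representation becomes more expressive: the atomic label supports only equality tests, the factored form supports per-attribute reasoning, and the structured form can state relations such as "crate7 is in truck1" that the other two cannot express.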
Intelligent agents
• The concept of an intelligent agent is central to AI.