A.I Lecture 3

The document discusses the structure and types of agents in artificial intelligence, emphasizing the relationship between an agent's architecture and its program. It outlines four basic kinds of agent programs: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents, each with varying levels of complexity and decision-making capabilities. The key challenge in AI is to create programs that enable rational behavior with minimal code, rather than relying on extensive tables of actions.

Artificial Intelligence

Credit Hours: 03
Reference: Artificial Intelligence: A Modern Approach, by
Stuart Russell and Peter Norvig, 2nd Edition

The Structure of Agents
Agent = Architecture + Program
• The job of AI is to design the agent program that implements the
agent function
• Agent function maps percepts to actions
• The program will run on some sort of computing device with
physical sensors and actuators, called the architecture

• The program we choose has to be one that is appropriate for the
architecture. If the program is going to recommend actions like
walk, then the architecture must have legs

Agent Program
• All of the agent programs have the same skeleton: they take the
current percept as input from the sensors and return an action to the
actuators

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences,
                 initially fully specified

  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action

Agent Program
• The above slide shows an agent program that keeps track of the
percept sequence and then uses it to index into a table of actions to
decide what to do
• To build a rational agent, we as a designer must construct a table
that contains the appropriate action for every possible percept
sequence
• The key challenge for AI is to write programs that can produce
rational behavior from a small amount of code, rather than from a
large number of table entries
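The table-driven skeleton above can be sketched in Python. The dictionary-based table and the two-square vacuum-world entries are illustrative assumptions, not part of the original slide:

```python
# A minimal sketch of TABLE-DRIVEN-AGENT; the table contents are
# hypothetical entries for a two-square vacuum world (locations A and B).

def make_table_driven_agent(table):
    percepts = []  # the percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)           # append percept to the end of percepts
        return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

    return agent

vacuum_table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(vacuum_table)
```

Note that the table is indexed by the entire percept sequence, so the number of entries grows exponentially with the sequence length, which is exactly why the key challenge is to get rational behavior from a small amount of code instead.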

Agent Program
There are four basic kinds of agent programs:
1. Simple reflex agents
The simplest kind of agent is the simple reflex agent. These agents
select actions on the basis of the current percept, ignoring the rest
of the percept history
For example, the vacuum agent is a simple reflex agent, because its
decision is based only on the current location and on whether that
location contains dirt

For example: if car-in-front-is-braking then initiate-braking
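The vacuum agent described above can be sketched as a few condition-action rules; the percept format (location, status) is an assumed representation:

```python
# A sketch of the simple reflex vacuum agent: it acts on the
# current percept only, ignoring the percept history.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":   # rule: dirty square -> Suck
        return "Suck"
    elif location == "A":   # otherwise move to the other square
        return "Right"
    else:
        return "Left"
```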

Agent Program
2. Model-based reflex agents
The agent should keep track of the part of the world it can't see now
The agent should maintain internal state that reflects some of the
unobserved aspects of the current state
For the braking problem, the internal state is not too extensive, but for
other driving tasks such as changing lanes, the agent needs to keep
track of where the other cars are
Updating this internal state information requires two kinds of
knowledge to be encoded in the agent program

Agent Program
2. Model-based reflex agents
First, we need some information about how the world evolves
independently of the agent, for example, that an overtaking car
generally will be closer behind than it was a moment ago.
Second, we need some information about how the agent's own
actions affect the world, for example, that when the agent turns the
steering wheel clockwise, the car turns to the right
This knowledge about “how the world works” is called a model of the
world. An agent that uses such a model is called a model-based agent
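A model-based reflex agent can be sketched as a closure over internal state; the `update_state` and `rules` functions for the braking example are hypothetical stand-ins for the two kinds of knowledge described above:

```python
# A sketch of a model-based reflex agent: the internal state is updated
# using the model (how the world evolves, and how the agent's own last
# action affected it) plus the new percept.

def make_model_based_agent(update_state, rules):
    state, last_action = {}, None

    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)
        action = rules(state)   # match a condition-action rule on the state
        last_action = action
        return action

    return agent

# Illustrative braking example:
def update_state(state, last_action, percept):
    new_state = dict(state)
    new_state["car_in_front_braking"] = (percept == "brake-lights-on")
    return new_state

def rules(state):
    return "initiate-braking" if state["car_in_front_braking"] else "keep-driving"

driver = make_model_based_agent(update_state, rules)
```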

Agent Program
3. Goal-based agents
Knowing about the current state of the environment is not always
enough to decide what to do
For example: at a road junction the car can turn left, turn right, or go
straight on. The correct decision depends on where the car wants to go
The agent needs some sort of goal information that describes
situations that are desirable
The agent program can combine this with information about the
results of possible actions in order to choose actions that achieve
the goal
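The junction example can be sketched as goal-based selection; the `result` model and the destination names are illustrative assumptions:

```python
# A sketch of goal-based choice: combine goal information with the
# predicted results of possible actions.

def goal_based_choice(state, actions, result, goal):
    for action in actions:
        if result(state, action) == goal:  # does this action achieve the goal?
            return action
    return None

# Hypothetical outcomes of each turn at the junction:
outcomes = {"left": "cinema", "right": "airport", "straight": "downtown"}
result = lambda state, action: outcomes[action]

turn = goal_based_choice("junction", ["left", "right", "straight"],
                         result, goal="airport")
```

Real goal-based agents generally need search or planning to predict results several actions ahead; this one-step lookahead is the simplest case.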

Agent Program
4. Utility-based agents
Goals alone are not enough to generate high-quality behavior in most
environments. For example, many action sequences will get the car to
its destination (to achieve the goal) but some are quicker, safer, more
reliable, or cheaper than others
Goals just provide a crude binary distinction between "happy" and
"unhappy" states
A more general performance measure should allow a comparison of
different world states according to exactly how happy they would
make the agent
If one world state is preferred to another, then it has higher utility for
the agent. A utility function maps a state onto a real number, which
describes the associated degree of happiness
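The driving example can be sketched with a utility function: both routes below achieve the goal, but the utility function prefers the quicker one. All names and numbers are illustrative assumptions:

```python
# A sketch of utility-based choice: pick the action whose resulting
# state has the highest utility (a real number), rather than just any
# action that achieves the goal.

def best_action(state, actions, result, utility):
    return max(actions, key=lambda action: utility(result(state, action)))

# Hypothetical routes: (reaches destination?, minutes taken)
routes = {"highway": (True, 30), "backroads": (True, 55)}
result = lambda state, action: routes[action]

def utility(outcome):
    arrived, minutes = outcome
    return (100 if arrived else 0) - minutes  # goal achieved, quicker is better

route = best_action("start", ["highway", "backroads"], result, utility)
```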