
CHAPTER 2

Intelligent Agents
2.1. Introduction
An agent is anything that can perceive its environment through sensors and act upon that
environment through effectors. An agent can be either human or machine. For instance, a human
agent has eyes, ears, and other organs as sensors, and hands, legs, mouth, and other body parts
as effectors. A robotic agent substitutes cameras and infrared range finders for the sensors and
various motors for the effectors. Similarly, a software agent has encoded bit strings as its
percepts and actions.
2.2. Agents and Environments
A rational agent is an agent that does the right thing. The right action is the one that will cause
the agent to be most successful.
2.3. Acting of Intelligent Agents (Rationality)

Intelligent agents must be able to set goals and achieve them. How successful an agent is can be
determined by its performance measure. However, there is no one fixed measure suitable for all
agents. To understand how to establish a standard of what it means to be successful in an
environment and use it to measure the performance of agents, consider the following example.

For example, consider the case of an agent that is supposed to vacuum a dirty floor. An acceptable
performance measure could include:
 the amount of dirt cleaned up in a single eight-hour shift,
 the amount of electricity consumed and the amount of noise generated as well,
 the time consumed.
We need to be careful to distinguish between rationality and omniscience. An omniscient agent
knows the actual outcome of its actions and can act accordingly; but omniscience is impossible
in reality. Rationality is concerned with expected success given what has been perceived.
In summary, what is rational at any given time depends on four things:
 The performance measure that defines degree of success.
 Everything that the agent has perceived so far.
 What the agent knows about the environment.
 The actions that the agent can perform.
This leads to a definition of an ideal rational agent: For each possible percept sequence, an
ideal rational agent should do whatever action is expected to maximize its performance
measure, based on the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
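As a rough illustration of this definition, the Python sketch below simply picks the action with the highest expected performance given the percept sequence so far. The scoring table standing in for built-in knowledge and the vacuum-world percepts are assumptions made up for the example, not part of the chapter.

    # A minimal sketch of the ideal rational agent definition above. The
    # expected-performance table is a made-up stand-in for the agent's
    # built-in knowledge; a real agent would compute this from a model.

    def rational_action(actions, percept_sequence, knowledge):
        # Choose the action expected to maximise the performance measure,
        # given the evidence provided by the percept sequence so far.
        return max(actions, key=lambda a: knowledge.get((percept_sequence, a), 0.0))

    # Illustrative built-in knowledge for a vacuum agent (expected scores).
    knowledge = {((("A", "Dirty"),), "Suck"): 10.0,
                 ((("A", "Dirty"),), "Right"): 0.0}
    print(rational_action(["Suck", "Right"], (("A", "Dirty"),), knowledge))  # -> Suck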
2.4. Structure of Intelligent Agents
An agent's behaviour is the action it performs after any given sequence of percepts. The job
of AI is to design the agent program: a function that implements the agent
mapping from percepts to actions. This program will run on some sort of computing device,
called the architecture.
The program we choose has to be one that the architecture will accept and run. The architecture
might be a computer, or it might include special-purpose hardware for certain tasks, such as
processing camera images or filtering audio input. It might also include software that provides a
degree of separation between the raw computer and the agent program, so that we can program at
a higher level.
In general, the architecture makes the percepts from the sensors available to the program, runs
the program, and feeds the program's action choices to the effectors as they are generated. The
relationship among agents, architectures, and programs can be summed up as follows:

Agent = architecture + program
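The short Python sketch below illustrates this division of labour: the architecture loop gathers percepts, calls the agent program, and passes the chosen action to the effectors. The percept stream and effector stand-ins are assumptions for illustration, not a real robot API.

    # A minimal sketch of the architecture/program split: the architecture
    # feeds percepts to the agent program and routes its actions to the
    # effectors.

    def run(agent_program, get_percept, do_action, steps):
        for _ in range(steps):
            percept = get_percept()          # architecture reads the sensors
            action = agent_program(percept)  # program maps percept -> action
            do_action(action)                # architecture drives the effectors

    percepts = iter([("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")])
    run(lambda p: "Suck" if p[1] == "Dirty" else "Right",
        get_percept=lambda: next(percepts),
        do_action=print,
        steps=3)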

2.5. Agent Types


Any agent has two parts: the agent architecture and the agent program. The architecture is the
hardware and the program is the software. The role of the agent program is to implement the
agent function. The agent function is a mapping from percept histories to actions.

A rational agent function should select the action that is expected to maximise the performance
measure given the available information.
The problem facing the AI system developer is therefore: how can we best implement the agent
program on the available architecture? One answer to this question is to use a lookup-table. A
lookup-table is simply a table which contains every possible percept history as inputs, and best
actions as outputs. The problem with lookup-tables is that for any reasonably complex problem
they would be huge. Therefore we need a more concise implementation of the agent program.
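A minimal sketch of such a table-driven agent is given below, using made-up vacuum-world percepts; it makes the size problem easy to see, since the table needs an entry for every possible percept history.

    # A minimal sketch of a table-driven agent: the whole percept history is
    # the key into a table of best actions. The table entries are illustrative.

    class TableDrivenAgent:
        def __init__(self, table):
            self.table = table       # maps percept histories (tuples) to actions
            self.percepts = []       # percept history so far

        def __call__(self, percept):
            self.percepts.append(percept)
            return self.table.get(tuple(self.percepts), "NoOp")

    table = {(("A", "Dirty"),): "Suck",
             (("A", "Dirty"), ("A", "Clean")): "Right"}
    agent = TableDrivenAgent(table)
    print(agent(("A", "Dirty")))   # -> Suck
    print(agent(("A", "Clean")))   # -> Right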
There are five types of agent program:
2.5.1. Simple reflex agents
The simplest type is the simple reflex agent. These agents use a set of condition-action rules
that specify which action to choose for each given percept. They use only the current percept,
and so have no memory of past percepts. The rules are of the form "if this is the percept then
this is the best action".
Although many more complex agents may contain some condition-action rules, an agent that is only
a simple reflex agent is clearly limited in the type of rational behaviour it can produce. In
particular, such agents cannot base decisions on things they cannot directly perceive, i.e. they
have no model of the state of the world.
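A minimal sketch of a simple reflex agent for the vacuum-cleaner example is given below; the rules and the (location, status) percept format are assumptions made for illustration.

    # A minimal sketch of a simple reflex agent: condition-action rules are
    # matched against the current percept only, with no memory of the past.

    RULES = [
        (lambda p: p[1] == "Dirty", "Suck"),   # "if dirty then suck"
        (lambda p: p[0] == "A", "Right"),      # "if at square A then move right"
        (lambda p: p[0] == "B", "Left"),       # "if at square B then move left"
    ]

    def simple_reflex_agent(percept):
        for condition, action in RULES:
            if condition(percept):             # first matching rule wins
                return action
        return "NoOp"

    print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
    print(simple_reflex_agent(("B", "Clean")))  # -> Left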
2.5.2. Model-based reflex agent
A more complex type of agent is the model-based agent. Model-based agents maintain an internal
model of the world, which is updated by percepts as they are received. In addition, they have
built-in (prior) knowledge of how the world tends to evolve. For example, a taxi-driving system
may contain the knowledge that, even when it does not perceive them, other cars tend to keep
moving in roughly the same direction and at roughly the same speed.
Model-based reflex agents also contain knowledge about how their actions affect the state of the
world. For example, the taxi-driving system may contain the knowledge that turning the steering
wheel changes the car's direction. This built-in knowledge, combined with new percepts, enables
such agents to update their model of the state of the world and thus to make better decisions.
However, model-based agents are still basically reflex agents: they use a set of condition-action
rules to determine their behaviour. This makes it difficult for them to plan to achieve
longer-term goals, so they live in the present only and do not think about the future.
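The sketch below illustrates the same idea in Python for the vacuum world; the model-update and rule functions stand in for built-in knowledge of how the world evolves and of what the agent's actions do, and are assumptions for illustration only.

    # A minimal sketch of a model-based reflex agent: an internal model of the
    # world is updated from each percept, and the condition-action rules are
    # applied to the modelled state rather than to the raw percept.

    class ModelBasedReflexAgent:
        def __init__(self):
            self.model = {}                    # internal picture of the world

        def update_model(self, percept):
            location, status = percept
            self.model[location] = status      # what was just perceived is current
            self.model["at"] = location        # unseen squares keep their last value

        def choose_action(self):
            if self.model.get(self.model["at"]) == "Dirty":
                return "Suck"
            return "Right" if self.model["at"] == "A" else "Left"

        def __call__(self, percept):
            self.update_model(percept)
            return self.choose_action()

    agent = ModelBasedReflexAgent()
    print(agent(("A", "Dirty")))  # -> Suck
    print(agent(("A", "Clean")))  # -> Right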
2.5.3. Goal-based agent
Knowing about the current state of the environment is not always enough to decide what to do.
For example, at a road junction the taxi can turn left, turn right, or go straight on. Here the
right decision depends on where the taxi is trying to get to. In other words, as well as a current
state description, the agent needs some sort of goal information, which describes situations that
are desirable, such as being at the passenger's destination. The agent program can combine this
with information about the results of possible actions (the same information as was used to update
internal state in the reflex agent) in order to choose actions that achieve the goal.
Search and planning are the subfields of AI devoted to finding action sequences that achieve the
agent's goals. Goal-based agents are the same as model-based agents, except that they contain an
explicit statement of the agent's goals, which are used to choose the best action at any given
time.
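As a rough illustration, the sketch below chooses an action whose predicted result satisfies an explicit goal test; the junction, the transition table, and the goal are invented for the example.

    # A minimal sketch of goal-based action selection: predict the result of
    # each action with a model, then keep the one that satisfies the goal.

    def goal_based_action(state, actions, result, goal_test):
        for action in actions:
            if goal_test(result(state, action)):   # predict, then test the goal
                return action
        return None   # no single action reaches the goal; search/planning needed

    # Taxi at a junction: which turn leads towards the passenger's destination?
    transitions = {("junction", "left"): "market",
                   ("junction", "right"): "destination",
                   ("junction", "straight"): "bridge"}
    print(goal_based_action("junction", ["left", "right", "straight"],
                            result=lambda s, a: transitions[(s, a)],
                            goal_test=lambda s: s == "destination"))  # -> right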
2.5.4. Utility-based agent
Goals alone are not really enough to generate high-quality behavior. For example, there are many
action sequences that will get the taxi to its destination, thereby achieving the goal, but some are
quicker, safer, more reliable, or cheaper than others. Goals just provide a distinction between
"happy" and "unhappy" states, whereas a more general performance measure should allow a
comparison of different world states (or sequences of states) according to exactly how happy
they would make the agent if they could be achieved.

Because "happy" does not sound very scientific, the customary terminology is to say that if one
world state is preferred to another, then it has higher utility for the agent. Utility is therefore a
function that maps a state onto a real number, which describes the associated degree of
happiness. A complete specification of the utility function allows rational decisions in two kinds
of cases where goals have trouble.
First, when there are conflicting goals, only some of which can be achieved (for example, speed
and safety) the utility function specifies the appropriate trade-off.
Second, when there are several goals that the agent can aim for, none of which can be achieved
with certainty, utility provides a way in which the likelihood of success can be weighed up
against the importance of the goals.
 Note that any rational agent can be described as possessing a utility function. An agent
that possesses an explicit utility function can therefore make rational decisions, but it may
have to compare the utilities achieved by different courses of action.
 The word "utility" refers to "the quality of being useful"
For example, if the taxi-driving system's goal is to drive from Nekemte to Finfine, it could
achieve this goal by driving first to Asossa, then to Mandi, then to Gimbi, then to Nekemte and
then on to Finfine. Clearly there can be many actions that lead to a goal being achieved, but some
are better than others. Utility-based agents deal with this by assigning a utility to each state
of the world. This utility defines how "happy" the agent will be in that state. For example, the
utility could state that the agent will not be happy if it does not get to Finfine within one day.
In fact, goal-based agents implicitly contain a utility function (e.g. "getting to Finfine will
make me happy"), but the fact that it is implicit makes it difficult to define more complex
"desires". Explicitly stating the utility function also makes it easier to define the desired
behaviour of utility-based agents.
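A minimal sketch of this utility-based comparison is given below; the candidate routes and the utility weighting (a bonus for reaching Finfine minus travel time) are assumptions invented for the example.

    # A minimal sketch of utility-based choice between routes to Finfine: each
    # route is scored by a utility function and the best-scoring route is kept.

    def utility(route):
        hours = sum(leg_hours for _, leg_hours in route)
        reaches_goal = route[-1][0] == "Finfine"
        return (100 if reaches_goal else 0) - hours   # prefer quick, successful trips

    routes = {
        "direct":     [("Gimbi", 2), ("Finfine", 6)],
        "roundabout": [("Asossa", 5), ("Mandi", 3), ("Gimbi", 2),
                       ("Nekemte", 2), ("Finfine", 6)],
    }
    best = max(routes, key=lambda name: utility(routes[name]))
    print(best, utility(routes[best]))   # -> direct 92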

2.5.5. Learning agent
Learning agents are not really an alternative agent type to those described above: all of the
above types can be learning agents. Here we define the structure that learning agents have. In
the diagram of a learning agent, the box labelled Performance Element can be replaced with any of
the four types described above.

The Learning Element is responsible for suggesting improvements to any part of the
performance element. For example, it could suggest an improved condition-action rule for a
simple reflex agent, or it could suggest a modification to the knowledge of how the world evolves
in a model-based agent.
The input to the learning element comes from the Critic. The critic analyses incoming
perceptions and decides if the actions of the agent have been good or not. To decide this it will
use an external performance standard. For example, in a chess-playing program, the Critic will
receive a percept and notice that the opponent has been check-mated. It is the performance
standard that tells it that this is a good thing.

The Problem Generator is responsible for suggesting actions that will result in new knowledge
about the world being acquired. These actions may not lead to any goals being achieved in the
short term, but they may result in perceptions that the learning element can use to update the
performance element. For example, the taxi-driving system may suggest testing the brakes in wet
conditions, so that the part of the performance element that deals with “what my actions do” can
be updated.
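The sketch below puts these four components together for a toy vacuum world; the critic's scoring, the learning rule, and the exploration rate are simplified assumptions, not the chapter's prescription.

    # A minimal sketch of a learning agent: the critic scores the latest
    # percept against a performance standard, the learning element adjusts the
    # performance element (a table of action values), and the problem
    # generator occasionally proposes an exploratory action.

    import random

    class LearningAgent:
        def __init__(self, actions):
            self.actions = actions
            self.values = {}          # performance element: (percept, action) -> value

        def critic(self, percept):
            return 1.0 if percept[1] == "Clean" else -1.0   # performance standard

        def learning_element(self, percept, action, score):
            key = (percept, action)
            self.values[key] = self.values.get(key, 0.0) + 0.5 * score

        def problem_generator(self):
            return random.choice(self.actions)              # try something new

        def performance_element(self, percept):
            return max(self.actions, key=lambda a: self.values.get((percept, a), 0.0))

        def __call__(self, percept, last_percept=None, last_action=None):
            if last_action is not None:                     # judge the last action
                self.learning_element(last_percept, last_action, self.critic(percept))
            if random.random() < 0.1:                       # explore occasionally
                return self.problem_generator()
            return self.performance_element(percept)

    agent = LearningAgent(["Suck", "Right", "Left"])
    first = agent(("A", "Dirty"))
    second = agent(("A", "Clean"), last_percept=("A", "Dirty"), last_action=first)
    print(first, second)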

2.5.6. Important Concepts and Terms
The agent performs actions on the environment, which in turn provides percepts to the agent.
In AI, environments are classified into different types along the following dimensions.
a. Accessible vs. inaccessible.
If an agent's sensory device gives it access to the complete state of the environment, then we say
that the environment is accessible to that agent. An environment is effectively accessible if the
sensors detect all aspects that are relevant to the choice of action. An accessible environment is
convenient because the agent need not maintain any internal state to keep track of the world.
b. Deterministic vs. nondeterministic.
If the next state of the environment is completely determined by the current state and the actions
selected by the agents, then we say the environment is deterministic. In principle, an agent need
not worry about uncertainty in an accessible, deterministic environment. If the environment is
inaccessible, however, then it may appear to be nondeterministic. This is particularly true if the
environment is complex, making it hard to keep track of all the inaccessible aspects. Thus, it is
often better to think of an environment as deterministic or nondeterministic from the point of
view of the agent.
c. Episodic vs. non-episodic.
In an episodic environment, the agent's experience is divided into episodes. Each episode
consists of the agent perceiving and then acting. The quality of its action depends just on the
episode itself, because subsequent episodes do not depend on what actions occur in previous
episodes. Episodic environments are much simpler because the agent does not need to think
ahead.
d. Static vs. dynamic.
If the environment can change while an agent is deliberating, then we say the environment is
dynamic for that agent; otherwise it is static. Static environments are easy to deal with because
the agent need not keep looking at the world while it is deciding on an action, nor need it worry
about the passage of time. If the environment does not change with the passage of time but the
agent's performance score does, then we say the environment is semi dynamic.
e. Discrete vs. continuous.

If there are a limited number of distinct, clearly defined percepts and actions, we say that the
environment is discrete. Chess is discrete: there are a fixed number of possible moves on each
turn. Taxi driving is continuous: the speed and location of the taxi and of the other vehicles
sweep through a range of continuous values.
Different environment types require different agent programs to deal with them effectively. It
will turn out, as you might expect, that the hardest case is inaccessible, non-episodic, dynamic,
and continuous. It also turns out that most real situations are so complex that whether they are
really deterministic is a controversial point; for practical purposes, they must be treated as
nondeterministic.
Examples of environments and their characteristics
Environment               Accessible   Deterministic   Episodic   Static   Discrete
Chess with a clock        Yes          Yes             No         Semi     Yes
Chess without a clock     Yes          Yes             No         Yes      Yes
Taxi driving              No           No              No         No       No
Image-analysis system
AI learning class

Task Environments
Any agent must operate within a specific environment. Environment is the problem for which the
agent is the solution. Therefore it is essential to clearly define the environment (i.e. the problem)
before starting to design the agent (i.e. the solution). PEAS is the key to defining the
environment: it is an acronym that summarises the four components of any environment that it is
necessary to define:
 Performance: A measure of how good the behaviour of agents operating in the
environment is.
 Environment: What things are considered to be a part of the environment and what things
are excluded?
 Actuators: How can an agent perform actions in the environment?
 Sensors: How can the agent perceive the environment?
Example 1: Taxi-Driving System:
The aim of the taxi-driving system is to drive a car on public roads without colliding with other
road users. The system should be able to pick up customers and take them to their destinations
and receive payment. Accordingly, the PEAS of this taxi-driving system can be defined as:
 Performance: Safe, fast, legal, comfortable trip, maximise profits.

 Environment: Roads, other traffic, pedestrians, customers.
 Actuators: Steering wheel, accelerator, brakes, indicators, horn.
 Sensors: Cameras, sonar, speedometer, GPS, engine sensors, keyboard.
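As an illustration of turning such a description into a concrete artefact before design begins, the sketch below records the taxi PEAS description above in a simple data structure; the class itself is just an assumed convention, not part of PEAS.

    # A minimal sketch: a PEAS description captured as a plain data structure.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PEAS:
        performance: List[str]
        environment: List[str]
        actuators: List[str]
        sensors: List[str]

    taxi = PEAS(
        performance=["safe", "fast", "legal", "comfortable trip", "maximise profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering wheel", "accelerator", "brakes", "indicators", "horn"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "engine sensors", "keyboard"],
    )
    print(taxi.actuators)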

Example 2: Medical Diagnosis System:


The aim of the medical diagnosis system is to interview patients through a keyboard/monitor
interface, receive test results, and recommend a treatment plan based on the information it
receives.
 Performance: Healthy patient, minimise costs, avoid lawsuits.
 Environment: Patient, hospital, staff.
 Actuators: Screen display (questions, tests, diagnoses, treatments, referrals).
 Sensors: Keyboard (entry of symptoms, findings, patient’s answers).
Example 3: Part Picking Robot:
This system consists of a video camera that must classify factory parts on a conveyor belt, and a
robotic arm that needs to pick up the parts and place them into an appropriate bin.
 Performance: Percentage of parts in correct bins.
 Environment: Conveyor belt with parts, bins.
 Actuators: Jointed arm and hand.
 Sensors: Camera, joint angle sensors.
It is important to remember that defining the environment using PEAS is the first stage in AI
system design.

The PAGE (Percepts, Actions, Goals, Environment) Description
An alternative to PEAS is the PAGE description, which characterises an agent's task in terms of
its Percepts, Actions, Goals, and Environment. The goals do not necessarily have to be represented
within the agent; they simply describe the performance measure by which the agent design will be
judged.

The most famous artificial environment is the Turing Test environment, in which the whole point
is that real and artificial agents are on equal footing, but the environment is challenging enough
that it is very difficult for a software agent to do as well as a human.
