
Intelligent Agent

Course Code: CSC4226 Course Title: Artificial Intelligence and Expert System

Dept. of Computer Science


Faculty of Science and Technology
Lecture Outline

1. Agents and Environments

2. Good Behavior: The Concept of Rationality

3. The Nature of Environments

4. The Structure of Agents


INTELLIGENT AGENT

Agent: an entity in a program or environment capable of generating action.

An agent uses perception of the environment to make decisions about the actions to take.

The perception capability is usually called a sensor.

The actions can depend on the most recent percept or on the entire history (the percept sequence).
AGENT V/S PROGRAM

Size - an agent is usually smaller than a program.

Purpose - an agent has a specific purpose, while programs are multi-functional.

Persistence - an agent's life span is not entirely dependent on a user launching and quitting it.

Autonomy - an agent doesn't need the user's input to function.


WHAT IS AN AGENT?
ROBOTS AND THEIR APPLICATION
TAXONOMY OF AUTONOMOUS AGENT
AGENT TOPOLOGY
DESIRABLE PROPERTIES OF AGENT
AGENT AND ENVIRONMENT

Key terms: environment, sensor, actuators, percept, percept sequence, agent function, agent program.
AGENT FUNCTION
The agent function is a mathematical function that maps any percept sequence to an action.
The function is implemented as the agent program.
The part of the agent that takes an action is called an actuator.
environment -> sensors -> agent function -> actuators -> environment
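
This loop can be made concrete in a few lines of Python. The sketch below is illustrative only: sense() and act() are hypothetical stand-ins for real sensor and actuator interfaces, and the policy is a trivial placeholder.

# Minimal sketch of the environment -> sensors -> agent function -> actuators loop.
def agent_function(percept_sequence):
    # Map the history of percepts to an action (trivial placeholder policy).
    return "react-to-" + str(percept_sequence[-1])

def run_agent(sense, act, steps=3):
    percept_sequence = []                       # the agent's full percept history
    for _ in range(steps):
        percept_sequence.append(sense())        # environment -> sensors
        act(agent_function(percept_sequence))   # agent function -> actuators

# Toy stand-ins for the sensor and actuator interfaces:
run_agent(sense=lambda: "dirt", act=print)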
VACUUM CLEANING AGENT
GOOD BEHAVIOR:
THE CONCEPT OF RATIONALITY

Rational agent: one that does the right thing; every entry in the table for the agent function is filled out correctly.

What does it mean to do the right thing? We answer by considering the consequences of the agent's behavior.

An agent plunked down in an environment generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence of states is desirable, then the agent has performed well.

Performance measure: evaluates any given sequence of environment states.
RATIONAL AGENT

A rational agent is one that can take the right decision in every situation.

Performance measure: a set of criteria / a test bed for the success of the agent's behavior.

The performance measures should be based on the desired effect of the agent on the environment.
RATIONALITY
The agent's rational behavior depends on:

 the performance measure that defines success
 the agent's prior knowledge of the environment
 the actions that it is capable of performing
 the percept sequence to date.

Definition: for every possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure.
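
Read as code, this definition is an argmax over the available actions. The sketch below is a non-authoritative illustration: expected_performance is a hypothetical, designer-supplied estimate of the performance measure's expected value.

def rational_action(actions, percept_sequence, expected_performance):
    # Pick the action whose estimated performance-measure value is highest,
    # given everything perceived so far (the percept sequence).
    return max(actions, key=lambda a: expected_performance(a, percept_sequence))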
SPECIFYING THE TASK ENVIRONMENT:
PEAS DESCRIPTION (Performance measure, Environment, Actuators, Sensors)
PEAS: Examples
PROPERTIES OF
TASK ENVIRONMENT
 Fully vs. Partially Observable (vs. Unobservable)
 Single-Agent vs. Multi-Agent
 Deterministic vs. Stochastic (vs. Nondeterministic / Uncertain)
 Episodic vs. Sequential
 Static vs. Dynamic
 Discrete vs. Continuous
 Known vs. Unknown

TASK ENVIRONMENT: EXAMPLES
THE STRUCTURE OF AGENTS
Architecture: a computing device plus physical sensors and actuators.

The architecture makes the percepts from the sensors available to the agent program, runs the program, and feeds the program's action choices to the actuators as they are generated.

Agent programs take the current percept as input from the sensors and return an action to the actuators.
AGENT EXAMPLE:
TABLE DRIVEN AGENT
Table-driven agents: the agent function consists of a lookup table of actions to be taken for every possible state of the environment.

If the environment has n variables, each with t possible states, then the table size is t^n; for example, 10 binary variables already require 2^10 = 1024 entries.

This only works for a small number of possible environment states.

Simple reflex agents: decide on the action to take based only on the current percept, not on the history of percepts.

Based on the condition-action rule: (if (condition) action)

Works only if the environment is fully observable.

TABLE-DRIVEN-AGENT
VACUUM CLEANING AGENT: TABLE DRIVEN

percepts = []   # the percept sequence observed so far
table = {}      # maps percept sequences (as tuples) to actions

def lookup(percept_sequence, table):
    # Percept sequences are keyed as tuples, since lists are not hashable.
    return table.get(tuple(percept_sequence))

def table_agent(percept):
    percepts.append(percept)
    return lookup(percepts, table)
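
A hedged usage example for the two-location vacuum world: the location names "A"/"B" and the percept format (location, status) are assumptions for illustration.

# Fill in the table entries for the first step of the vacuum world.
table[(("A", "Dirty"),)] = "Suck"
table[(("A", "Clean"),)] = "Right"
table[(("B", "Dirty"),)] = "Suck"
table[(("B", "Clean"),)] = "Left"

print(table_agent(("A", "Dirty")))   # -> Suck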
LIMITATION OF
TABLE-DRIVEN AGENT
(a) no physical agent in this universe will have the space to store the
table,

(b) the designer would not have time to create the table,

(c) no agent could ever learn all the right table entries from its
experience, and

(d) even if the environment is simple enough to yield a feasible table size, the designer still has no guidance about how to fill in the table entries.
BASIC KINDS OF
AGENT PROGRAMS
1. Simple reflex agents

2. Model-based reflex agents

3. Goal-based agents

4. Utility-based agents.
SIMPLE REFLEX AGENTS
Select actions on the basis of the current percept, ignoring the rest of the percept history.

A condition-action rule is written as:
if car-in-front-is-braking then initiate-braking.

A general and flexible approach is to first build a general-purpose interpreter for condition-action rules and then to create rule sets for specific task environments.

Background information and the current internal state are used in the decision process.

This works only if the environment is fully observable.

What will happen when a vacuum cleaner has poor perception, e.g., no location sensor?
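
A minimal sketch of a simple reflex vacuum agent built from condition-action rules; the percept format (location, status) and the location names are assumptions for illustration.

# Simple reflex vacuum agent: the action depends only on the current percept,
# never on the percept history.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":      # rule: dirty square -> suck
        return "Suck"
    elif location == "A":      # rule: clean at A -> move right
        return "Right"
    else:                      # rule: clean at B -> move left
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck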
MODEL-BASED REFLEX AGENTS
Keep track of the part of the world the agent can't see now [handles partial observability] by maintaining a model of the world.

That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.

Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program:

First, we need some information about how the world evolves independently of the agent [e.g., an overtaking car].

Second, we need some information about how the agent's own actions affect the world [e.g., turning the steering wheel clockwise].

This knowledge about "how the world works", whether implemented in simple Boolean circuits or in complete scientific theories, is called a model of the world.
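
A hedged skeleton of this structure, loosely following the shape of AIMA's model-based reflex agent pseudocode; update_state and rules are illustrative placeholders that would encode the two kinds of knowledge above.

# Skeleton of a model-based reflex agent: the internal state is revised from
# the previous action and the new percept, then matched against the rules.
state = {}           # the agent's best guess of the current world state
last_action = None

def model_based_reflex_agent(percept, update_state, rules):
    global state, last_action
    state = update_state(state, last_action, percept)  # apply the world model
    last_action = rules(state)                         # condition-action match
    return last_action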
GOAL-BASED AGENT

Key activities: searching, planning, and decision making (a minimal sketch follows below).
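
A minimal illustration of the goal-based idea: rather than reacting, the agent searches for an action sequence that reaches a goal state. The toy move graph and breadth-first search below are assumptions for illustration, not the slides' method.

from collections import deque

def plan_to_goal(start, goal, successors):
    # Breadth-first search returning a list of actions that reaches the goal.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # no plan reaches the goal

# Toy world: locations A, B, C in a line.
moves = {"A": [("Right", "B")], "B": [("Left", "A"), ("Right", "C")], "C": [("Left", "B")]}
print(plan_to_goal("A", "C", lambda s: moves[s]))   # -> ['Right', 'Right']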
UTILITY-BASED AGENTS
Goals alone are not enough to generate high-quality behavior in most environments.
An agent's utility function is essentially an internalization of the performance measure, and it gives the agent flexibility and supports learning.

In two kinds of cases, goals are inadequate but a utility-based agent can still make rational decisions:

1. When there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate tradeoff.

2. When there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.
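
A sketch of how such a tradeoff can be computed as expected utility; the actions, probabilities, and utilities below are made-up illustrative numbers, not values from the slides.

# Utility-based choice: weigh each action's probability of success against
# the utility of the goal it pursues, and pick the highest expected utility.
actions = {
    # action: (probability of success, utility of its goal)
    "fast-route": (0.60, 10),   # speed: valuable but risky
    "safe-route": (0.95, 7),    # safety: less valuable but likely
}

def expected_utility(action):
    p, u = actions[action]
    return p * u

best = max(actions, key=expected_utility)
print(best, expected_utility(best))   # -> safe-route 6.65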
LEARNING AGENT
HOW THE COMPONENTS OF
AGENT PROGRAMS WORK
References

1. Chapter 2: Intelligent Agents, pages 34-58, in "Artificial Intelligence: A Modern Approach," by Stuart J. Russell and Peter Norvig.

Books

1. "Artificial Intelligence: A Modern Approach," by Stuart J. Russell and Peter Norvig.
2. "Artificial Intelligence: Structures and Strategies for Complex Problem Solving," by George F. Luger (2002).
3. "Artificial Intelligence: Theory and Practice," by Thomas Dean.
4. "AI: A New Synthesis," by Nils J. Nilsson.
5. "C4.5: Programs for Machine Learning," by J. Ross Quinlan.
6. "Neural Computing: Theory and Practice," by Philip D. Wasserman.
7. "Neural Network Design," by Martin T. Hagan, Howard B. Demuth, and Mark H. Beale.
8. "Practical Genetic Algorithms," by Randy L. Haupt and Sue Ellen Haupt.
9. "Genetic Algorithms in Search, Optimization, and Machine Learning," by David E. Goldberg.
10. "Computational Intelligence: A Logical Approach," by David Poole, Alan Mackworth, and Randy Goebel.
11. "Introduction to Turbo Prolog," by Carl Townsend.
