Lecture-01: Introduction

TAC7011: An Agent Approach to Computational Intelligence
Books:
1. Michael Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons (2002).
2. Ethem Alpaydin, Introduction to Machine Learning, MIT Press (2004).
3. Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Second Edition, Prentice Hall (2003).
Intelligent Agents
• One of the central concerns of Artificial Intelligence is
the design and implementation of
intelligent/autonomous agents
– active entities that perceive their environment, reason, plan
and execute appropriate actions to achieve their goals (in
service of their users),
– react to external changes, and have social abilities that allow
them to communicate and interact with other agents and
users.
• These may be robots or intelligent software agents that
"live" on the Internet. Agent-based approaches are good
for building open systems where components can come
and go, and work together in flexible ways.
Intelligent Agents
• “An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.”
• Usually the agent has only partial control over its environment – actions might not have their expected consequences.
• Examples: control systems, software daemons.
Agents and Environments

[Diagram: the agent receives percepts from the environment through its sensors and acts on the environment through its actuators; the “?” marks the agent program to be designed.]

• Agents include humans, robots, softbots, thermostats…


• The agent function maps percept sequences to actions, f: P* → A
• The agent program runs on the physical architecture to
produce f
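A minimal sketch of this separation in Python (the Environment class, its sense()/execute() interface and the echo policy are illustrative assumptions, not part of any particular framework): the agent program maintains the percept sequence and implements the agent function f: P* → A inside a sense-act loop.

    # Sketch: an agent program implementing an agent function f: P* -> A.
    # The Environment class here is only a stand-in so the example runs.
    class Environment:
        def __init__(self):
            self.t = 0
        def sense(self):                     # sensors -> percept
            self.t += 1
            return f"percept-{self.t}"
        def execute(self, action):           # actuators
            print("environment received:", action)

    class Agent:
        def __init__(self):
            self.percepts = []               # percept sequence to date (P*)
        def program(self, percept):          # the agent program
            self.percepts.append(percept)
            return f"act-on({percept})"      # trivial policy: react to the latest percept

    env, agent = Environment(), Agent()
    for _ in range(3):                       # sense-act loop
        env.execute(agent.program(env.sense()))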
Rational Agents
• A rational agent “does the right thing”
• Performance measure – success criteria
– Evaluates a sequence of environment states
• A rational agent chooses whichever action maximizes
the expected value of its performance measure given the
percept sequence to date
– Need to know performance measure, environment, possible
actions, percept sequence

• Rationality ≠ Omniscience, Perfection, Success


• Rationality ⇒ exploration, learning, autonomy
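One way to make the definition concrete, sketched in Python; the outcome model and the performance measure below are hypothetical placeholders for whatever knowledge the designer supplies.

    # Sketch: choose the action that maximises the expected value of the
    # performance measure, given the percept sequence to date.
    def rational_action(actions, percepts, outcome_model, performance):
        def expected_value(action):
            # outcome_model returns (state, probability) pairs for this action
            return sum(prob * performance(state)
                       for state, prob in outcome_model(percepts, action))
        return max(actions, key=expected_value)

    # Toy usage: a vacuum agent deciding whether to suck or move.
    outcomes = lambda percepts, a: ([("clean", 0.9), ("dirty", 0.1)] if a == "suck"
                                    else [("dirty", 1.0)])
    score = lambda state: 1 if state == "clean" else 0
    print(rational_action(["suck", "move"], ["dirt-sensed"], outcomes, score))  # suck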
Lecture Outline
1. Practical Information
2. Definition of an Agent
3. Distributed Artificial Intelligence and Multi-agent
Systems
4. Agent Typology
5. Summary and references
Lecture Plan
1. Week-1: Introduction, Overview and Technology (Chapter-1, MW)
2. Week-2: Intelligent Agents (Chapter-2, MW)
3. Week-3: Deductive Reasoning Agents (Chapter-3, MW)
4. Week-4: Practical Reasoning Agents (Chapter-4, MW)
5. Week-5: Reactive and Hybrid Agents (Chapter-5, MW)
6. Week-6: Java Agent Development Framework (JADE)
7. Week-7: Reasoning in Uncertain Environments (Chapter-13, R&N)
8. Week-8: Probabilistic Reasoning Agents (Chapter-14, R&N)
9. Week-9: Learning From Examples (Chapter-18, R&N)
10. Week-10: Learning From Examples (Chapter-18, R&N)
11. Week-11: Learning Neural Networks I (Chapter-18, R&N)
12. Week-12: Learning Neural Networks II (Chapter-18, R&N)
13. Week-13: Reinforcement Learning (Chapter-20, R&N)
Example 1
”When a space probe makes its long flight from Earth
to outer planets, a ground crew is usually required to
continue to track its progress and decide how to deal
with unexpected eventualities. This is costly and, if
decisions are required quickly, it is simply not
practical. For these reasons, organisations like NASA
are seriously investigating the possibility of making
the probes more autonomous – giving them richer
decision making capabilities and responsibilities.”
Example 2
”Searching the Internet for the answer to a specific
query can be a long and tedious process. So, why not
allow a computer program – an agent – to do the searching for
us? The agent would typically be given a query that
would require synthesizing information from various
different internet information sources.”
Example 3

”After a wet and cold winter, you are in need of a last-minute holiday somewhere warm. After specifying
your requirements to your Personal Digital Assistant
(PDA), it converses with a number of different web
sites which sell services such as flights and hotel
rooms. After hard negotiation on your behalf with a
range of sites, your PDA presents you with a package
holiday.”
Overview 1
• Five ongoing trends have marked the history of
computing:
1. Ubiquity
– The reduction in the cost of computing capability has made it possible to introduce processing power everywhere.
2. Interconnection
– Computer systems are networked into large distributed systems.
3. Intelligence
– The complexity of tasks that can be automated and delegated to computers has grown steadily.
4. Delegation
– Judgements made by computer systems are frequently accepted; even safety-critical tasks, such as piloting aircraft, are delegated to computing systems.
5. Human-orientation
– Use of concepts and metaphors that reflect how we understand the world (e.g. the graphical user interface).
Overview 2
• These trends present major challenges to software developers, e.g.:
– Delegation – the system must act independently.
– Intelligence – the system must act in a way that represents our best interests while interacting with other humans or systems.
⇒ We need systems that can act effectively on our behalf.
• Such systems must have the ability to cooperate and reach agreements with other systems.

⇒ New field: Multi-agent Systems

Overview 3
• An agent is a system that is capable of independent
action on behalf of its user or owner.
• A multi-agent system is one that consists of a number
of agents which interact with one another.
• In order to interact successfully, agents need the ability to cooperate, coordinate and negotiate.
Two Key Problems
1. How do we build agents that are capable of
independent, autonomous action in order to
successfully carry out the tasks that we delegate to
them? (Micro aspects)
2. How do we build agents that are capable of interacting
(cooperating, coordinating, negotiating) with other
agents in order to successfully carry out the tasks we
delegate to them? (Macro aspects)
Fields that inspired agents
• Artificial Intelligence
– Agent intelligence, micro aspects
• Software Engineering
– Agent as an abstraction
• Distributed systems and Computer Networks
– Agent architectures, multi-agent systems,
coordination
There are many definitions of agents – often too narrow
(describing a particular agent in a particular situation) or
too general (describing all types of software).
Definitions of Agents 1
American Heritage Dictionary:
”... One that acts or has the power or authority to act ... or
represent another”
Russell and Norvig:
”An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through effectors.”
Pattie Maes:
”Autonomous Agents are computational systems that inhabit
some complex dynamic environment, sense and act
autonomously in this environment, and by doing so realize a set
of goals or tasks for which they are designed”.
Definitions of Agents 2
IBM:
”Intelligent agents are software entities that carry out
some set of operations on behalf of a user or another
program with some degree of independence or
autonomy, and in doing so, employ some knowledge
or representations of the user’s goals or desires”.
Definitions of Agents 3
• An agent is autonomous: capable of acting independently, exhibiting control over its internal state.
⇒ An agent is a computer system capable of autonomous action in some environment.

[Diagram: the agent as a system embedded in its environment, receiving input from it and producing output that affects it.]
Definition of Agent 4
• Examples of trivial/non-interesting agents are:
– Thermostat (a minimal sketch follows this list), UNIX daemons, e.g. xbiff (an X Windows program)
– Most software daemons, background processes in a Unix OS, which monitor a software environment and perform actions to modify it.
⇒ An intelligent agent is a computer system capable of flexible autonomous action in some environment.
- By flexible we mean:
- Reactive
- Pro-active
- Social
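For contrast with flexible agents, a minimal sketch of the thermostat as a trivial agent: one fixed condition-action rule and nothing else (the threshold values are made up for illustration).

    # Sketch: a thermostat as a trivial agent - a single condition-action rule,
    # with no pro-activeness and no social ability.
    def thermostat(temperature, setpoint=20.0):
        if temperature < setpoint - 1.0:
            return "heating on"
        return "heating off"

    print(thermostat(17.5))   # heating on
    print(thermostat(21.0))   # heating off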
Properties of Agents 1
• Autonomous
– Capable of independent action, without our intervention.

• Reactive
– Maintains an ongoing interaction with its environment and responds to changes that occur in it (in time for the response to be useful).
– In a fixed environment a program could simply execute blindly; reactivity is needed because environments change.

• Pro-active
– Generating and attempting to achieve goals; not driven solely by events; taking the initiative.
– Pro-active = goal-directed behaviour (a sketch interleaving reactive and pro-active behaviour follows this list).

• Social
– The ability to interact with other agents (and possibly humans) via some kind of agent communication language, and perhaps to cooperate with others.
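The sketch below (Python; all helper names are hypothetical) shows one common way of combining these properties: the agent keeps working towards its goal (pro-active) but checks the environment every cycle and deals with any change first (reactive).

    # Sketch: interleaving reactive and pro-active behaviour.
    # goal, environment, react and plan_step are hypothetical interfaces.
    def flexible_agent_loop(environment, goal, plan_step, react):
        while not goal.achieved():
            event = environment.poll_event()   # did anything change?
            if event is not None:
                react(event)                   # reactive: respond in time to be useful
            else:
                plan_step(goal)                # pro-active: take initiative towards the goal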
Properties of Agents 2
• Mentalistic notions, such as beliefs and intentions, are often referred to as properties of strong agents.
• Other properties are:
– Mobility: the ability of an agent to move around a network.
– Veracity: an agent will not knowingly communicate false information.
– Benevolence: agents do not have conflicting goals, and every agent always tries to do what is asked of it.
– Rationality: an agent will act in order to achieve its goals and will not act in such a way as to prevent its goals from being achieved.
Agents and Objects 1
• Are agents just objects by another name?

Objects do it for free…

• Agents do it because they want to!


• Agents do it for money!
Agents and Objects 2
Main differences:
– Agents are autonomous: agents embody a stronger
notion of autonomy than objects, in particular, agents
decide for themselves whether or not to perform an
action.
– Agents are smart: capable of flexible (reactive, pro-active, social) behaviour; standard object models do not have such behaviour.
– Agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of control (a threading sketch follows below).
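A small illustration of the "at least one thread of control per agent" point, using Python's standard threading module; the agent body is just a placeholder loop.

    import threading
    import time

    # Sketch: each agent runs in its own thread of control.
    def agent_body(name):
        for step in range(3):
            print(f"{name}: sensing, deciding and acting (step {step})")
            time.sleep(0.1)                     # placeholder for real sensing/acting

    agents = [threading.Thread(target=agent_body, args=(f"agent-{i}",))
              for i in range(3)]
    for t in agents:
        t.start()
    for t in agents:
        t.join()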
Why agents?
• Today, we have a distributed environment that cannot be
completely specified – open environments.
• Former paradigms, such as OOP, cannot completely
satisfy our needs:
– They were designed for constructing systems in a completely
specified environment - a closed world.
How can we work in an Open Environment?
• By copying human behaviour:
– Perceive the environment
– Affect the environment
– Have a model of behaviour
– Have intentions and motivations to be fulfilled by implementing corresponding goals

[Diagram: the agent in a perceive-act loop with its environment.]
Distributed Artificial Intelligence (DAI)
• DAI is a sub-field of AI
• DAI is concerned with problem solving where agents
solve (sub-) tasks (macro level)

Distributed AI
• Main areas of DAI:
1. Multi-Agent Systems (MAS)
2. Distributed Problem Solving (DPS)

[Diagram: Distributed AI shown at the overlap of Distributed Computing and Artificial Intelligence, with Multi-Agent Systems and Distributed Problem Solving as its two main areas.]
Distributed Artificial Intelligence (DAI)
DAI Concerns
• DAI is concerned with:
– Agent granularity
– Heterogeneity of agents
– Methods of distributing control among agents
– Communication possibilities

• DAI is not concerned with:


– Issues of coordination of concurrent processes at the problem
solving and representational level
– Parallel Computer Architectures, Parallel Programming
Languages or Distributed Operating Systems
DPS and MAS
• DPS (Distributed Problem Solving) considers how the
task of solving a particular problem can be divided
among a number of modules that cooperate in dividing
and sharing knowledge about the problem and its
evolving solution(s).

• MAS is concerned with the behaviour of a collection of autonomous agents aiming to solve a given problem.
Decentralisation
• An important concept in DAI and MAS
– No central control; control is distributed
– Knowledge or information sources may also be
distributed
Multi-agent Systems (MAS)
A multi-agent system contains a number of agents which interact with one another through communication. The agents are able to act in an environment, where each agent will act upon or influence different parts of the environment.
Reference: Wooldridge, An Introduction to Multiagent Systems, p. 105

[Diagram: a multi-agent system – several interacting agents embedded in a shared environment.]
Motivation for MAS
• To solve problems that are too large for a centralized
agent
• To allow interconnection and interoperation of multiple
legacy systems
• To provide a solution to inherently distributed problems
• To provide solutions which draw from distributed
information sources
• To provide solutions where expertise is distributed
• To offer conceptual clarity and simplicity of design
Benefits of MAS
• Faster problem solving
• Decrease in communication
– Although the number of messages or interactions among agents may increase, overall communication decreases because the volume of information transmitted in each exchange is reduced.
• Flexibility
• Increased reliability
Cooperative and Self-interested MAS
• Cooperative
– Agents designed by interdependent designers
– Agents act for increased good of the system
– Concerned with increasing the performance of the system

• Self-interested
– Agents designed by independent designers
– Agents have their own agenda and motivation
– Concerned with the benefit and performance of the individual agent
⇒ More realistic in an Internet setting?
Interaction and Communication in MAS
• To interact successfully, agents need the ability to cooperate, coordinate and negotiate.
• This requires communication:
– Plan and message passing
– Information exchange using shared repositories
• Important characteristics of communication:
– Relevance of the information
– Timeliness
– Completeness
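A minimal message-passing sketch in Python. The message structure is loosely inspired by speech-act style agent communication languages (e.g. FIPA ACL performatives such as request and inform), but the field names and the mailbox mechanism here are illustrative assumptions, not any standard API.

    import queue
    from dataclasses import dataclass

    # Sketch: agents interact by posting messages into each other's mailboxes.
    @dataclass
    class Message:
        performative: str      # e.g. "request", "inform", "propose"
        sender: str
        receiver: str
        content: str

    mailboxes = {"buyer": queue.Queue(), "seller": queue.Queue()}

    def send(msg):
        mailboxes[msg.receiver].put(msg)

    send(Message("request", "buyer", "seller", "price of the package holiday?"))
    request = mailboxes["seller"].get()                    # seller reads the request
    send(Message("inform", "seller", "buyer", "750 EUR"))  # and replies
    reply = mailboxes["buyer"].get()
    print(reply.performative, "->", reply.content)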
Agent Typology 1
• One of the most frequently cited typologies is given by Nwana, BT Research Labs
– Reference: H. S. Nwana. ”Software Agents: An Overview”,
Knowledge Engineering Review, Vol. 11, No. 3, 1996, 40
pages

• Several dimensions of typology:


– Mobility - mobile or static.
– Deliberative or reactive.
– Primary attributes, such as autonomy, learning and
cooperation.
Agent Typology 2
• A partial view of an agent typology:

[Diagram (after Nwana): three overlapping properties of autonomous software systems – autonomy, cooperation and learning – whose intersections define collaborative agents, interface agents and smart agents.]
Agent Typology 3
• Human agents: Person, Employee, Student, Nurse, or
Patient
• Artificial agents: owned and run by a legal entity
• Institutional agents: a bank or a hospital
• Software agents: Agents designed with software
• Information agents: databases and the Internet
• Autonomous agents: Non-trivial independence
• Interactive/Interface agents: Designed for interaction
• Adaptive agents: Non-trivial ability for change
• Mobile agents: code and logic mobility
Agent Typology 4
• Collaborative/Coordinative agents: Non-trivial ability
for coordination, autonomy, and sociability
• Reactive agents: No internal state and shallow
reasoning
• Hybrid agents: a combination of deliberative and
reactive components
• Heterogeneous agents: A system with various agent
sub-components
• Intelligent/smart agents: Reasoning and intentional
notions
• Wrapper agents: Facility for interaction with non-
agents
Agent Typology 5
Nwana identified the following seven types of agents:
1. Collaborative agents - autonomous and cooperate.
2. Interface agents - autonomous and learn.
3. Mobile agents - able to move around a network.
4. Information/Internet agents - manage information on the Internet.
5. Reactive agents - stimulus-response behaviour.
6. Hybrid agents - a combination of two or more agent philosophies.
7. Smart agents - autonomous, learn and cooperate.

• Criticisms of this Typology


– It confuses agents with what they do (e.g. information search) and with the technology used (e.g. reactive, mobile).
Agent Typology 6
• Collaborative agents
– Hypothesis/Goal: the capabilities of the collection of agents are greater than those of any of its members.
– Main Motivation: to solve problems that are too large for a single agent.

• Interface agents
– Hypothesis/Goal: a personal assistant that collaborates with the user.
– Main Motivation: to remove the need for the user to perform several manual sub-operations.
– Example: a personal assistant that finds a suitable package holiday for the user.
Agent Typology 7
• Mobile agents
– Hypothesis/Goal: Agents need not be stationary!
– Main Motivation: To reduce communication costs
– Example: Aglets
• Information/Internet agents
– Hypothesis/Goal: reduce the information-overload problem
– Main Motivation: the need for tools to manage the information explosion
– Example: agents that reside on servers and access the
distributed on-line information on the Internet
Agent Typology 8
• Reactive agents
– Hypothesis/Goal: the physical grounding hypothesis – representations must be grounded in the physical world.
– Key criticisms of reactive agents:
  – Scope is limited to games and simulations.
  – It is not clear how to design agents so that the intended behaviour is emergent.
• Hybrid agents
– Definition: a combination of two or more agent philosophies (e.g. deliberative and reactive); a sketch of a layered hybrid agent follows this slide.
– Hypothesis/Goal: the gains from combining philosophies exceed the gains from any single philosophy.
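A sketch of the layered idea behind hybrid agents (Python; the rules and the deliberation function are hypothetical): a reactive layer gets first refusal on every percept and can pre-empt the slower, goal-directed deliberative layer.

    # Sketch: hybrid agent = reactive layer that can override a deliberative layer.
    def hybrid_decide(percept, reactive_rules, deliberate):
        for condition, action in reactive_rules:
            if condition(percept):
                return action              # reactive layer fires first
        return deliberate(percept)         # otherwise fall back to deliberation

    rules = [(lambda p: p == "obstacle ahead", "turn left")]
    print(hybrid_decide("obstacle ahead", rules, lambda p: "continue with plan"))  # turn left
    print(hybrid_decide("clear path", rules, lambda p: "continue with plan"))      # continue with plan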
Agent Typology 9
Heterogeneous agents
• Heterogeneity: a term that you will come across very often in the context of agents and distributed systems in general.
• Definition: System of agents of several types
– e.g. mobile and interface agents in the same system
– Realistic in an Internet (open-system) setting

• Motivation: interoperability is plausible

• Requires standards for communication among the agents:
– Agent Communication Languages and protocols
– Cooperation conventions
Other Types of Agents
Some of these may not exhibit any agent properties as
discussed earlier.
• Desktop Agents: e.g.
– Operating System agents – interact with the OS to perform tasks on
behalf of the user.
– Application agents – e.g. Wizards

• Web search agents – act as information brokers between information suppliers (e.g. websites) and information consumers (e.g. users).
Operating System Agents
Examples of tasks performed by OS agents are: setup, user-shell customisation, file management, etc. The agent has knowledge about the OS, the GUI and the user.

[Diagram: the OS agent working alongside applications, the GUI and the shell through the OS API, which sits above memory, file and process management.]
Web Search Agents
• Web search agents and information filtering agents fall
into the category of Information/Internet agents.

[Diagram: the user's web browser sends a query to the search engine's query server and receives a response; behind the server, a web robot builds and maintains an index database.]

Tasks of the web search agent:
• Hyperlink discovery
• Document retrieval and web indexing
• It has knowledge about the web, Usenet newsgroups, etc.
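To make the hyperlink-discovery task concrete, a small sketch using only Python's standard html.parser module; a real web robot would add page fetching, politeness rules and the index database shown in the diagram.

    from html.parser import HTMLParser

    # Sketch: extract hyperlinks from an HTML page (the discovery step of a web robot).
    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    extractor = LinkExtractor()
    extractor.feed('<p>See <a href="http://example.org/holidays">holiday deals</a>.</p>')
    print(extractor.links)    # ['http://example.org/holidays']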
Information Filtering Agents

[Diagram: the user's web browser connects to a web/news server; a filtering agent combines an indexing engine, the indexed articles and a user profile to select relevant media for the user.]
Summary
• An agent is a system that is capable of independent
action on behalf of its user or owner.
• A multi-agent system is one that consists of a number
of agents which interact with one another.
• In order to interact successfully, agents need the ability to cooperate, coordinate and negotiate.
Definition of Agents - Summary
• An agent acts on behalf of another user or entity
• An agent has the weak agent properties:
– autonomy, pro-activity, reactivity and social ability
• An agent may have strong agent properties:
– mentalistic notions such as beliefs and desires
• Other properties discussed in the context of agents:
– mobility, veracity, benevolence and rationality
References
• Curriculum: Wooldridge: ”Introduction to MAS”
– Chapters 1 & 2
• Article: Agent Typology
– H. S. Nwana. ”Software Agents: An Overview”, Knowledge Engineering
Review, Vol. 11, No. 3, 1996, 40 pages

• Recommended Reading
– B. Moulin, B. Chaib-draa. ”An Overview of Distributed Artificial
Intelligence”. In: G. M. P. O'Hare, N. R. Jennings (eds). Foundations of
Distributed Artificial Intelligence, John Wiley & Sons, 1996, pp. 3-56.
– Y. Shoham. ”Agent-Oriented Programming”, Artificial Intelligence, 60(1), pp. 51-92, 1993.
– IEEE Intelligent Systems, Vol. 17, n.6, Nov.-Dec. 2002.
– Communications of the ACM, Vol. 47, Issue 2, Feb. 2004, pp. 56-60.
Discussion Questions
Agents
• What's the difference between an agent and a piece of software?
• What's the difference between an agent, an intelligent agent, and an autonomous agent?
– Is it even worth talking about agents that aren't intelligent or autonomous?
– Can you have an intelligent agent that isn't autonomous?
– Can you have an autonomous agent that isn't intelligent?
• What are the characteristics of an agent?
– Characterize agent characteristics.
• What are the characteristics of an agent's environment?
• Give an example of a problem domain that each of the following classes of agents would be well suited for, and an example of a problem domain that the class would not be well suited for:
– Reactive agents
– Simple agents with state
– Logic-based agents
– Belief-desire-intention (BDI) agents
