Agent Types and Problem Formulation

Module 1

Introduction to AI
1. Definition of AI: The field of artificial intelligence, or AI, is concerned with not just understanding but also building intelligent entities—machines that can compute how to act effectively and safely in a wide variety of novel situations.
2. The subject matter itself also varies: some consider intelligence to be a property of internal thought processes and reasoning, while others focus on intelligent behavior, an external characterization.

Topics in this module:
1. Definition of AI
2. Example Systems
3. AI Problems
4. AI Techniques
5. Problem Solving by Searching
What is AI?
It is concerned with the design of intelligence in an artificial device.

What is intelligence?
- Behaving as intelligently as a human
- Behaving in the best possible manner
- Thinking?
- Acting?
https://www.ques10.com/p/30186/what-are-the-components-of-ai/

Intelligent Systems: Categorization of Intelligent Systems

From the two dimensions—human vs. rational and thought vs. behavior—there are four possible combinations:

                      Human-Like Performance            Ideal (Rational) Performance
Thought/Reasoning     Systems that think like humans    Systems that think rationally
Behaviour             Systems that act like humans      Systems that act rationally
Turing Test

According to this test, a computer needs to interact with a human interrogator by answering questions in written form. The computer passes the test if the human interrogator cannot identify whether the written responses came from a person or a computer. The Turing test remains relevant even after 60 years of research.
Typical AI Problems
Intelligent entities (or agents) need to be able to do both “mundane” and “expert” tasks:
Mundane tasks:
• Route planning
• Communication (through natural language)
• Navigating around obstacles on the street
Expert Tasks:
• Medical Diagnosis
• Mathematical Problem Solving
• Stock market trend prediction
Intelligent Behavior
1. Perception/Observation
2. Reasoning/Thinking
3. Learning/Knowledge gaining
4. Understanding Language
5. Solving Problems

These are examples of the kinds of things we want our AI systems to be able to do.
What can AI do today?

• Autonomous land vehicles

• Deep Blue: The cover depicts the final position from the decisive game 6 of the 1997 chess match in which the program Deep Blue defeated Garry Kasparov (playing Black), making this the first time a computer had beaten a world champion in a chess match.
Four types of artificial intelligence
• Type 1: Reactive machines. These AI systems have no memory and are task
specific. An example is Deep Blue, the IBM chess program that beat Garry
Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and
make predictions, but because it has no memory, it cannot use past experiences to
inform future ones.
• Type 2: Limited memory. These AI systems have memory, so they can use past
experiences to inform future decisions. Some of the decision-making functions in
self-driving cars are designed this way.
• Type 3: Theory of mind. Theory of mind is a psychology term. When applied to
AI, it means that the system would have the social intelligence to understand
emotions. This type of AI will be able to infer human intentions and predict
behavior, a necessary skill for AI systems to become integral members of human
teams.
• Type 4: Self-awareness. In this category, AI systems have a sense of self, which
gives them consciousness. Machines with self-awareness understand their own
current state. This type of AI does not yet exist.
Sub-areas of AI

https://www.analyticssteps.com/blogs/6-major-branches-artificial-intelligence-ai

Agents
• An agent is anything that can be viewed as
– perceiving its environment through sensors and
– acting upon that environment through actuators
– Assumption: Every agent can perceive its own actions (but not
always the effects)

Agents
• Human agent:
– eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators
• Robotic agent:
– cameras and infrared range finders for sensors;
– various motors for actuators
• Software agent:
– keystrokes, file contents, received network packets as sensors
– displays on the screen, files, sent network packets as actuators

Agents and environments


• Percept: the agent’s perceptual input at any given instant
• Percept sequence: the complete history of everything the agent has ever perceived
• An agent’s behavior is described by the agent function, which maps from percept histories to actions:

f: P* → A

• Internally, the agent function will be implemented by an agent program which runs on the physical architecture to produce f:

agent = architecture + program
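
To make this concrete, here is a minimal Python sketch (my own illustration, not from the slides) of an agent program wrapping an agent function f: P* → A; all names are hypothetical:

from typing import Callable, List

Percept = str
Action = str

def make_agent_program(f: Callable[[List[Percept]], Action]):
    # Wrap an agent function f: P* -> A as a program that receives
    # one percept per call and remembers the percept sequence.
    percepts: List[Percept] = []
    def program(percept: Percept) -> Action:
        percepts.append(percept)
        return f(percepts)          # the action depends on the full history
    return program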



Vacuum-cleaner world
• Two locations: A and B
• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp

Percept sequence                              Action
[A, Clean]                                    Right
[A, Dirty]                                    Suck
[B, Clean]                                    Left
[B, Dirty]                                    Suck
[A, Clean], [A, Clean]                        Right
[A, Clean], [A, Dirty]                        Suck
…                                             …
[A, Clean], [A, Clean], [A, Clean]            Right
[A, Clean], [A, Clean], [A, Dirty]            Suck

One simple agent function is: if the current square is dirty then suck, otherwise move to the other square.
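
As an illustration (not from the slides), the partial table above can be written directly as a table-driven agent; the representation is an assumption:

# Partial lookup table keyed by the percept sequence so far.
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []

def table_driven_agent(percept):
    percepts.append(tuple(percept))
    return TABLE.get(tuple(percepts), "NoOp")   # NoOp if the sequence is not tabulated

# e.g. table_driven_agent(("A", "Dirty")) returns "Suck"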

Rational agents
• An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform.
• The right action is the one that will cause the agent to be most successful.
• Performance measure: an objective criterion for success of an agent's behavior.

E.g., the performance measure of a vacuum-cleaner agent could be:
- amount of dirt cleaned up,
- amount of time taken,
- amount of electricity consumed,
- amount of noise generated, etc.
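
For instance, these criteria could be combined into a single numeric score; a minimal sketch (the weights are arbitrary illustration values, not from the slides):

def vacuum_performance(dirt_cleaned, time_taken, energy_used, noise):
    # Reward dirt cleaned; penalize time, electricity, and noise.
    # Weights are made up purely for illustration.
    return 10 * dirt_cleaned - 1 * time_taken - 2 * energy_used - 0.5 * noise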

Rationality

• What is rational at any given time depends on four things:
– The performance measure that defines the criterion of success
– The agent’s prior knowledge of the environment
– The actions that the agent can perform
– The agent’s percept sequence to date

• Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.


Vacuum cleaner agent

• Let’s assume the following:
– The performance measure awards one point for each clean square at each time step, over a lifetime of 1000 time steps
– The geography of the environment is known a priori
– The only available actions are Left, Right, Suck and NoOp
– The agent correctly perceives its location and whether that location contains dirt

• Under these circumstances the agent is rational: its expected performance is at least as high as any other agent’s.


Specifying the task environment (PEAS)

• PEAS:
– Performance measure,
– Environment,
– Actuators,
– Sensors
• In designing an agent, the first step must always be to specify the task environment (PEAS) as fully as possible.


PEAS for an automated taxi driver


• Performance measure: Safe, fast, legal, comfortable trip, maximize
profits

• Environment: Roads, other traffic, pedestrians, customers

• Actuators: Steering wheel, accelerator, brake, signal, horn

• Sensors: Cameras, sonar, speedometer, GPS, odometer, engine


sensors, keyboard
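
One convenient way to record PEAS descriptions in code (an illustrative sketch, not part of the slides; all names are made up) is a simple record type:

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)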

Describe the PEAS of the following agents


• PEAS for a medical diagnosis system

• PEAS for a satellite image analysis system

• PEAS for a part-picking robot

• PEAS for an interactive English tutor

• PEAS for a COVID-19/Omicron diagnosis system



PEAS for a medical diagnosis system

• Performance measure: Healthy patient, minimize costs, lawsuits

• Environment: Patient, hospital, staff

• Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)

• Sensors: Keyboard (entry of symptoms, findings, patient's answers)

PEAS for a satellite image analysis system

• Performance measure: correct image categorization

• Environment: downlink from orbiting satellite

• Actuators: display categorization of scene

• Sensors: color pixel arrays



PEAS for a part-picking robot

• Performance measure: Percentage of parts in correct bins

• Environment: Conveyor belt with parts, bins

• Actuators: Jointed arm and hand

• Sensors: Camera, joint angle sensors



PEAS for an interactive English tutor

• Performance measure: Maximize student's score on test

• Environment: Set of students

• Actuators: Screen display (exercises, suggestions, corrections)

• Sensors: Keyboard

Environment types
• Fully observable vs. partially observable
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Single agent vs. multiagent

Environment types

Fully observable vs. partially observable:

• An environment is fully observable if the agent's sensors give it access to the complete state of the environment at each point in time.
• Fully observable environments are convenient, because the agent need not maintain any internal state to keep track of the world.
• An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
• Examples: a vacuum cleaner with a local dirt sensor, taxi driving

Environment types

Deterministic vs. stochastic:

• The environment is deterministic if the next state of the environment is completely determined by the current state and the action executed by the agent.
• In principle, an agent need not worry about uncertainty in a fully observable, deterministic environment.
• If the environment is partially observable, then it could appear to be stochastic.
• Examples: the vacuum world is deterministic, while taxi driving is not.

Environment types

Episodic vs. sequential:

• In episodic environments, the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
• Example: an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision doesn’t affect whether the next part is defective.
• In sequential environments, the current decision could affect all future decisions.
• Examples: chess and taxi driving

Environment types

Static vs. dynamic:

• The environment is static if it is unchanged while the agent is deliberating.
• Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.
• Dynamic environments continuously ask the agent what it wants to do.
• The environment is semi-dynamic if it does not change with the passage of time but the agent's performance score does.
• Examples: taxi driving is dynamic, chess played with a clock is semi-dynamic, crossword puzzles are static.


Environment types

Discrete vs. continuous:

• A discrete environment has a limited number of distinct, clearly defined states, percepts and actions.
• Examples: chess has a finite number of discrete states, and discrete sets of percepts and actions. Taxi driving has continuous states and actions.


Environment types

Single agent vs. multiagent:

• An agent operating by itself in an environment is a single agent.

• Examples: a crossword puzzle is single-agent, while chess is two-agent.


Environment types

Task Environment            Observable  Deterministic  Episodic    Static   Discrete    Agents
Crossword puzzle            Fully       Deterministic  Sequential  Static   Discrete    Single
Chess with a clock          Fully       Strategic      Sequential  Semi     Discrete    Multi
Poker                       Partially   Stochastic     Sequential  Static   Discrete    Multi
Backgammon                  Fully       Stochastic     Sequential  Static   Discrete    Multi
Taxi driving                Partially   Stochastic     Sequential  Dynamic  Continuous  Multi
Medical diagnosis           Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Image analysis              Fully       Deterministic  Episodic    Semi     Continuous  Single
Part-picking robot          Partially   Stochastic     Episodic    Dynamic  Continuous  Single
Refinery controller         Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Interactive English tutor   Partially   Stochastic     Sequential  Dynamic  Discrete    Multi

• The environment type largely determines the agent design.
• The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
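
As a side note (my own illustration, not from the slides), such a classification can be encoded directly, for example to parameterize simulations; the field names below are hypothetical:

ENV_PROPERTIES = {
    "crossword puzzle": dict(observable="fully", deterministic=True,
                             episodic=False, static=True, discrete=True, agents=1),
    "taxi driving":     dict(observable="partially", deterministic=False,
                             episodic=False, static=False, discrete=False, agents="multi"),
}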

Agent types (refer: https://www.javatpoint.com/types-of-ai-agents)

• Four basic types in order of increasing generality:


– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents


Simple reflex agents


• Select actions on the basis of the current percept
ignoring the rest of the percept history

• Example: simple reflex vacuum cleaner agent


function REFLEX-VACUUM-AGENT([location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left

• Condition-action rule
• Example: if car-in-front-is-braking then initiate-braking



Simple reflex agents


function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state <-- INTERPRET_INPUT(percept)
  rule <-- RULE_MATCH(state, rules)
  action <-- RULE_ACTION[rule]
  return action
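
A minimal runnable Python version of this loop (my own sketch; the rule representation is an assumption):

# Rules as (condition, action) pairs; the first matching rule fires.
RULES = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A",   "Right"),
    (lambda s: s["location"] == "B",   "Left"),
]

def interpret_input(percept):
    location, status = percept
    return {"location": location, "status": status}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in RULES:     # RULE_MATCH
        if condition(state):
            return action                # RULE_ACTION
    return "NoOp"

# e.g. simple_reflex_agent(("A", "Dirty")) returns "Suck"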

• Simple reflex agents are simple, but they turn out to be of very limited intelligence.
• The agent will work only if the correct decision can be made on the basis of the current percept – that is, only if the environment is fully observable.
• Infinite loops are often unavoidable – escape may be possible by randomizing.


Model-based reflex agents


• The agent should keep track of the part of the world it can't see now
• The agent should maintain some sort of internal state that depends
on the percept history and reflects at least some of the unobserved
aspects of the current state
• Updating the internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program:
– Information about how the world evolves (develops gradually) independently of the agent
– Information about how the agent's own actions affect the world

• Such knowledge constitutes a model of the world – hence, model-based agents.


Model-based reflex agents


•The Model-based agent can work in a partially observable
environment, and track the situation.
•A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the
world," so it is called a Model-based agent.
• Internal State: It is a representation of the current state based
on percept history.
•These agents have the model, "which is knowledge of the world"
and based on the model they perform actions.
•Updating the agent state requires information about:
• How the world evolves (gradually changes)
• How the agent's actions affect the world.



Model-based reflex agents


function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none

  state <-- UPDATE_STATE(state, action, percept)
  rule <-- RULE_MATCH(state, rules)
  action <-- RULE_ACTION[rule]
  return action
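
A hedged Python sketch of the same structure (illustrative only; how the state is represented and updated is an assumption, not given in the slides):

class ModelBasedReflexAgent:
    def __init__(self, rules, update_state):
        self.rules = rules                  # condition-action rules
        self.update_state = update_state    # the model: world evolution + action effects
        self.state = None                   # internal state (description of the world)
        self.action = None                  # most recent action, initially none

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.action = action
                return action
        self.action = "NoOp"
        return self.action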


Goal-based agents
•Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
•The agent needs to know its goal, which describes desirable situations.
•Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
•They choose an action so that they can achieve the goal.
•These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
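
A minimal sketch of such searching (my own illustration; the successors function and the state encoding are assumptions): breadth-first search for an action sequence that reaches a goal state.

from collections import deque

def plan_to_goal(start, is_goal, successors):
    # Breadth-first search over (state, action-sequence) pairs.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions                   # action sequence achieving the goal
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                              # goal unreachable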


Goal-based agents vs reflex-based agents

• Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.
• For the reflex agent, on the other hand, we would have to rewrite many condition-action rules.
• The goal-based agent's behavior can easily be changed.
• The reflex agent's rules must be changed for a new situation.


Utility-based agents

• Goals alone are not really enough to generate high-quality behavior in most environments – they just provide a binary distinction between happy and unhappy states.
• A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent if they could be achieved.
• Happy – utility (the quality of being useful).
• A utility function maps a state onto a real number which describes the associated degree of happiness.
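
A toy example (mine, with arbitrary weights) of such a utility function over taxi-trip outcomes:

def trip_utility(minutes, fare, comfort):
    # Map a world state (trip outcome) to a real number.
    # Weights are arbitrary illustration values, not from the slides.
    return 1.0 * fare - 0.2 * minutes + 0.5 * comfort

# The agent prefers outcomes with higher utility:
# trip_utility(20, 15.0, 3) == 12.5 beats trip_utility(30, 15.0, 5) == 11.5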


Learning agents

• Turing – instead of actually programming intelligent machines by hand, which is too much work, build learning machines and then teach them.
• Learning also allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow.



Summary: agent types
(1) Table-driven agents
– use a percept sequence/action table in memory to find the next action. They are
implemented by a (large) lookup table.
(2) Simple reflex agents
– are based on condition-action rules, implemented with an appropriate production
system. They are stateless devices which do not have memory of past world states.
(3) Agents with memory - Model-based reflex agents
– have internal state, which is used to keep track of past states of the world.
(4) Agents with goals – Goal-based agents
– are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into
consideration.
(5) Utility-based agents
– base their decisions on classic axiomatic utility theory in order to act rationally.
(6) Learning agents
– they have the ability to improve performance through learning.
Summary
An agent perceives and acts in an environment, has an architecture, and is implemented by
an agent program.
A rational agent always chooses the action which maximizes its expected performance, given
its percept sequence so far.
An autonomous agent uses its own experience rather than built-in knowledge of the
environment by the designer.
An agent program maps from percept to action and updates its internal state.
– Reflex agents (simple / model-based) respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s), possible sequence of steps.
– Utility-based agents maximize their own utility function.
– Learning agents improve their performance through learning.
Representing knowledge is important for successful agent design.

The most challenging environments are partially observable, stochastic, sequential, dynamic,
and continuous, and contain multiple intelligent agents.
Solving problems by Searching
Introduction
• Simple reflex agents directly map states to actions.

• Therefore, they cannot operate well in environments where the mapping is too large to store or takes too long to learn.

• Goal-based agents can succeed by considering future actions and the desirability of their outcomes.

• A problem-solving agent is a goal-based agent that decides what to do by finding sequences of actions that lead to desirable states.
Outline
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms
Problem solving agents
• Intelligent agents are supposed to maximize their performance
measure.
• This can be simplified if the agent can adopt a goal and aim at
satisfying it.
• Goals help organize behavior by limiting the objectives that the agent
is trying to achieve.
• Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving.

• A goal is a set of states. The agent’s task is to find out which sequence of actions will get it to a goal state.
• Problem formulation is the process of deciding what sorts of actions and states to consider, given a goal.
Problem solving agents
• An agent with several immediate options of unknown value can
decide what to do by first examining different possible sequences of
actions that lead to states of known value, and then choosing the best
sequence.

• Looking for such a sequence is called search.

• A search algorithm takes a problem as input and returns a solution in the form of an action sequence.

• Once a solution is found, the actions it recommends can be carried out – the execution phase.
Problem solving agents
The agent follows a “formulate, search, execute” design:
• After formulating a goal and a problem to solve, the agent calls a search procedure to solve it.
• It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do (typically the first action in the sequence), and then removes that step from the sequence.
• Once the solution has been executed, the agent will formulate a new goal.
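
In outline (a sketch in the spirit of this design; the helper functions are placeholders I have introduced, not defined in the slides):

def problem_solving_agent_step(agent, percept):
    # One step of the "formulate, search, execute" loop.
    agent.state = update_state(agent.state, percept)       # placeholder helper
    if not agent.seq:                                      # no plan in progress
        agent.goal = formulate_goal(agent.state)           # placeholder helper
        problem = formulate_problem(agent.state, agent.goal)
        agent.seq = search(problem) or []                  # search may fail
    return agent.seq.pop(0) if agent.seq else "NoOp"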
Environment Assumptions
• Static: formulating and solving the problem is done without paying attention to any changes that might be occurring in the environment.
• The initial state is known and the environment is observable.
• Discrete: alternative courses of action can be enumerated.
• Deterministic: solutions to problems are single sequences of actions, so they cannot handle any unexpected events, and solutions are executed without paying attention to the percepts.
Problem solving agents
On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Problem solving agents
Example: Romania
Well-defined problems and solutions
A problem can be defined formally by four components
• Initial state that the agent starts in
– e.g. In(Arad)
• A description of the possible actions available to the agent
– Successor function – returns a set of <action,successor> pairs
– e.g. {<Go(Sibiu),In(Sibiu)>, <Go(Timisoara),In(Timisoara)>, <Go(Zerind), In(Zerind)>}
– Initial state and the successor function define the state space ( a graph in which the
nodes are states and the arcs between nodes are actions). A path in state space is a
sequence of states connected by a sequence of actions
• Goal test determines whether a given state is a goal state
– e.g.{In(Bucharest)}
• Path cost function that assigns a numeric cost to each path.
– The cost of a path can be described as the sum of the costs of the individual actions along the path (step costs) – e.g., time to drive to Bucharest.
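
These four components map naturally onto a small data structure; below is an illustrative sketch (mine, not from the slides) using a fragment of the Romania example:

# Fragment of the Romania map: state -> list of (action, successor) pairs.
SUCCESSORS = {
    "Arad":    [("Go(Sibiu)", "Sibiu"), ("Go(Timisoara)", "Timisoara"),
                ("Go(Zerind)", "Zerind")],
    "Sibiu":   [("Go(Arad)", "Arad"), ("Go(Fagaras)", "Fagaras")],
    "Fagaras": [("Go(Sibiu)", "Sibiu"), ("Go(Bucharest)", "Bucharest")],
}

class Problem:
    def __init__(self, initial, goal, successors, step_cost=lambda s, a, s2: 1):
        self.initial = initial                        # initial state, e.g. In(Arad)
        self.goal_test = lambda s: s == goal          # goal test
        self.successors = lambda s: successors.get(s, [])
        self.step_cost = step_cost                    # path cost = sum of step costs

romania = Problem("Arad", "Bucharest", SUCCESSORS)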
Problem Formulation
A solution to a problem is a path from the initial state to the goal state
• Solution quality is measured by the path cost function and an optimal solution has
the lowest path cost among all solutions
• The real world is absurdly complex – the state space must be abstracted for problem solving.
• (Abstract) state = set of real states
• (Abstract) action = complex combination of real actions
– e.g., "Arad -> Zerind" represents a complex set of possible routes, detours, rest stops, etc.
• For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
• (Abstract) solution = set of real paths that are solutions in the real world
• Each abstract action should be "easier" than the original problem.
