
Chapter – 2

AI Agents
Agents in Artificial Intelligence

An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:

• Human agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.

• Robotic agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.

• Software agent: A software agent can have keystrokes and file contents as sensory input, and it acts on that input by displaying output on the screen.
Agents in Artificial
Intelligence

Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, and fins.
Agent function

 An agent's behavior is described by the agent function, which maps any given percept sequence to an action.

 Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct.

 The agent function is an abstract mathematical description; the agent program is a concrete implementation, running on the agent architecture.
Vacuum Cleaner Agent
Example
 This world has just two locations: squares A and B.
 The vacuum agent perceives which square it is in and
whether there is dirt in the square.
 It can choose to move left, move right, suck up the dirt, or do
nothing.
Partial tabulation of a simple agent function for the vacuum-cleaner world

One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square.

function Vacuum-Cleaner([location, status])
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
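The pseudocode above can be sketched directly in Python; the string names for percepts and actions are illustrative choices, not a fixed API:

```python
def vacuum_agent(location, status):
    """Map a [location, status] percept to an action (simple reflex rule)."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(vacuum_agent("A", "Dirty"))  # Suck
print(vacuum_agent("A", "Clean"))  # Right
print(vacuum_agent("B", "Clean"))  # Left
```

Note that this function looks only at the current percept, which is exactly why it is a *simple* agent function.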
Rational Agent

 These are ideal agents that always make the best decision given their knowledge and goals. They maximize expected utility.

 While no real-world AI agent is perfectly rational, rational agents serve as a theoretical benchmark for evaluating the performance of other agents.

What is rational at any given time depends on four things:

 The performance measure that defines the criterion of success [goal or utility].

 The agent's prior knowledge of the environment.

 The actions that the agent can perform.

 The agent's percept sequence to date.
Agent Environment in AI

An environment is everything in the world which surrounds the agent, but it is not a part of the agent itself. An environment can be described as a situation in which an agent is present.

 The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon. An environment is mostly said to be non-deterministic.

 Environments can be physical (e.g., a robot navigating a room) or virtual (e.g., a computer program playing a video game).
Agent-Environment
Interaction:
The core of AI is the interaction between an agent and
its environment. This interaction typically follows a
cyclic process:
1. Perception: The agent receives input or
information from the environment. This input is
often referred to as percepts or observations.
2. Processing: The agent processes the received
information and makes decisions based on its
programming or learned knowledge.
3. Action: The agent takes actions to affect the
environment. These actions can be physical (e.g.,
moving a robot's
motors) or virtual (e.g., making decisions in a
game).
4. Feedback: The environment responds to the
agent's actions, leading to potential changes in the
agent's state and the generation of new percepts.
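The four-step cycle above can be sketched with a simple thermostat agent; the Thermostat class and its one-degree heating step are illustrative assumptions, not a real control algorithm:

```python
class Thermostat:
    def __init__(self, target):
        self.target = target

    def perceive(self, environment):
        # 1. Perception: read the current temperature (the percept).
        return environment["temperature"]

    def decide(self, percept):
        # 2. Processing: compare the percept against the target.
        return "heat_on" if percept < self.target else "heat_off"

    def act(self, action, environment):
        # 3. Action: heating changes the environment's state.
        if action == "heat_on":
            environment["temperature"] += 1
        # 4. Feedback: the changed environment yields new percepts next cycle.
        return environment

env = {"temperature": 18}
agent = Thermostat(target=20)
for _ in range(3):
    env = agent.act(agent.decide(agent.perceive(env)), env)
print(env["temperature"])  # 20
```

Each loop iteration is one full perceive-process-act-feedback cycle.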
Features of Environment:

 Fully observable vs Partially observable

 Static vs Dynamic

 Discrete vs Continuous

 Deterministic vs Stochastic

 Single-agent vs Multi-agent

 Episodic vs Sequential
Agent-Environment Examples:

 Simple examples include a thermostat (agent) interacting with room temperature (environment) to maintain a set temperature, or a chatbot (agent) responding to user messages (environment).

 More complex examples include autonomous vehicles (agents) navigating city streets (environment), recommendation systems (agents) suggesting products based on user behavior (environment), and game-playing AI (agents) competing in virtual game worlds.
Case Study : Autonomous Delivery
Robot

 Imagine an autonomous delivery


robot as the agent in this
scenario. The robot is designed to
deliver packages from a local
distribution center to customers'
homes.
Environment:

 The environment consists of a


cityscape with streets, sidewalks,
buildings, and various obstacles.
 There are customers' homes as
destinations, represented as houses
along the streets.
 The robot has sensors to perceive its
environment, including cameras,
lidar, and GPS.
Agent
Characteristics:

 The delivery robot is equipped with


AI algorithms and a decision-
making system.
 It has a goal to deliver packages to
specific addresses.
 The robot can perceive its
surroundings through its sensors,
detecting pedestrians, vehicles,
traffic signals, and obstacles.
Agent-Environment
Interaction:
 Perception: The robot's sensors continuously capture data from
its environment, such as detecting pedestrians, traffic lights,
and the location of package recipients' homes.
 Processing: The AI algorithms process the sensor data to
understand the current state of the environment. It identifies
obstacles, plans routes, and calculates the optimal path to
deliver packages efficiently.
 Action: The robot takes actions based on its processing of the
environment's state. It navigates the streets, avoiding
obstacles, obeying traffic rules, and adjusting its route in real-
time if it encounters road closures or unexpected situations.
 Feedback: The environment responds to the robot's actions. For example, it might encounter traffic congestion, pedestrians crossing the street, or a building entrance where it needs to drop off a package.
Outcome:
 The robot successfully delivers
packages to customers' homes,
following the planned routes and
adjusting its behavior based on real-
time information from the
environment.
Features of Agent
Environment
1. Fully observable vs Partially observable:

 If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.

 A fully observable environment is easy to deal with, as there is no need to maintain an internal state to keep track of the history of the world.

 If an agent has no sensors at all, then the environment is called unobservable.

 An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data. For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares.
2. Deterministic vs Stochastic:

 If an agent's current state and selected action completely determine the next state of the environment, then the environment is called deterministic.

 A stochastic environment is random in nature and cannot be completely determined by the agent.

 In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.

 The vacuum-cleaner world is deterministic; a self-driving car operates in a stochastic environment.
Features of Agent
Environment
3. Episodic vs Sequential:

 In an episodic environment, there is a series of one-shot actions, and only the current percept is required for the action.

 However, in a sequential environment, an agent requires memory of past actions to determine the next best action.

 For example, an agent that must spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision doesn't affect whether the next part is defective.

 In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences. Episodic environments are much simpler than sequential ones because the agent does not need to think ahead.
4. Single-agent vs Multi-agent:

 If only one agent is involved in an environment and is operating by itself, then such an environment is called a single-agent environment.

 However, if multiple agents are operating in an environment, then such an environment is called a multi-agent environment.

 The agent design problems in the multi-agent environment are different from those in a single-agent environment.

 For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
Features of Agent Environment
5. Static vs Dynamic:
 If the environment can change itself while an agent is deliberating, then such an environment is called a dynamic environment.

 Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.

 However, in a dynamic environment, agents need to keep looking at the world before each action.

 Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a static environment.
6. Discrete vs Continuous:
 If an environment has a finite number of percepts and actions that can be performed within it, then it is called a discrete environment; otherwise it is a continuous environment. Chess is discrete, whereas taxi driving is continuous.
PEAS Representation

PEAS is a type of model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words:

P: Performance measure
E: Environment
A: Actuators
S: Sensors

Here, the performance measure is the objective for the success of an agent's behavior.
PEAS for self-driving cars:

Performance: Safety, time, legal drive, comfort
Environment: Roads, other vehicles, road signs, pedestrians
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer, odometer, etc.
Example of Agents with their
PEAS
Types of AI Agents

Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. These are given below:
1. Simple Reflex Agent
2. Model-based reflex agent
3. Goal-based agents
4. Utility-based agent
5. Learning agent
Types of AI
Agents
1. Simple Reflex
agent:
Types of AI Agents

 These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.

 These agents only succeed in a fully observable environment.

 The simple reflex agent works on the condition-action rule, which means it maps the current state to an action, such as a room-cleaner agent that works only if there is dirt in the room.

 They have very limited intelligence.
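A simple reflex agent can be sketched as nothing more than a condition-action rule table, using the room-cleaner example above; the percept and action names are illustrative assumptions:

```python
# Condition-action rules: current percept -> action, with no memory.
RULES = {
    "dirty": "suck",
    "clean": "do_nothing",
}

def simple_reflex_agent(percept):
    """Choose an action from the current percept alone, ignoring history."""
    return RULES[percept]

print(simple_reflex_agent("dirty"))  # suck
print(simple_reflex_agent("clean"))  # do_nothing
```

Because the table keys only on the current percept, this agent cannot exploit anything it saw earlier, which is why it needs a fully observable environment.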
Types of AI
Agents
2. Model-based reflex
agent
Types of AI Agents

 The model-based agent can work in a partially observable environment and track the situation.

 A model-based agent has two important factors:

 Model: It is knowledge about "how things happen in the world," so it is called a model-based agent.

 Internal state: It is a representation of the current state based on percept history.

 These agents have the model, which is knowledge of the world, and based on the model they perform actions.

 Updating the agent state requires information about:

 How the world evolves.

 How the agent's actions affect the world.
 Example: Chess-playing programs like
IBM's Deep Blue. They maintain a
model of the chessboard, evaluate
potential moves, and choose the best
one based on their evaluation.
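The internal-state idea can be sketched with the two-square vacuum world from earlier: the agent remembers which squares it believes are clean, so it can act sensibly even though it only perceives its current square. The square names, belief values, and action strings are illustrative assumptions:

```python
class ModelBasedVacuum:
    def __init__(self):
        # Internal state: belief about each square, built from percept history.
        self.believed = {"A": "Unknown", "B": "Unknown"}

    def choose_action(self, location, status):
        # Update the model with the latest percept ("how the world evolves").
        self.believed[location] = status
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        # Consult the model: only move if the other square may still be dirty.
        if self.believed[other] != "Clean":
            return "Right" if location == "A" else "Left"
        return "NoOp"

agent = ModelBasedVacuum()
print(agent.choose_action("A", "Dirty"))  # Suck
print(agent.choose_action("A", "Clean"))  # Right
print(agent.choose_action("B", "Clean"))  # NoOp
```

Unlike the simple reflex agent, the final NoOp is only possible because the internal state remembers that square A was already observed clean.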
Types of AI
Agents
3. Goal-based
agents
Types of AI Agents

 Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.

 The agent needs to know its goal, which describes desirable situations.

 Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.

 They choose an action so that they can achieve the goal.

 These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
 A delivery drone. It has a goal to deliver
a package to a specific location. It
considers its current location, the
package's destination, and the
obstacles in its path to plan a route to
reach the goal.
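The drone's route planning can be sketched as a search to the goal. The 5x5 grid, coordinates, and obstacle set below are illustrative assumptions, and breadth-first search stands in for the "searching and planning" mentioned above:

```python
from collections import deque

def plan_route(start, goal, obstacles, size=5):
    """Return a list of grid cells from start to goal, avoiding obstacles."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path           # goal test: the desirable situation
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = cell
            if (0 <= nx < size and 0 <= ny < size
                    and cell not in obstacles and cell not in visited):
                visited.add(cell)
                frontier.append(path + [cell])
    return None                   # goal unreachable

route = plan_route((0, 0), (2, 2), obstacles={(1, 0), (1, 1)})
print(route)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
```

The agent considers whole sequences of actions (paths) and commits only to one that reaches the goal, which is what distinguishes it from a reflex agent.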
Types of AI
Agents
4. Utility-based
agents
Types of AI
Agents
 These agents are like the goal-based agent but have an extra component of utility measurement, which makes them different by providing a measure of success at a given state.

 A utility-based agent acts based not only on goals but also on the best way to achieve the goal.

 The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose in order to perform the best action.
 Automated stock trading systems. They
analyze various investment options,
considering potential risks and returns, and
choose investments that maximize
expected profit.
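The stock-trading example can be sketched as expected-utility maximization; the option names, probabilities, and utilities below are illustrative assumptions:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

options = {
    "bond":  [(1.0, 3.0)],                # safe, low return: EU = 3.0
    "stock": [(0.6, 10.0), (0.4, -5.0)],  # risky, higher upside: EU = 4.0
}

# Choose the alternative with the highest expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # stock
```

A pure goal-based agent would treat both options as "achieves the profit goal"; the utility function is what lets this agent rank them.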
Types of AI
Agents
5. Learning
Agents
Types of AI
Agents
 A learning agent in AI is a type of agent which can learn from its past experiences; it has learning capabilities.

 It starts acting with basic knowledge and is then able to act and adapt automatically through learning.

 A learning agent has mainly four conceptual components, which are:

 Learning element: It is responsible for making improvements by learning from the environment.

 Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

 Performance element: It is responsible for selecting external actions.

 Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
 Self-driving cars. They
continuously learn from sensor
data and user feedback to
improve their driving skills and
safety.
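The four components can be sketched in one small loop: the performance element selects actions, the critic scores them against a fixed standard, the learning element updates value estimates, and a problem generator forces occasional exploration. The two-action world, reward values, and learning rate are illustrative assumptions:

```python
values = {"left": 0.0, "right": 0.0}     # learned action-value estimates
TRUE_REWARD = {"left": 0.2, "right": 0.8}  # critic's fixed performance standard
ALPHA = 0.1                              # learning rate

for t in range(200):
    if t % 10 == 0:
        # Problem generator: periodically force an exploratory action.
        action = "left" if (t // 10) % 2 == 0 else "right"
    else:
        # Performance element: exploit the best current estimate.
        action = max(values, key=values.get)
    reward = TRUE_REWARD[action]                         # critic's feedback
    values[action] += ALPHA * (reward - values[action])  # learning element

print(max(values, key=values.get))  # right
```

Starting with no knowledge, the agent's estimates converge toward the true rewards, and it learns to prefer the better action, which is the adaptation described above.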
