
Artificial Intelligence and Machine Learning

Dr. Debapriya Roy

Institute of Engineering and Management, Kolkata, India


Agents

Q: Define an agent.
▶ An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting.
Q: What is a rational agent?
▶ A rational agent always selects an action that maximizes its (expected) performance measure, given the percept sequence it has received so far and the knowledge it possesses.
Bounded rationality

A rational agent that can use only bounded resources cannot exhibit optimal behaviour. A bounded rational agent does the best possible job of selecting good actions given its goal and given its bounded resources.
An AI system can be defined as the study of the rational agent and its environment. An agent can be:
▶ Human agent: A human agent has eyes, ears, and other organs that serve as sensors, and hands, legs, and the vocal tract that serve as actuators.
▶ Robotic agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.
▶ Software agent: A software agent can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
Intelligent Agents

An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve its goals. An intelligent agent may learn from the environment in order to achieve those goals. A thermostat is an example of an intelligent agent.
The following are the four main rules for an AI agent:
▶ Rule 1: An AI agent must have the ability to perceive the environment.
▶ Rule 2: The observations must be used to make decisions.
▶ Rule 3: A decision should result in an action.
▶ Rule 4: The action taken by an AI agent must be a rational action.
Representation of an AI agent in the PEAS model

▶ When we define an AI agent or rational agent, we can group its properties under the PEAS representation model.
▶ P: Performance measure, E: Environment, A: Actuators, S: Sensors.
▶ PEAS for a self-driving car: for a self-driving car, the PEAS representation is:
▶ Performance: Safety, time, legal drive, comfort
▶ Environment: Roads, other vehicles, road signs, pedestrians
▶ Actuators: Steering, accelerator, brake, signal, horn.
▶ Sensors: Camera, GPS, speedometer, odometer,
accelerometer, sonar.
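The PEAS grouping can also be written down directly as a small data structure. Below is a minimal sketch for the self-driving car above; the plain-dictionary layout is only an illustrative choice, not a standard representation.

```python
# PEAS properties of the self-driving car above, grouped in a dictionary.
# The structure and key names are assumptions made for this sketch.
self_driving_car_peas = {
    "Performance": ["safety", "time", "legal drive", "comfort"],
    "Environment": ["roads", "other vehicles", "road signs", "pedestrians"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["camera", "GPS", "speedometer", "odometer",
                    "accelerometer", "sonar"],
}
```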
Example of Agents with their PEAS representation
Kinds of agent programs

▶ Simple reflex agents
▶ Model-based reflex agents
▶ Goal-based agents
▶ Utility-based agents
▶ Learning agents
What is a state?

▶ All information about the environment.
▶ All information necessary to make a decision for the task at hand.
Agent’s Knowledge Representation
Illustration with Vacuum World
Goal-based agent

▶ Knowledge of the current state alone may not be sufficient for deciding what to do next.
▶ For example, at a road junction, the taxi can turn left, turn
right, or go straight on. The correct decision depends on
where the taxi is trying to get to.
▶ The agent needs some sort of goal information that describes
situations that are desirable.
Problem-Solving agents

▶ Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
▶ For search problems we study one kind of goal-based agent, called a problem-solving agent.
▶ Problem-solving agents use atomic representations, i.e., states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
Problem Formulation

▶ Problem formulation is the process of deciding what actions and states to consider, given a goal.
▶ Example: suppose the agent considers actions at the level of driving from one major town to another. Each state then corresponds to being in a particular town.
Well-defined problems

A problem can be defined formally by five components:

▶ Initial state: The state in which the agent starts.
▶ Actions: A description of the possible actions available to the
agent. Given a particular state s, ACTIONS(s) returns the set
of actions that can be executed in s.
▶ Transition model: A description of what each action does.
▶ The goal test: which determines whether a given state is a
goal state. Sometimes there is an explicit set of possible goal
states, and the test simply checks whether the given state is
one of them.
▶ Path cost: A path cost function assigns a numeric cost to
each path. The problem-solving agent chooses a cost function
that reflects its own performance measure.
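These five components can be captured as a small program interface. The following is a minimal sketch, with class and method names chosen for illustration rather than taken from any particular library.

```python
class Problem:
    """Abstract search problem: a concrete problem overrides the five components."""

    def __init__(self, initial_state, goal_states=None):
        self.initial_state = initial_state          # initial state
        self.goal_states = set(goal_states or [])   # explicit goal states, if known

    def actions(self, state):
        """ACTIONS(s): the set of actions executable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Goal test: is `state` a goal state?"""
        return state in self.goal_states

    def step_cost(self, state, action, next_state):
        """One-step cost; the path cost is the sum of step costs along a path."""
        return 1
```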
State Space

▶ Together, the initial state, actions, and transition model implicitly define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions.
▶ The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions; e.g., the map of Romania can be interpreted as a state-space graph.
▶ A path in the state space is a sequence of states connected by a sequence of actions.
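A short sketch of how these three components implicitly define the state space: breadth-first exploration from the initial state enumerates every reachable state. Here `actions` and `result` are assumed to be the callables of some concrete problem formulation.

```python
from collections import deque

def reachable_states(initial_state, actions, result):
    """Return the set of all states reachable from `initial_state`."""
    explored = {initial_state}
    frontier = deque([initial_state])
    while frontier:
        s = frontier.popleft()
        for a in actions(s):        # every action applicable in s
            s2 = result(s, a)       # the state that action leads to
            if s2 not in explored:
                explored.add(s2)
                frontier.append(s2)
    return explored
```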
Toy problems: Vacuum World
▶ States: The state is determined by both the agent location
and the dirt locations. The agent is in one of two locations,
each of which might or might not contain dirt. Thus, there
are 2 × 2² = 8 possible world states. A larger environment
with n locations has n × 2ⁿ states.
▶ Initial state: Any state.
▶ Actions: Left, Right, and Suck. Larger environments might
also include Up and Down.
▶ Transition model: The actions have their expected effects,
except that moving Left in the leftmost square, moving Right
in the rightmost square, and Sucking in a clean square have
no effect. The complete state space is shown on the next slide.
▶ Goal test: This checks whether all the squares are clean.
▶ Path cost: Each step costs 1, so the path cost is the number
of steps in the path.
The state space for the vacuum world.
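As a concrete illustration, here is a small sketch of the vacuum-world formulation above in Python. Encoding a state as (agent location, set of dirty squares) is an assumption made for this sketch.

```python
LOCATIONS = ("A", "B")           # the two squares of the vacuum world

def actions(state):
    return ("Left", "Right", "Suck")

def result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)        # moving Left in the leftmost square has no effect
    if action == "Right":
        return ("B", dirt)        # moving Right in the rightmost square has no effect
    if action == "Suck":
        return (loc, dirt - {loc})  # sucking in a clean square has no effect
    return state

def goal_test(state):
    return not state[1]           # goal: all squares are clean

# The 2 x 2^2 = 8 world states:
all_states = [(loc, frozenset(d))
              for loc in LOCATIONS
              for d in ((), ("A",), ("B",), ("A", "B"))]
assert len(all_states) == 8
```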
8 queens problem formulation 1

▶ States: Any arrangement of 0 to 8 queens on the board.
▶ Initial state: No queens on the board.
▶ Successor function: Add a queen to any empty square.
▶ Goal test: 8 queens on the board, none attacked.
8 queens problem formulation 1

▶ The initial state has 64 successors. Each of the states at the next level has 63 successors, and so on.
▶ We can restrict the search tree somewhat by considering only those successors in which no queen attacks another. To do that we have to check the new queen against all existing queens on the board. The solutions are found at a depth of 8.
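A sketch of this restricted incremental formulation follows. Representing a state as a tuple of occupied (row, column) squares is an assumption made for illustration.

```python
N = 8

def attacks(q1, q2):
    """True if two queens share a row, a column, or a diagonal."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def successors(state):
    """States obtained by adding one queen that attacks no existing queen."""
    for r in range(N):
        for c in range(N):
            q = (r, c)
            if q not in state and all(not attacks(q, other) for other in state):
                yield tuple(sorted(state + (q,)))

def goal_test(state):
    return len(state) == N   # 8 queens placed, none attacking by construction
```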
8 queens problem formulation 2

▶ States: Any arrangement of 8 queens on the board.
▶ Initial state: All queens are in column 1.
▶ Successor function: Change the position of any one queen.
▶ Goal test: 8 queens on the board, none attacked.
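For comparison, a sketch of this complete-state formulation. Representing each queen by a (row, column) pair and reading "change the position" as moving a queen to any empty square are assumptions made for illustration.

```python
N = 8

# All 8 queens start in column 1 (index 0), one per row.
initial_state = tuple((row, 0) for row in range(N))

def successors(state):
    """States obtained by moving exactly one queen to another empty square."""
    occupied = set(state)
    for i in range(len(state)):
        for r in range(N):
            for c in range(N):
                if (r, c) not in occupied:
                    yield state[:i] + ((r, c),) + state[i + 1:]
```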
8-Puzzle problem

The 8-puzzle consists of a 3×3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The objective is to reach a specified goal state, such as the one shown on the right of the figure.
8-Puzzle problem: State space formulation

▶ States: A state specifies the location of each of the eight tiles and the blank on the board.
▶ Operators/Actions: The blank moves left, right, up, or down.
▶ Goal test: The current state matches a specified goal state (e.g., one of the ones shown on the previous slide).
▶ Path cost: Each move of the blank costs 1.
8-Puzzle Problem
A small portion of the state space of the 8-puzzle is shown below. Note that we do not need to generate all the states before the search begins; the states can be generated when required, as in the sketch below.
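A minimal sketch of generating 8-puzzle successors on demand. Encoding a state as a 9-tuple read row by row, with 0 standing for the blank, is an assumption made for this sketch.

```python
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}   # index offsets for the blank

def successors(state):
    """Yield (action, next_state) pairs for every legal move of the blank."""
    b = state.index(0)              # position of the blank
    row, col = divmod(b, 3)
    for action, delta in MOVES.items():
        if (action == "Up" and row == 0) or (action == "Down" and row == 2) \
           or (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue                # move would leave the board
        t = b + delta
        s = list(state)
        s[b], s[t] = s[t], s[b]     # slide the adjacent tile into the blank
        yield action, tuple(s)

# Example: list(successors((1, 2, 3, 4, 5, 6, 7, 8, 0))) yields the two
# states reachable from this configuration by moving the blank Up or Left.
```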
Water Jug problem

You have two jugs, of capacity 4 gallons and 3 gallons, and a pump to fill the jugs. You need to measure out exactly 2 gallons.
▶ Initial state: Both jugs are empty.
▶ Goal test: 2 gallons of water in the 4-gallon jug.
▶ Successor function:
* Applying the action transfer to jugs i and j, with capacities Ci and Cj and containing Gi and Gj gallons of water respectively, leaves jug i with max(0, Gi - (Cj - Gj)) gallons of water and jug j with min(Cj, Gi + Gj) gallons of water.
* Applying the action fill to jug i leaves it with Ci gallons of water.
▶ Cost function: Charge one point for each gallon of water transferred and each gallon of water filled.
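A direct sketch of this successor function in code. Encoding a state as the pair (gallons in the 4-gallon jug, gallons in the 3-gallon jug) is an assumption made for illustration.

```python
CAP = (4, 3)   # capacities C0 (4-gallon jug) and C1 (3-gallon jug)

def fill(state, i):
    """Fill jug i from the pump: it now holds Ci gallons."""
    g = list(state)
    g[i] = CAP[i]
    return tuple(g)

def transfer(state, i, j):
    """Pour jug i into jug j until i is empty or j is full."""
    g = list(state)
    gi, gj = state[i], state[j]
    g[i] = max(0, gi - (CAP[j] - gj))   # whatever does not fit stays in jug i
    g[j] = min(CAP[j], gi + gj)         # jug j holds at most Cj gallons
    return tuple(g)

# Example: starting from empty jugs, fill the 3-gallon jug and pour it
# into the 4-gallon jug:
#   transfer(fill((0, 0), 1), 1, 0) == (3, 0)
```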
Exercise

Can you propose a formulation of the problem of getting to Bucharest from the city of Arad in Romania in terms of the initial state, actions, transition model, goal test, and path cost?
References
Text books:
1. Machine Learning, Tom M. Mitchell, McGraw Hill Education, 2017.
2. Artificial Intelligence, Elaine Rich and Kevin Knight, McGraw Hill, 2017.
3. Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, The MIT Press, 2016.
Reference books:
1. Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig, Pearson Education, 2010.
2. Machine Learning For Dummies, John Paul Mueller and Luca Massaron, For Dummies, 2016.
3. Deep Learning: Foundations and Concepts, Christopher M. Bishop and Hugh Bishop, Springer.
4. Artificial Intelligence & Generative AI for Beginners: The Complete Guide, David M. Patel, independently published, 2023.
The End
