Lec 4_ Problem Solving Agent 3

The document discusses problem-solving agents, outlining the key components of problem formulation, including goal and problem formulation, search algorithms, and execution. It describes the process of abstraction, state space, actions, and transition models, as well as various types of problems such as single-state, multi-state, contingency, and exploration problems. The document emphasizes the importance of search strategies and performance criteria in solving problems effectively.

Uploaded by Dilkhosh Saadon

10/11/22

Problem Solving Agent
Lecture 4: Problem Formulation and Search

What is a Problem?
- A problem is a collection of information that the agent uses to decide what to do.
- The basic elements of a problem definition are STATEs and ACTIONs.
  - An agent can look ahead to find a sequence of actions that will eventually achieve its goal.


Problem Solving Agent

- It is a goal-based agent.
- Agents use an atomic representation.
- The agent needs four steps in problem solving:
  1. Formulate Goal
  2. Formulate Problem (States, Actions)
  3. Search Algorithm
  4. Execute.

[Figure: the agent connected to its environment through sensors and actuators]

(1) Goal and (2) Problem Formulation

1. Goal Formulation:
   - Declaring the goal.
   - Limits the objectives the agent is trying to achieve.
   - A goal can be defined as a set of world states.
2. Problem Formulation:
   - The process of deciding what actions and states to consider, given the goal.
   - It can change the size of the state space enormously.
   - It depends on how the agent is connected to its environment.


(3) Search
- Now, the agent should decide:
  - Which action to choose (with a map or without a map)?
  - What action should be chosen in a state with an unknown value?
- The process of looking for a sequence of state-action pairs is called Search.
- Search is the behavior an agent exhibits when it has insufficient knowledge to solve a problem.
- A search algorithm takes a problem as input and returns a solution in the form of an action sequence.

(4) Execute
- After finding a suitable action sequence, the actions can be carried out.
- So, to solve a problem: "Formulate → Search → Execute"
- Solution: a formula to be satisfied or a set of conditions to be achieved.
  - Unique solution? Several solutions?
  - Some solutions are better than others.
- Optimal solution: sometimes too hard to find.
- Approximate solution: the quality of the solution improves with time.


Problem Formulation: Main Procedures

- Abstraction: the process of removing details from a representation.
  - The abstraction is valid if we can expand any abstract solution into a solution in the more detailed world.
  - The abstraction is useful if carrying out each of the actions in the solution is easier than in the original problem.
- State: a set of real states.
- Action: a complex combination of real actions.
- Solution: a set of real paths that are solutions in the real world.

Problem Formulation Components

1. Initial state: the state the agent infers it is in at the beginning.
2. State space: the set of all possible states.
3. Actions: a description of the possible actions.
4. Transition model: a description of the outcome of each action.
5. Goal test: tests whether a state description matches a goal state.
6. Path: a sequence of actions leading from one state to another.
7. Path cost: a cost function g over paths, usually the sum of the costs of the actions along the path.
8. Solution: a path from the initial state to a goal state.
9. Search cost: the time and storage required to find a solution.
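The components above can be collected into one small class. The sketch below is illustrative, not from the lecture: the `Problem` name, the dictionary-based transition model, and the uniform step cost are all assumptions.

```python
# Minimal sketch of the problem-formulation components as a class.
# The concrete state/action types are placeholders (assumptions).

class Problem:
    def __init__(self, initial_state, goal_states, transitions, step_cost=1):
        self.initial_state = initial_state      # 1. initial state
        self.goal_states = set(goal_states)     # used by the goal test
        self.transitions = transitions          # 4. transition model: {(state, action): next_state}
        self.step_cost = step_cost              # uniform cost per action (assumed)

    def actions(self, state):
        # 3. the actions applicable in a given state
        return [a for (s, a) in self.transitions if s == state]

    def result(self, state, action):
        # 4. the outcome of applying an action in a state
        return self.transitions[(state, action)]

    def goal_test(self, state):
        # 5. does the state match a goal state?
        return state in self.goal_states

    def path_cost(self, path):
        # 7. cost function g: here, the sum of uniform action costs
        return self.step_cost * len(path)
```

A search algorithm can then work against this interface without knowing anything about the domain.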


Example 1: Vacuum Cleaner

- States: <robot location, cell A is dirty, cell B is dirty>
- Consider the initial state to be 2.
- Actions: Left, Right, Suck
- Transition model:
  - After Left = {1, 3, 5, 7}
  - After Suck = {5, 7}
  - After Right = {6, 8}
  - After Suck = {8}
- Goal: both A and B are clean.
- Path cost: 1 per action.
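A minimal sketch of this vacuum world, assuming a state is encoded directly as the triple <location, A dirty?, B dirty?> (the concrete representation is an assumption; the slide only lists numbered states):

```python
# Two-cell vacuum world: a state is (robot location, A dirty?, B dirty?).

def result(state, action):
    """Transition model: the state that follows an action."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        # Suck cleans the cell the robot currently occupies
        if loc == "A":
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    raise ValueError("unknown action: %r" % action)

def goal_test(state):
    # goal: both A and B are clean
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b
```

Starting from state 2 (robot in B, both cells dirty), the sequence Suck, Left, Suck reaches the goal, for a path cost of 3 at 1 per action.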

Example 3: Airline Travel Planning

- States: each state includes a location and the current time.
- Initial state: specified by the user's query.
- Actions:
  - Take any flight from the current location,
  - in any seat class (first class or economy),
  - leaving after the current time,
  - leaving enough time for within-airport transfer.
- Transition model: the state resulting from taking a flight has:
  - the flight's destination as the current location,
  - the flight's arrival time as the current time.
- Goal test: are we at the final destination specified by the user?
- Path cost: depends on financial cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, and so on.
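The actions and transition model above can be sketched as follows. The `Flight` fields, the minute-based clock, and the 45-minute transfer margin are illustrative assumptions, not part of the slide:

```python
# Sketch of the airline-travel transition model described above.
from collections import namedtuple

# All times in minutes since midnight (an assumed convention).
Flight = namedtuple("Flight", "origin destination depart arrive")

TRANSFER = 45  # assumed minimum within-airport transfer time

def applicable(state, flight):
    """Action precondition: the flight leaves from the current location,
    after the current time, with enough time for the transfer."""
    loc, now = state
    return flight.origin == loc and flight.depart >= now + TRANSFER

def result(state, flight):
    """Transition model: the flight's destination becomes the current
    location, and its arrival time becomes the current time."""
    return (flight.destination, flight.arrive)
```

The goal test would simply compare the location component against the user's final destination.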


Search Trees
- Search function: returns a solution or failure.
- Expanding: applying operators (actions) to the current state.
- Search strategy: determines which state to expand first.
  - It defines the order of node expansion.
- Search algorithms build a search tree whose root corresponds to the initial state.
  - Nodes in the tree are expanded using operators (actions).
  - Some nodes may be visited more than once.
- The environment is static, discrete, observable, and deterministic.


Nodes and Fringe!
- A node is the computer's record of a state as it is encountered during a systematic search of the state space.
- A node in the search tree has several attributes:
  - a parent and a set of successor nodes,
  - a state,
  - the action that led to the state in the search (links),
  - the depth of the node in the search tree,
  - the path cost.
- Fringe: the set of search nodes that have not been expanded yet.
  - Implemented as a queue, FRINGE.
  - The ordering of the nodes in FRINGE defines the search strategy.
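A node with the attributes listed above, plus a fringe held in a queue, might look like this sketch. The FIFO `deque` is one choice of ordering (it yields breadth-first search); the slide deliberately leaves the ordering to the search strategy.

```python
from collections import deque

class Node:
    """One search-tree node: state, parent link, generating action,
    depth, and accumulated path cost."""
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent        # link back to the predecessor node
        self.action = action        # action that produced this state
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = (step_cost if parent is None
                          else parent.path_cost + step_cost)

# Build a tiny root/child pair (illustrative states "S" and "A").
root = Node("S")
child = Node("A", parent=root, action="go-A", step_cost=3)

# The fringe holds unexpanded nodes; its ordering is the strategy.
fringe = deque([root])
fringe.append(child)
first_out = fringe.popleft()        # FIFO: the root comes out first
```

Swapping `popleft()` for `pop()` would turn the same structure into a LIFO stack, i.e. depth-first ordering.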


Search Tree

[Figure: an example search tree]

A Search Tree
- The basic idea: build a search tree that systematically explores the state space graph:
  1. Begin by creating a root node at the start state.
  2. Expand this node by finding all the states we can reach from it, adding these as children of the root node.
  3. The previous step creates a fringe of new nodes. Now expand each of these new nodes in turn.
  4. Each time a node is expanded, the new nodes are added to the fringe.
  5. Before expanding a node, test whether it is the goal state.


Search Strategy (Algorithm) Performance

- Important criteria in choosing a search strategy:
  - Completeness: does it always find a solution if one exists?
  - Cost optimality: does it always find a least-cost solution?
  - Time complexity: the number of nodes generated.
  - Space complexity: the number of nodes that must be stored in memory.
- Time and space complexity are measured in terms of:
  - b: the maximum branching factor of the search tree,
  - d: the depth of the least-cost solution,
  - m: the maximum depth of the state space.
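As a rough illustration of why b and d dominate: a uniform tree with branching factor b has b^k nodes at depth k, so visiting every node down to the solution depth d touches 1 + b + b² + … + b^d nodes. A one-line sketch:

```python
def nodes_up_to_depth(b, d):
    """Number of nodes in a uniform tree with branching factor b,
    counting every level from the root (depth 0) down to depth d."""
    return sum(b**k for k in range(d + 1))
```

For b = 10 and d = 6 this is already 1,111,111 nodes, which is why both time and space complexity are usually quoted as O(b^d).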


As a Code!
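The code on this slide did not survive extraction. The sketch below is a generic tree search consistent with the preceding slides — nodes are taken from the fringe, goal-tested before expansion, and their children are appended; the FIFO ordering (breadth-first) is an assumption, since the original listing is unavailable:

```python
from collections import deque

def tree_search(initial_state, goal_test, successors):
    """Generic tree search: returns an action sequence, or None on failure.
    `successors(state)` yields (action, next_state) pairs."""
    fringe = deque([(initial_state, [])])   # (state, actions taken so far)
    while fringe:
        state, path = fringe.popleft()      # FIFO fringe -> breadth-first
        if goal_test(state):                # test before expanding
            return path                     # solution: an action sequence
        for action, next_state in successors(state):
            fringe.append((next_state, path + [action]))
    return None                             # fringe exhausted: failure
```

Note that, as the "Search Trees" slide warns, some states may be visited more than once; avoiding that requires keeping a set of already-reached states (graph search), which this sketch does not do.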


Types of Problems
- There are four types of problems:
  - Single-state problems
  - Multi-state problems
  - Contingency problems
  - Exploration problems

Single-State Problems
- The agent knows exactly which state it is in.
- State information may be received from sensors or through a complicated process of keeping track of the world.
- The initial state of the agent must be known.
- The agent must know the exact effect of its actions.
- The environment is deterministic, static, discrete, and fully observable.


Multi-State Problems
- The agent cannot determine exactly which state it is in (the world is not fully accessible).
- The agent still knows all the effects of its actions.
- The agent does not know the initial state.
- In these problems, the agent must reason about the sets of states it might reach.
  - Is there a sequence of actions that leads to the goal?
- The environment is partially observable, deterministic, static, and discrete.


Multi-State Problems
- Actions: (Left, Suck, Right, Suck)
- Initial states = {1, 2, 3, 4, 5, 6, 7, 8}
- After Left = {1, 3, 5, 7}
- After Suck = {5, 7}
- After Right = {6, 8}
- After Suck = {8}
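The belief-state updates above can be reproduced in code. The numbering of the eight states below is an assumption (odd numbers put the robot in cell A, even numbers in cell B), but it is consistent with every set on the slide:

```python
# Belief-state vacuum world: a state is (robot location, A dirty?, B dirty?).
# Assumed numbering of the eight states, consistent with the slide's sets.
STATES = {
    1: ("A", True, True),   2: ("B", True, True),
    3: ("A", True, False),  4: ("B", True, False),
    5: ("A", False, True),  6: ("B", False, True),
    7: ("A", False, False), 8: ("B", False, False),
}
NUMBER = {v: k for k, v in STATES.items()}

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    # Suck: clean the cell the robot currently occupies
    return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)

def update(belief, action):
    """Apply the action to every state the agent might be in."""
    return {NUMBER[result(STATES[s], action)] for s in belief}

belief = set(STATES)    # initial belief: the agent could be in any of 1..8
for action in ["Left", "Suck", "Right", "Suck"]:
    belief = update(belief, action)
```

After the full sequence the belief collapses to {8}: both cells clean, robot in B — a solution despite never knowing the initial state.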


Contingency Problems
- The effect of an action is not clear: the agent does not know what effect its actions will have.
- These usually arise in non-deterministic and partially observable environments.
- It is impossible to define in advance a complete sequence of actions that constitutes a solution, because information about the intermediate states is unknown.


Exploration Problems
- The agent has no information about the effects of its actions.
  - Like a newborn baby.
- The agent must experiment to discover what its actions do.
- Usually, the agent cannot be simulated.
- Example: being lost in a new country without a map!

