Module 2 (3)

The document discusses problem-solving agents in AI, focusing on goal-based and planning agents that utilize search algorithms to find solutions. It outlines various problem-solving strategies, including uninformed search methods like Breadth-First Search and Depth-First Search, as well as real-world applications such as route finding and the Traveling Salesperson Problem. Additionally, it emphasizes the importance of defining problems, formulating goals, and evaluating algorithm performance based on completeness, optimality, and complexity.

Uploaded by

Amruth Vaidya

Module 2

SOLVING PROBLEMS BY SEARCHING


• Chapter 2 described reflex agents, which base their actions on a direct
mapping from states to actions; such agents do not operate well in
complex environments.
• Goal-based agents, on the other hand, consider future actions and
the desirability of their outcomes.
• One kind of goal-based agent is the problem-solving agent.
• Problem-solving agents use atomic representations: states of the
world are considered as wholes, with no internal structure visible to
the problem-solving algorithms.
Goal-based agent or problem-solving agent
• Problem solving begins with precise definitions of problems and their
solutions.
• We then describe several general-purpose search algorithms that can
be used to solve these problems.
• Uninformed search algorithms are given no information about the
problem other than its definition.
• Informed search algorithms can do quite well given some guidance on
where to look for solutions.
• Goal-based agents that use more advanced factored or structured
representations are usually called planning agents.
Problem Solving Definition
• Problem solving is the process of generating solutions from observed
data.
• A problem is characterized by a set of goals, a set of objects, and a set
of operations.
• Solving a problem through AI involves:
• Defining the search space
• Deciding the start and goal states
• Finding a path from the start state to the goal state through the search space.
• A problem space can have one or more solutions.
• A solution is a combination of objects and operations that achieves the
goals.
Problem Solving Agent
• Used to find sequence of actions that achieve goals.
• Graph example

• The process of looking for a sequence of actions that reaches the goal
is called search.
• A search algorithm takes a problem as input and returns a solution in
the form of an action sequence.
• Once a solution is found, the actions it recommends can be carried
out. This is called the execution phase.
• Thus, we have a simple “formulate, search, execute” design for the
agent
• In order for an agent to solve a problem, it should go through two phases
of formulation:
• Goal formulation, based on the current situation and the agent's performance
measure, is the first step in problem solving.
• Problem formulation is the process of deciding what actions and states to
consider, given a goal.
3.1 PROBLEM-SOLVING AGENTS
3.1.1 Well-defined problems and solutions
• A problem can be defined formally by five components:
• The initial state that the agent starts in. For example, the initial state
for our agent in Romania might be described as In(Arad).
• A description of the possible actions available to the agent: given a
particular state s, ACTIONS(s) returns the set of actions that can be executed in s.
• A description of what each action does; the formal name for this is the
transition model, specified by a function RESULT(s, a) that returns the
state that results from doing action a in state s. For example:
• RESULT(In(Arad), Go(Zerind)) = In(Zerind).
• The goal test, which determines whether a given state is a goal state.
• A path cost function that assigns a numeric cost to each path.
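The five components above can be sketched as a small Python class. This is a minimal illustrative sketch, not code from any textbook or library; the class name `RouteProblem`, the `Go(...)` action strings, and the three-city road map are assumptions made for the example.

```python
# A minimal sketch of the five problem components, using a tiny fragment of
# the Romania map. All names here (RouteProblem, Go(...) strings, the roads
# dict) are illustrative assumptions, not a standard API.

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial = initial      # the initial state, e.g. "Arad"
        self.goal = goal            # used by the goal test
        self.roads = roads          # {city: {neighbour: step cost}}

    def actions(self, state):
        """ACTIONS(s): the set of actions executable in state s."""
        return [f"Go({city})" for city in self.roads[state]]

    def result(self, state, action):
        """RESULT(s, a): the transition model."""
        return action[3:-1]         # "Go(Zerind)" -> "Zerind"

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return self.roads[state][self.result(state, action)]

roads = {"Arad": {"Zerind": 75, "Sibiu": 140},
         "Zerind": {"Arad": 75},
         "Sibiu": {"Arad": 140}}
problem = RouteProblem("Arad", "Zerind", roads)
print(problem.result("Arad", "Go(Zerind)"))   # Zerind
```

The path cost of a path is then the sum of the step costs along it.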
Agent Assumptions
• Environment is observable
• Environment is discrete
• Environment is known
• Environment is deterministic

• Under these assumptions, the solution to any problem is a fixed
sequence of actions.
• A search algorithm takes the problem as input and provides an action
sequence as the solution.
• Formulate-Search-Execute strategy.
• ABSTRACTION: The process of removing detail from a representation is
called abstraction.
3.2 EXAMPLE PROBLEMS
• The problem-solving approach has been applied to a vast array of
task environments.
• A toy problem is intended to illustrate or exercise various
problem-solving methods.
• Used to compare the performance of algorithms.
• A real-world problem is one whose solutions people actually care
about.
Toy Problem (Vacuum World – standard formulation)

8-puzzle Problem
• Consists of a 3 × 3 board with eight numbered tiles and a blank space.
• A tile adjacent to the blank space can slide into the space.

• The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test
problems for new search algorithms in AI.
Standard formulation
• States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can
be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right,Up, or Down. Different subsets of these are possible depending on where the
blank is.
• Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to start state in Figure 3.4, the resulting state has the 5 and the blank
switched.
• Goal test: This checks whether the state matches the goal configuration shown in Figure
3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
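The 8-puzzle formulation above can be sketched directly. The encoding below (a state as a 9-tuple read row by row, with 0 standing for the blank) is an assumption made for illustration; the moves follow the formulation in the text.

```python
# Sketch of the 8-puzzle actions and transition model. A state is a 9-tuple
# (row by row) with 0 for the blank; actions move the blank space.

MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def actions(state):
    i = state.index(0)                   # position of the blank
    acts = []
    if i % 3 > 0: acts.append("Left")    # blank not in the left column
    if i % 3 < 2: acts.append("Right")
    if i // 3 > 0: acts.append("Up")
    if i // 3 < 2: acts.append("Down")
    return acts

def result(state, action):
    """Return the state with the blank slid in the given direction."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]              # swap blank and adjacent tile
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)      # the start state of Figure 3.4
print(result(start, "Left"))             # the 5 and the blank are switched
```

As in the text, applying Left to the start state produces a state with the 5 and the blank switched.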
8-queens problem
• The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen attacks any
other.
• It remains a useful test problem for search algorithms.
• There are two main kinds of formulation.
1. An incremental formulation involves operators that augment the state
description, starting with an empty state; for the 8-queens problem, this means that
each action adds a queen to the state.
2. A complete-state formulation starts with all 8 queens on the board and moves
them around. In either case, the path cost is of no interest because only the final state
counts.

• The first incremental formulation one might try is the following:


• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.
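The goal test for this incremental formulation can be sketched as follows, representing a state as a tuple of queen columns, one per filled row (an encoding chosen here for illustration):

```python
# Goal test for the incremental 8-queens formulation. A state is a tuple of
# column indices, one per placed row, so two queens never share a row.

def attacks(r1, c1, r2, c2):
    """True if queens at (r1, c1) and (r2, c2) attack each other."""
    return c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(state):
    return len(state) == 8 and not any(
        attacks(r1, c1, r2, c2)
        for r1, c1 in enumerate(state)
        for r2, c2 in enumerate(state) if r1 < r2)

print(goal_test((0, 4, 7, 5, 2, 6, 1, 3)))   # True: a valid solution
print(goal_test((0, 1, 2, 3, 4, 5, 6, 7)))   # False: queens share diagonals
```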

Task - Toy problem devised by Donald Knuth


Real-world problems
• Route-finding algorithms – applications: websites, in-car navigation systems,
routing of video streams in computer networks, military operations planning,
and airline travel-planning systems.

The airline travel problems that must be solved by a travel-planning Website:


• States: Each state obviously includes a location (e.g., an airport) and the current time. Furthermore,
because the cost of an action (a flight segment) may depend on previous segments, their fare bases, and
their status as domestic or international, the state must record extra information about these "historical"
aspects.
• Initial state: This is specified by the user's query.
• Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving
enough time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight's destination as the current
location and the flight's arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures,
seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
Touring problems are closely related to route-finding problems
Consider, for example, the problem “Visit every city in Figure 3.2 at least once, starting and ending
in Bucharest.”
As with route finding, the actions correspond to trips between adjacent cities. The state space,
however, is quite different. Each state must include not just the current location but also the set of
cities the agent has visited. So the initial state would be In(Bucharest ), Visited({Bucharest}), a
typical intermediate state would be In(Vaslui ), Visited({Bucharest , Urziceni , Vaslui}), and the
goal test would check whether the agent is in Bucharest and all 20 cities have been visited.
TRAVELING SALESPERSON PROBLEM
• The traveling salesperson problem (TSP) is a touring problem in which each city must be visited exactly
once.
• The aim is to find the shortest tour.
• Used in planning movements of automatic circuit-board drills and of stocking machines on shop floors.
Robot navigation
• Robot navigation is a generalization of the route-finding problem described earlier.
• Rather than following a discrete set of routes, a robot can move in a continuous space with (in
principle) an infinite set of possible actions and states.
• For a circular robot moving on a flat surface, the space is essentially two-dimensional.
• When the robot has arms and legs or wheels that must also be controlled, the
search space becomes many-dimensional.
• Advanced techniques are required just to make the search space finite.
8 queens problem solution
3.3 SEARCHING FOR SOLUTIONS

• Having formulated some problems, we now need to solve them.
• A solution is an action sequence, so search algorithms work by considering
various possible action sequences.
• The possible action sequences starting at the initial state form a
search tree with the initial state at the root;
• The branches are actions and the nodes correspond to states in the
state space of the problem.
• The root node of the tree corresponds to the initial state, In(Arad).
• The first step is to test whether this is a goal state.
• Next, we expand the current state, thereby generating a new set
of states.
The first few steps in growing the search tree for finding a route from
Arad to Bucharest.
• In this case, we add three branches from the parent
node In(Arad) leading to three new child nodes:
In(Sibiu), In(Timisoara), and In(Zerind). Now we must
choose which of these three possibilities to consider
further.
• This is the essence of search—following up one option now
and putting the others aside for later, in case the first choice
does not lead to a solution.
• The set of all leaf nodes available for expansion at any given
point is called the frontier.
• A leaf node is a node with no children in the tree.
• Consider the paths Arad–Sibiu (140 km long) and
Arad–Zerind–Oradea–Sibiu (297 km long). Obviously, the
second path is redundant—it’s just a worse way to get to
the same state.
Redundancy
• We say that In(Arad) is a repeated state in the search tree, generated in
this case by a loopy path.
• Loopy paths are a special case of the more general
concept of redundant paths, which exist whenever
there is more than one way to get from one state to
another.
• In a rectangular grid, each state has four successors, so a search tree of
depth d that includes repeated states has 4^d leaves; yet there are only
about 2d^2 distinct states within d steps of any given state (for d = 20,
about a trillion nodes but only about 800 distinct states).
• Algorithms that forget their history are doomed to repeat it.
• The way to avoid exploring redundant paths is to
remember where one has been.
• This is done with a data structure called the explored set (also
known as the closed list), which remembers every expanded node;
augmenting tree search in this way yields the graph-search algorithm.
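A sketch of this idea – tree search augmented with an explored set – on the loopy Arad/Zerind/Oradea/Sibiu fragment discussed above (the adjacency dict is an illustrative assumption):

```python
# Sketch of graph search: tree search plus an explored set (closed list),
# so no state is expanded twice and loopy paths are pruned.

from collections import deque

def graph_search(start, goal, neighbours):
    frontier = deque([[start]])          # FIFO frontier of paths
    explored = set()                     # the explored set / closed list
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue                     # redundant path: already expanded
        explored.add(state)
        for nxt in neighbours[state]:
            frontier.append(path + [nxt])
    return None

neighbours = {"Arad": ["Sibiu", "Zerind"], "Zerind": ["Arad", "Oradea"],
              "Oradea": ["Zerind", "Sibiu"], "Sibiu": ["Arad", "Oradea"]}
print(graph_search("Arad", "Sibiu", neighbours))   # ['Arad', 'Sibiu']
```

Despite the loops (Arad-Zerind-Arad, and so on), the search terminates, because each city is expanded at most once.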
• The priority queue, which pops the element of the queue with the highest priority
according to some ordering function.
Measuring Problem Solving Performance

• Algorithm’s performance can be evaluated in 4 ways:


• Completeness
• Optimality
• Time Complexity
• Space Complexity
• For a state-space graph, complexity is expressed in terms of |V| + |E|, the
number of vertices and edges.
• For search trees, complexity is measured using:
• The branching factor b – the maximum number of successors of any node
• The depth d of the shallowest goal node
• Search cost, based on the time taken
• Total cost = search cost + path cost (e.g., Arad to Bucharest)
3.4 UNINFORMED SEARCH STRATEGIES
Breadth First Search

• Explores all the nodes at a given depth before proceeding to the next level
• Uses a queue (FIFO) to implement the frontier
• A blind (uninformed) technique
• A brute-force method
• Complete when the branching factor is finite
• Time complexity: O(b^d)
• The root node is expanded first
• then all the successors of the root node are expanded next, then their
successors, and so on.
• Achieved by using a FIFO queue for the frontier
• The total number of nodes generated is b + b^2 + b^3 + ··· + b^d = O(b^d)
• the memory requirements are a bigger problem for breadth-first search than is the execution time.
• If your problem has a solution at depth 16, then (given our assumptions) it will take about 350 years for
breadth-first search (or indeed any uninformed search) to find it.
Advantages of BFS
• Simplest strategy
• BFS is complete – If there is a solution, BFS is guaranteed to find it.
• If there are multiple solutions, then a minimal solution will be found.

Disadvantages of BFS
• The BFS cannot be effectively used unless the search space is quite
small.
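The BFS described above can be sketched in a few lines. The graph is an illustrative assumption; the FIFO frontier is what makes the search breadth-first.

```python
# A minimal breadth-first search: a FIFO frontier expands all nodes at one
# depth before any node at the next. Returns the path and expansion order.

from collections import deque

def bfs(start, goal, graph):
    frontier = deque([start])            # FIFO queue
    parent = {start: None}               # also records which states are reached
    order = []                           # expansion order, for illustration
    while frontier:
        node = frontier.popleft()        # shallowest unexpanded node
        order.append(node)
        if node == goal:
            path = []
            while node is not None:      # walk parents back to the root
                path.append(node)
                node = parent[node]
            return path[::-1], order
        for child in graph[node]:
            if child not in parent:      # not yet reached
                parent[child] = node
                frontier.append(child)
    return None, order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "G"], "D": ["G"], "G": []}
print(bfs("A", "G", graph))   # (['A', 'C', 'G'], ['A', 'B', 'C', 'D', 'G'])
```

Because the frontier is expanded level by level, the shallowest solution (A-C-G, two steps) is the one found.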
Solve BFS
Uniform Cost Search
• An uninformed search algorithm in AI
• UCS uses the lowest cumulative cost to find a path from the source
node to the goal node.
• Nodes are expanded, starting from the root, according to the
minimum cumulative cost.
• The UCS is implemented using a Priority Queue.
• Insert the root node into the priority queue.
• Remove the element with the highest priority (i.e., the lowest cumulative cost).
• If the removed node is the goal node, print the total cost and stop the
algorithm.
• Else, enqueue all the children of the current node into the priority
queue, with their cumulative cost from the root as the priority.
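These steps can be sketched with Python's heapq module as the priority queue (the weighted graph below is an illustrative assumption):

```python
# Sketch of uniform-cost search: a priority queue ordered by cumulative
# path cost; the cheapest frontier node is always expanded first.

import heapq

def ucs(start, goal, graph):
    frontier = [(0, start, [start])]     # (cumulative cost, node, path)
    best = {}                            # cheapest cost at which a node was expanded
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # lowest cumulative cost
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                     # a cheaper expansion already happened
        best[node] = cost
        for child, step in graph[node].items():
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 1, "G": 7}, "B": {"G": 2}, "G": {}}
print(ucs("S", "G", graph))   # (4, ['S', 'A', 'B', 'G'])
```

Note that the goal test is applied when a node is removed from the queue, not when it is generated; with non-negative step costs this is what makes UCS return the cheapest path.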
Depth First Search

• DFS always expands the deepest node in the current frontier of the
search tree
• Stack (LIFO)
• Deepest Node
• Incomplete – may fail to terminate if there is no solution or it enters an infinite loop
• If the initial state is a goal state, quit and return success.
• Otherwise, call DFS with E (a child state) as the initial state; if success is
returned, signal success. Otherwise continue in this loop.
• NODE LIST : A
• NODE LIST : B C
• The solution path A-B-C-D-G is returned and the algorithm
terminates.
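A sketch of DFS with an explicit stack; the graph below is an illustrative assumption, not the (unshown) figure that the NODE LIST trace above refers to.

```python
# Depth-first search with an explicit LIFO stack of paths: the most
# recently generated (deepest) node is always expanded next.

def dfs(start, goal, graph):
    stack = [[start]]                    # LIFO stack of paths
    while stack:
        path = stack.pop()               # deepest frontier node
        node = path[-1]
        if node == goal:
            return path
        for child in reversed(graph.get(node, [])):
            if child not in path:        # avoid loopy paths
                stack.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["G"]}
print(dfs("A", "G", graph))   # ['A', 'B', 'D', 'G']
```

`reversed` makes children pop in left-to-right order; only the nodes on the current path need to be stored, which is the source of DFS's small memory footprint.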
Advantages of DFS

• DFS requires less memory since only the nodes on the current path
are stored.
• The DFS may find a solution without examining much of the search
space at all.

Disadvantages of DFS
• May find a sub-optimal solution
• Incomplete: without a depth bound, it may not find a solution even
if one exists
• A variant of depth-first search called backtracking search uses
still less memory.
• In backtracking, only one successor is generated at a time rather
than all successors.
• each partially expanded node remembers which successor to
generate next.
• Backtracking also enables generating a successor by modifying the current
state description directly rather than copying it first.
• For problems with large state descriptions, such as robotic assembly, these
memory savings are critical to success.
Depth Limited Search

• The failure of depth-first search in infinite state spaces can be alleviated by
supplying a predetermined depth limit l.
• Nodes at depth l are treated as if they have no successors.
• This approach is called depth-limited search.
• The depth limit solves the infinite-path problem.
• On the map of Romania, for example, we would discover that any city can be
reached from any other city in at most 9 steps.
• This number, known as the diameter of the state space, gives us a better depth
limit, which leads to a more efficient depth-limited search.
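A recursive sketch of depth-limited search, distinguishing failure (no solution anywhere) from cutoff (the limit was hit, so a deeper solution may exist); the chain graph is an illustrative assumption.

```python
# Recursive depth-limited search: nodes at depth limit l are treated as if
# they had no successors. Returns a path, None (failure), or "cutoff".

def dls(node, goal, graph, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                  # limit reached: a deeper solution may exist
    cutoff = False
    for child in graph.get(node, []):
        res = dls(child, goal, graph, limit - 1)
        if res == "cutoff":
            cutoff = True
        elif res is not None:
            return [node] + res
    return "cutoff" if cutoff else None

graph = {"A": ["B"], "B": ["C"], "C": []}
print(dls("A", "C", graph, 1))   # cutoff
print(dls("A", "C", graph, 2))   # ['A', 'B', 'C']
```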
Iterative Deepening Depth First Search
• often used in combination with depth-first tree search, that finds the best depth limit.
• It does this by gradually increasing the limit—first 0, then 1, then 2, and so on—until a
goal is found.
• Iterative deepening combines the benefits of depth-first and breadth-first search.
• Like depth-first search, its memory requirements are modest: O(bd) to be precise.
• Like breadth-first search, it is complete when the branching factor is finite.
• In an iterative deepening search, the nodes on the bottom level (depth d) are generated
once, those on the next-to-bottom level are generated twice, and so on, up to the children
of the root, which are generated d times.
• So the total number of nodes generated in the worst case is
N(IDS) = (d)b + (d-1)b^2 + ··· + (1)b^d, which gives a time complexity of O(b^d).
• Iterative deepening search may seem wasteful because states are generated multiple
times, but it turns out not to be too costly, since most of the nodes are in the bottom level.
• In general, iterative deepening is the preferred uninformed search method when the search space is large and
the depth of the solution is not known.
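A sketch of iterative deepening as described above: repeated depth-limited searches with limits 0, 1, 2, and so on (the graph and the max_depth cap are illustrative assumptions).

```python
# Iterative deepening: depth-limited DFS with limits 0, 1, 2, ... so the
# shallowest solution is found with DFS-like O(bd) memory.

def dls(node, goal, graph, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        res = dls(child, goal, graph, limit - 1)
        if res is not None:
            return [node] + res
    return None

def iddfs(start, goal, graph, max_depth=50):
    for limit in range(max_depth + 1):   # gradually increase the limit
        res = dls(start, goal, graph, limit)
        if res is not None:
            return res
    return None

graph = {"A": ["B", "C"], "B": ["G"], "C": ["G"], "G": []}
print(iddfs("A", "G", graph))   # ['A', 'B', 'G']
```

The upper levels are regenerated on every iteration, but since most nodes sit at the bottom level the overhead is small.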
3.4.6 Bidirectional search

• Runs two simultaneous searches – one forward from the initial state and one
backward from the goal – stopping when the two searches meet in the middle.
• Time complexity: O(b^(d/2)), since b^(d/2) + b^(d/2) is much less than b^d.
• Complete and optimal when both halves use breadth-first search (with uniform
step costs), which is why BFS is the usual choice for each direction.
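A sketch of bidirectional breadth-first search on an undirected graph, returning the length of the shortest path; the alternating one-node-at-a-time expansion and the five-node chain graph are illustrative assumptions.

```python
# Sketch of bidirectional BFS: two frontiers, one from each end, expanded
# alternately; the search stops when a popped node has been reached by both.

from collections import deque

def bidirectional(start, goal, graph):
    df, db = {start: 0}, {goal: 0}       # distances from start and from goal
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, dist, other in ((qf, df, db), (qb, db, df)):
            node = q.popleft()
            if node in other:            # the two searches have met
                return dist[node] + other[node]
            for nb in graph[node]:
                if nb not in dist:
                    dist[nb] = dist[node] + 1
                    q.append(nb)
    return None                          # the frontiers never met

chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
print(bidirectional("A", "E", chain))   # 4
```

Each half only needs to reach depth d/2, which is the source of the b^(d/2) saving.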
3.4.7 Comparing uninformed search strategies
