
AIML

Chapter-2: AI PROBLEMS AND SEARCH


Outline
 Defining Problems As A State Space Search
 Production System
 Production Characteristics
 Issues in the Design of Search Programs
 Generate And Test
 Hill Climbing & Steepest Hill Climbing
 Best-First Search
 Problem Reduction
 Constraint Satisfaction
 Iterative Deepening Search
 Means-Ends Analysis
Define Problem & Problem Solving
 Four steps to build a system to solve a particular problem

1. Define the problem: this includes a precise specification of the initial state as well as the final (goal) state.

2. Analyze the problem: examine the problem in light of the available techniques in order to decide how to solve it.

3. Identify and represent the task knowledge that is necessary to solve the problem.

4. Choose the best problem-solving technique and apply it to the particular problem.
State Representation & Define Problem
 A state represents the status of the problem at a given point in time.

 Define a state space that contains the possible states of the problem.

 Define the initial state from which the solving process may start.

 Define the goal state.

 Specify the set of rules that describe the actions (operators) available.

 Ex: Chess Game & Water Jug Problem


Water Jug Problem
 Problem Definition: You are given two jugs, a 4-gallon one and a 3-gallon one. There
is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of
water into the 4-gallon jug?

 State Space: the set of ordered pairs of integers (x, y), where x represents the
amount of water in the 4-gallon jug and y represents the amount of water in the 3-gallon jug.
 Initial state is (0, 0).
 Goal state is (2, n) for any value of n.
Cont.
 Production Rules
Rule  Current State                    Next State        Description
1     (x, y) if x < 4                  (4, y)            Fill the 4-gallon jug
2     (x, y) if y < 3                  (x, 3)            Fill the 3-gallon jug
3     (x, y) if x > 0                  (x - d, y)        Pour some water out of the 4-gallon jug
4     (x, y) if y > 0                  (x, y - d)        Pour some water out of the 3-gallon jug
5     (x, y) if x > 0                  (0, y)            Empty the 4-gallon jug on the ground
6     (x, y) if y > 0                  (x, 0)            Empty the 3-gallon jug on the ground
7     (x, y) if x + y >= 4 and y > 0   (4, y - (4 - x))  Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
8     (x, y) if x + y >= 3 and x > 0   (x - (3 - y), 3)  Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
9     (x, y) if x + y <= 4 and y > 0   (x + y, 0)        Pour all the water from the 3-gallon jug into the 4-gallon jug
10    (x, y) if x + y <= 3 and x > 0   (0, x + y)        Pour all the water from the 4-gallon jug into the 3-gallon jug
11    (0, 2)                           (2, 0)            Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12    (2, y)                           (0, y)            Empty the 2 gallons in the 4-gallon jug on the ground
Water Jug – Solution
Gallons in the 4-gallon jug   Gallons in the 3-gallon jug   Rule applied

0                             0                             -
0                             3                             2
3                             0                             9
3                             3                             2
4                             2                             7
0                             2                             5 or 12
2                             0                             9 or 11
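The production-rule formulation above can be explored mechanically with an uninformed search. Below is a minimal Python sketch (not from the slides) that finds a shortest sequence of states with breadth-first search; it collapses the fill/empty/pour rules into one successor function rather than listing all twelve rules separately, and the function and variable names are illustrative.

```python
from collections import deque

def water_jug_bfs(capacity_x=4, capacity_y=3, goal_x=2):
    """Breadth-first search over (x, y) states of the two jugs."""
    start = (0, 0)

    def successors(state):
        # Each successor mirrors one of the production rules from the table above.
        x, y = state
        states = set()
        states.add((capacity_x, y))            # fill the 4-gallon jug
        states.add((x, capacity_y))            # fill the 3-gallon jug
        states.add((0, y))                     # empty the 4-gallon jug
        states.add((x, 0))                     # empty the 3-gallon jug
        pour_to_x = min(y, capacity_x - x)     # pour 3-gallon into 4-gallon
        states.add((x + pour_to_x, y - pour_to_x))
        pour_to_y = min(x, capacity_y - y)     # pour 4-gallon into 3-gallon
        states.add((x - pour_to_y, y + pour_to_y))
        states.discard(state)
        return states

    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if x == goal_x:                        # goal: 2 gallons in the 4-gallon jug
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

if __name__ == "__main__":
    # One shortest solution, e.g. [(0,0), (0,3), (3,0), (3,3), (4,2), (0,2), (2,0)]
    print(water_jug_bfs())
```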
8 Puzzle Problem
 Definition: The 8-puzzle problem consists of eight numbered movable tiles set in a
3×3 frame. One cell of the frame is always empty, making it possible to move an
adjacent numbered tile into the empty cell.
Production System
 A production system facilitates describing and performing the search process.
 A production system consists of:
 A set of rules, each consisting of a left side that determines the applicability of
the rule and a right side that describes the operation to be performed.
 One or more knowledge bases / databases that contain the appropriate information.
 A control strategy that specifies the order in which the rules will be compared
to the database and a way of resolving the conflicts that arise when several
rules match at once.
 A rule applier.
Production System Characteristics
 A monotonic production system is a production system in which the application of a rule never prevents the
later application of another rule that could also have been applied at the time the first rule was selected.

 A non-monotonic production system is one in which this property does not hold. Such a system can increase the
problem-solving efficiency of the machine by not keeping a record of the changes made in the previous
search process.

 A partially commutative production system is a production system with the property that if the
application of a particular sequence of rules transforms state X into state Y, then any allowable permutation of those
rules also transforms state X into state Y.

 A commutative production system is a production system that is both monotonic and partially
commutative.
Problem Characteristics
 To choose the appropriate method for a problem, it is necessary to analyze the problem
along several key dimensions.
1) Is the problem decomposable into a set of independent smaller or easier sub problems?

2) Can solution steps be ignored or at least undone if they prove unwise?

3) Is the problem’s universe predictable?

4) Is a good solution to a problem obvious without comparison to all other possible solutions?

5) Is the desired solution a state of the world or a path to a state?

6) Is a large amount of knowledge absolutely required to solve the problem, or is knowledge important only to constrain
the search?

7) Can a computer that is simply given the problem return the solution or will the solution of the problem require
interaction between the computer and a person?
Issues in the Design of Search Programs
 The direction in which to conduct the search (forward or backward reasoning): we can
search forward through the state space from the start state to the goal state, or we can
search backward from the goal.

 How to select applicable rules (matching): production systems typically spend
most of their time looking for rules to apply, so it is critical to have efficient
procedures for matching rules against states.

 How to represent each node of the search process (the knowledge
representation problem and the frame problem).
AI Agents and their types
 Agents can be grouped into five classes based on their degree of
perceived intelligence and capability. All these agents can
improve their performance and generate better actions over
time.
1) Simple Reflex Agent
2) Model-based reflex agent
3) Goal-based agents
4) Utility-based agent
5) Learning agent
1) Simple Reflex agent
• The Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history.
• These agents only succeed in the fully observable environment.
• The Simple reflex agent does not consider any part of the percept history during its
decision and action process.
• The Simple reflex agent works on the condition-action rule, which means it maps the
current state to an action. For example, a room-cleaner agent acts only if there is dirt in
the room.
• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They do not have knowledge of non-perceptual parts of the current state.
• Their condition-action rules are mostly too big to generate and to store.
• Not adaptive to changes in the environment.
Simple Reflex agent
2) Model-based reflex agent
• The Model-based agent can work in a partially observable environment, and
track the situation.
• A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the world," so it is
called a Model-based agent.
• Internal State: It is a representation of the current state based on percept
history.
• These agents have the model, "which is knowledge of the world" and based on
the model they perform actions.
• Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
Model-based reflex agent
3) Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having
the "goal" information.
• They choose an action, so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such consideration of different
scenarios is called searching and planning, and it makes an agent proactive.
Goal-based agents
4) Utility-based agents
• These agents are similar to the goal-based agent but provide an extra
component of utility measurement which makes them different by providing a
measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve
the goal.
• The Utility-based agent is useful when there are multiple possible alternatives,
and an agent has to choose in order to perform the best action.
• The utility function maps each state to a real number to check how efficiently
each action achieves the goals.
Utility-based agents
5) Learning Agents
• A learning agent in AI is the type of agent which can learn from its past experiences,
or it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt automatically
through learning.
• A learning agent has mainly four conceptual components, which are:
• Learning element: responsible for making improvements by learning from the
environment.
• Critic: the learning element takes feedback from the critic, which describes how
well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: This component is responsible for suggesting actions that
will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze performance, and look for new ways
to improve the performance.
Learning Agents
Search Strategies in AI
Search Algorithm Terminologies
Properties of Search Algorithms
Search Techniques
 Uninformed/Blind Search Control Strategy
 BFS(Breadth First Search)

 DFS(Depth First Search)

 DLS(Depth limited Search)

 Informed / Direct Search Control Strategy


 Best First Search

 Problem Decomposition

 A* , Generate & Test

 Means-Ends Analysis


Uninformed Search (Blind Search)
Informed Search
Difference between Informed and
Uninformed Search
BFS & DFS

BFS DFS
Breadth First Search
What is BFS?
Breadth-first search, as the name suggests, searches breadth-wise to find the
shortest path (in number of edges) in a graph. BFS is an uninformed but systematic search
algorithm that explores the state space level by level.
How BFS works
To implement this algorithm, we use a Queue data
structure, which follows the FIFO (first in, first out) discipline. In BFS, one vertex
is selected at a time; when it is visited and marked, its adjacent vertices are
visited and stored in the queue, and a vertex can be removed from the queue once
all of its adjacent vertices have been stored or explored.
In BFS we expand all possible moves from the current state before progressing to
the next level, prioritizing solutions at shallower depths.
Does BFS promise Optimal Results?
Because it proceeds level by level, BFS is complete and always finds a shallowest
goal state (provided the problem is solvable), but it can incur high time and space
complexity before reaching the goal.
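As a concrete illustration of the queue-based procedure described above, here is a minimal Python sketch of BFS on a small hypothetical adjacency-list graph; the graph and node names are made up for the example.

```python
from collections import deque

def bfs(graph, start, goal):
    """Level-by-level search using a FIFO queue; returns a shortest path in edges."""
    queue = deque([[start]])          # the frontier holds whole paths
    visited = {start}
    while queue:
        path = queue.popleft()        # FIFO: oldest (shallowest) path comes out first
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Hypothetical example graph
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G'], 'F': ['G']}
print(bfs(graph, 'A', 'G'))   # ['A', 'B', 'E', 'G']
```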
BFS Example: 8-puzzle problem
solving using BFS
BFS Example (For understanding)
Applications Of Breadth-First Search
Algorithm

 Crawlers in Search Engines: Breadth-First Search is one of the main


algorithms used for indexing web pages. The algorithm starts traversing
from the source page and follows all the links associated with the page.
Here each web page will be considered as a node in a graph.
 GPS Navigation systems: Breadth-First Search is one of the best
algorithms used to find neighboring locations by using the GPS system.
 Find the Shortest Path & Minimum Spanning Tree for an
unweighted graph: When it comes to an unweighted graph, calculating
the shortest path is quite simple since the idea behind shortest path is to
choose a path with the least number of edges. Breadth-First Search can
allow this by traversing a minimum number of nodes starting from the
source node. Similarly, for a spanning tree, we can use either of the two,
Breadth-First Search or Depth-first traversal methods to find a spanning
tree.
Applications Of Breadth-First
Search Algorithm
 Broadcasting: Networking makes use of what we call packets for
communication. These packets follow a traversal method to reach
various networking nodes. One of the most commonly used traversal
methods is Breadth-First Search. It is used as an algorithm
to communicate broadcast packets across all the nodes in a
network.
 Peer to Peer Networking: Breadth-First Search can be used as a
traversal method to find all the neighboring nodes in a Peer to Peer
Network. For example, BitTorrent uses Breadth-First Search for peer to
peer communication.
Depth First Search (DFS)
 What Is DFS:
 Depth-first search, as the name suggests, searches depth-wise down to
a branch's maximum depth. If the result is not found at the branch's
maximum depth, it backtracks and searches from another node in the same
way, and it continues like this until it reaches the
solution.
 How DFS works:
 DFS is an edge-based technique. It uses a Stack data
structure: to implement DFS we need a data structure that follows the
LIFO (last in, first out) discipline, and a stack is the natural choice.
DFS visits a vertex and pushes it onto the stack, and when a vertex has
no unvisited neighbours it pops back to the most recently visited vertex.
 Does DFS promise Optimal Results:
 DFS does not promise the optimal solution and may encounter infinite
loops in certain scenarios. DFS may swiftly traverse deep branches of
the search tree while potentially bypassing shorter paths.
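For comparison with the BFS sketch earlier, here is a minimal Python DFS using an explicit stack on the same hypothetical graph; it returns a valid path but, unlike BFS, not necessarily the shortest one.

```python
def dfs(graph, start, goal):
    """Depth-wise search using an explicit LIFO stack; backtracks when a branch dead-ends."""
    stack = [[start]]                 # LIFO: the most recently pushed path is explored first
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                stack.append(path + [neighbour])
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G'], 'F': ['G']}
print(dfs(graph, 'A', 'G'))   # a valid path, e.g. ['A', 'C', 'F', 'G'], not necessarily the shortest
```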
8-puzzle problem solving using DFS
DFS (Depth First Search) Example-1
(For understanding)
Applications of DFS
Depth First Search Implementation
BFS v/s DFS
BFS (Breadth First Search)                              DFS (Depth First Search)
Searching of nodes is done level-wise.                  Searching of nodes is done depth-wise.
It is implemented using a queue.                        It is implemented using a stack.
No backtracking is required.                            Backtracking is required.
It never goes into an infinite loop.                    It can get trapped in an infinite loop.
It guarantees the best (shallowest) solution.           It does not guarantee the best possible solution.
It requires more memory.                                It requires less memory.
Comparative Analysis of 8-puzzle
problem solved using BFS and DFS
 The results of solving the 8-puzzle problem using BFS and DFS show notable
differences.
 While both algorithms successfully reached the goal state, the iteration cost achieved
by BFS is 6 and the depth is 3, whereas using DFS the iteration cost is 10 and the depth is
11, which is significantly higher than BFS. This shows BFS's strength in finding
shorter solutions, as it systematically explores shallow paths first.
 DFS demonstrated a relatively shorter runtime as compared to BFS. The advantage in
the runtime must be weighed against the potential for DFS to produce suboptimal or
longer paths.
 There is a conceptual difference between BFS and DFS: BFS builds the tree level by level,
whereas DFS builds the tree sub-tree by sub-tree. BFS is more suitable for searching
vertices closer to the given source, whereas DFS is more suitable when there are
solutions far away from the source.
 BFS considers all neighbours first and is therefore less suitable for the decision trees
used in games or puzzles, whereas DFS is more suitable for game or puzzle problems:
we make a decision and then explore all paths through this decision, and if the decision
leads to a winning situation, we stop. BFS requires more memory than DFS, so its space
complexity is greater than that of DFS.
Limitations of DFS
 Depth first search is incomplete if there is an infinite branch in the search
tree.
 Infinite branches can happen if:
 paths contain loops
 infinite number of states and/or operators.
 For problems with infinite state spaces, several variants of depth-first
search have been developed: depth limited search, iterative deepening
search.
 Depth limited search expands the search tree depth-first up to a maximum
depth 𝑙.
 The nodes at depth 𝒍 are treated as if they had no successors.
 Iterative deepening (depth-first) search (IDDS) is a form of depth limited
search which progressively increases the bound.
 It first tries 𝑙 = 1, then 𝑙 = 2, then 𝑙 = 3, etc. until a solution is found at 𝑙 = 𝑑.
DLS (Depth Limited Search)

 Depth limited search is an uninformed search algorithm which is similar to


Depth First Search(DFS). It can be considered equivalent to DFS with a
predetermined depth limit 'l'.

 Depth limited search may be thought of as a solution to DFS's infinite path


problem; in the Depth limited search algorithm, DFS is run for a finite depth
'l', where 'l' is the depth limit.
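The following Python sketch combines the two ideas from this and the previous slide: a recursive depth-limited search that treats nodes at the depth limit as having no successors, wrapped in an iterative-deepening loop that raises the limit until the goal is found. The graph is the same hypothetical example used earlier; names and the default depth bound are illustrative.

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """Recursive DFS that treats nodes at depth `limit` as having no successors."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:                    # cutoff: do not expand any deeper
        return None
    for neighbour in graph.get(node, []):
        if neighbour not in path:     # avoid cycling back along the current path
            result = depth_limited_search(graph, neighbour, goal, limit - 1, path)
            if result is not None:
                return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run DLS with l = 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G'], 'F': ['G']}
print(iddfs(graph, 'A', 'G'))   # ['A', 'B', 'E', 'G'] -- found at the shallowest depth, like BFS
```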
DLS: Depth Limited Search Example
IDDFS
IDDFS Example-1
Example
Pros and Cons of IDDFS
Uniform cost search
Example for Uniform cost search
Uniform cost search
Heuristic Search Technique
 Generate and test
 Hill climbing and Steepest-Ascent Hill Climbing
 Best First Search
 A* & AO*
 Constraint Satisfaction
 Means-Ends Analysis
Heuristic Function

 The purpose of a heuristic function is to guide the search process along the
most profitable path among all those that are available.

 Most Promising path: 1-->2-->5

[Diagram: a small search tree with edge costs illustrating the most promising path 1 --> 2 --> 5.]
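A concrete example of such a function is the misplaced-tiles count for the 8-puzzle. The Python sketch below is illustrative and assumes the standard goal layout with the blank (0) in the last cell.

```python
GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)          # 0 stands for the blank

def misplaced_tiles(state, goal=GOAL):
    """h(n): number of tiles out of place (the blank is not counted)."""
    return sum(1 for tile, target in zip(state, goal) if tile != 0 and tile != target)

start = (1, 2, 3,
         4, 0, 6,
         7, 5, 8)
print(misplaced_tiles(start))   # 2 -- tiles 5 and 8 are out of place
```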
Generate and Test
The generate-and-test strategy is the simplest of the heuristic techniques.

Algorithm:

1. Generate a possible solution.

2. Test to see if this is actually a solution by comparing the chosen point or the
endpoint of the chosen path to the set of acceptable goal states.

3. If a solution has been found, quit.

4. Otherwise, return to step 1.
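A literal rendering of the four steps above as a Python sketch; the generator, the goal test, and the toy arithmetic puzzle are illustrative placeholders.

```python
import random

def generate_and_test(generate, is_goal, max_tries=10000):
    """Generate-and-test loop: propose a candidate, test it, repeat."""
    for _ in range(max_tries):
        candidate = generate()            # step 1: generate a possible solution
        if is_goal(candidate):            # step 2: test it against the goal condition
            return candidate              # step 3: solution found, quit
    return None                           # otherwise keep returning to step 1 (bounded here)

# Toy use: find a pair (x, y) with x + y == 10 and x * y == 21, i.e. {3, 7}
solution = generate_and_test(
    generate=lambda: (random.randint(0, 10), random.randint(0, 10)),
    is_goal=lambda c: c[0] + c[1] == 10 and c[0] * c[1] == 21)
print(solution)   # (3, 7) or (7, 3) with high probability
```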
Hill Climbing
 Hill climbing is a variant of generate-and-test in which feedback from the test procedure
is used to help the generator decide which direction to move in the search space.
Algorithm:
1. Evaluate the initial state. If it is a goal state, then return it and quit. Otherwise continue with
the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in the
current state:
a) Select an operator that has not yet been applied to the current state and apply it to produce
a new state.
b) Evaluate the new state.
i. If it is a goal state, then return it and quit.
ii. If it is not a goal state but it is better than the current state, then make it the current state.
iii. If it is not better than the current state, then continue in the loop.
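A minimal Python sketch of simple hill climbing following the algorithm above: operators are applied one at a time and the search moves to the first state that evaluates better than the current one. The toy objective and neighbour function are invented for illustration.

```python
def simple_hill_climbing(start, neighbours, value, is_goal):
    """Move to the first neighbour that is better than the current state."""
    current = start
    while True:
        if is_goal(current):
            return current
        improved = False
        for candidate in neighbours(current):        # operators not yet applied
            if is_goal(candidate):
                return candidate
            if value(candidate) > value(current):    # first better state wins
                current = candidate
                improved = True
                break
        if not improved:                             # no better neighbour: stop (possibly a local maximum)
            return current

# Toy objective: climb toward x = 7 on the integers 0..10
value = lambda x: -(x - 7) ** 2
neighbours = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
print(simple_hill_climbing(0, neighbours, value, lambda x: x == 7))   # 7
```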
Limitation of Hill Climbing
• Local Maximum: a state that is better than all its
neighbours but is not better than some other states
farther away. To overcome the local maximum problem,
backtracking is required: keep a list of visited nodes
and explore a new path.

• Plateau: a flat area of the search space in
which a whole set of neighbouring states have the
same value. To overcome a plateau, make a big
jump, i.e. randomly select a state far away from the
current state.

• Ridge: an area of the search space that is
higher than the surrounding areas and that itself has a
slope. To overcome a ridge, apply two or more rules
at once, which corresponds to moving in several directions.
Steepest Ascent Hill Climbing
 A useful variation on simple hill climbing considers all the moves from the current state and selects the
best one as the next state. This method is called steepest-ascent hill climbing.
Algorithm:
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise
continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the
current state:
A. Let SUCC be a state such that any possible successor of the current state will
be better than SUCC.
B. For each operator that applies to the current state do:
I. Apply the operator and generate a new state.
II. Evaluate the new state. If it is a goal state, then return it and quit. If not,
compare it to SUCC. If it is better, then set SUCC to this state. If it is not
better, leave SUCC alone.
C. If SUCC is better than the current state, then set the current state to SUCC.
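The same skeleton adapted as a steepest-ascent sketch in Python: all successors are evaluated, SUCC keeps the best one, and the loop stops when a full iteration yields no improvement. The toy objective is again illustrative.

```python
def steepest_ascent_hill_climbing(start, neighbours, value, is_goal):
    """Examine all successors of the current state and move to the best one (SUCC)."""
    current = start
    while True:
        if is_goal(current):
            return current
        succ = None                                   # best successor seen so far
        for candidate in neighbours(current):
            if is_goal(candidate):
                return candidate
            if succ is None or value(candidate) > value(succ):
                succ = candidate                      # keep the best successor
        if succ is None or value(succ) <= value(current):
            return current                            # a complete iteration produced no change
        current = succ

value = lambda x: -(x - 7) ** 2
neighbours = lambda x: [n for n in (x - 2, x - 1, x + 1, x + 2) if 0 <= n <= 10]
print(steepest_ascent_hill_climbing(0, neighbours, value, lambda x: x == 7))   # 7
```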
Best First Search
 Best First Search method combines the advantage of both DFS and BFS
search into single method.
 DFS is good because it allows a solution to be found without all
competing branches having to be expanded. BFS is good because it does
not get trapped on dead-end paths.
 One way of combining the two is to follow a single path at a time but
switch paths whenever some competing path looks more promising than
the current one does.
 At each step of the best first search process, we select the most
promising of the nodes we have generated so far.
 This is done by applying appropriate heuristic function to each of them.
 We expand the chosen node by using the rules to generate its
successors. If one of them is a solution, we can quit.
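A minimal Python sketch of this idea: an OPEN list kept as a priority queue ordered by the heuristic value h(n), so the most promising node generated so far is always expanded next. The graph and heuristic values are hypothetical.

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Always expand the open node that looks most promising according to h(n)."""
    open_list = [(h[start], start, [start])]     # priority queue ordered by heuristic value
    visited = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(open_list, (h[neighbour], neighbour, path + [neighbour]))
    return None

# Hypothetical graph and heuristic values (smaller h = estimated closer to the goal G)
graph = {'S': ['A', 'B'], 'A': ['C', 'G'], 'B': ['C'], 'C': ['G']}
h = {'S': 5, 'A': 2, 'B': 4, 'C': 2, 'G': 0}
print(best_first_search(graph, h, 'S', 'G'))   # ['S', 'A', 'G']
```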
 To find the most promising path

 A* Example

f(n) = g(n) + h(n)

g(n): the cost to reach node n from the start.

h(n): the heuristic estimate of the cost from node n to the goal.

f(n): the estimated total cost of the path through n to the goal node.


BEST FIRST SEARCH
A* Algorithm
Step 1:

 Start with OPEN containing only initial node.

 Set that node’s g value to 0, its h’ value to whatever it is, and its f’ value to h’+0 or h’.

 Set CLOSED to empty list.

Step 2: Until a goal node is found, repeat the following procedure:

 If there are no nodes on OPEN, report failure.

 Otherwise select the node on OPEN with the lowest f’ value.

 Call it BESTNODE. Remove it from OPEN. Place it in CLOSED.

 See if the BESTNODE is a goal state. If so exit and report a solution.

 Otherwise, generate the successors of BESTNODE but do not set the BESTNODE to point to them yet.
 Step 3: For each of the SUCCESSOR, do the following steps

1. Set SUCCESSOR to point back to BESTNODE. These backwards links will make it possible to recover the path once a solution
is found.

2. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.

3. See if SUCCESSOR is the same as any node on OPEN. If so call the node OLD.

 Check whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE by comparing their g
values.

 If OLD is cheaper, then do nothing. If SUCCESSOR is cheaper then reset OLD’s parent link to point to BESTNODE.

 Record the new cheaper path in g(OLD) and update f‘(OLD).

 If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of
BESTNODE’s successors.

 If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE’s
successors.

Compute f’(SUCCESSOR) = g(SUCCESSOR) + h’(SUCCESSOR)


Problem Reduction
 AND-OR graph (or tree) is useful for representing the solution of problems that can be
solved by decomposing them into a set of smaller problems, all of which must then be
solved.

 This decomposition or reduction generates arcs that we call AND arcs. One AND arc may
point to any number of successor nodes, all of which must then be solved in order for the
arc to point to a solution.

 In order to find solutions in an AND-OR graph we need an algorithm similar to best-first
search but with the ability to handle the AND arcs appropriately.

 We define FUTILITY: if the estimated cost of a solution becomes greater than the value of
FUTILITY, then we abandon the search. FUTILITY should be chosen to correspond to a
threshold above which a solution is considered too expensive to be practical.
AND-OR Graph
AO* Algorithm
 Traverse the graph starting at the initial node and following the current best path, and accumulate the
set of nodes that are on the path and have not yet been expanded.

 Pick one of these best unexpanded nodes and expand it. Add its successors to the graph and compute f'
(the cost of the remaining distance) for each of them.

 Change the f' estimate of the newly expanded node to reflect the new information produced by its
successors. Propagate this change backward through the graph and re-decide which path is the current best.

 This backward propagation of revised cost estimates through the tree is not necessary in the A*
algorithm. It is needed in AO* because expanded nodes are re-examined so that the
current best path can be selected.
AO* Example

Advantages of AO*:
It is complete.
It will not go into an infinite loop.
It requires less memory.
Disadvantages of AO*:
It is not optimal, as it does not explore all the paths once it finds a solution.
What is the difference between A* Algorithm and AO*
algorithm?

 The A* algorithm provides the optimal solution, whereas AO*
stops when it finds any solution.

 The AO* algorithm requires less memory than the A*
algorithm.

 The AO* algorithm doesn't go into an infinite loop, whereas the A*
algorithm can go into an infinite loop.
Constraint Satisfaction
 Many AI problems can be viewed as problems of constraint satisfaction.

 For example, Crypt-arithmetic puzzle:

 As compared with a straightforward search procedure, viewing a problem as one of constraint satisfaction can
substantially reduce the amount of search.

 Two-step process:

 Constraints are discovered and propagated as far as possible.

 If there is still not a solution, then search begins, adding new constraints.

 Initial state contains the original constraints given in the problem.

 A goal state is any state that has been constrained “enough”.

Example: The goal is to discover some problem state that satisfies the given set of constraints.
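As a concrete crypt-arithmetic illustration (assuming the classic SEND + MORE = MONEY puzzle, which is not spelled out on the slide), the Python sketch below follows the two-step process: it first propagates the constraint that forces M = 1 and only then searches over assignments for the remaining letters.

```python
from itertools import permutations

def solve_send_more_money():
    """Constraint satisfaction by propagation plus search for SEND + MORE = MONEY."""
    # Propagation step: the sum of two 4-digit numbers is at most 19998,
    # so the leading carry forces M = 1 before any search is done.
    M = 1
    letters = ('S', 'E', 'N', 'D', 'O', 'R', 'Y')
    digits = [d for d in range(10) if d != M]
    # Search step: try assignments for the remaining letters under the constraints.
    for perm in permutations(digits, len(letters)):
        a = dict(zip(letters, perm), M=M)
        if a['S'] == 0:                      # a leading digit cannot be zero
            continue
        send  = 1000*a['S'] + 100*a['E'] + 10*a['N'] + a['D']
        more  = 1000*a['M'] + 100*a['O'] + 10*a['R'] + a['E']
        money = 10000*a['M'] + 1000*a['O'] + 100*a['N'] + 10*a['E'] + a['Y']
        if send + more == money:
            return a
    return None

print(solve_send_more_money())
# {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'O': 0, 'R': 8, 'Y': 2, 'M': 1}, i.e. 9567 + 1085 = 10652
```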
Means End Analysis
 Most of the search strategies either reason forward or backward; often, however, a mixture of the two directions is
appropriate.

 Such a mixed strategy makes it possible to solve the major parts of a problem first and then solve the smaller
problems that arise when combining them together. Such a technique is called Means-Ends Analysis.

 The means-ends analysis process centers around finding the difference between the current state and the goal state.

 The problem space of means-ends analysis has:

 an initial state and one or more goal states,

 a set of operators with a set of preconditions for their application, and

 a difference function that computes the difference between two states s(i) and s(j).


Means End Analysis
 The means-ends analysis process can be applied recursively for a problem.

 It is a mixture of backward and forward search techniques.

 The following are the main steps that describe the working of the MEA technique for solving a
problem:

 First, evaluate the difference between the initial state and the final state.

 Select the various operators that can be applied to each difference.

 Apply the operator to each difference, which reduces the difference between the current
state and the goal state.
 In the MEA process, we detect the differences between the
current state and goal state.

 Once these differences are detected, we can apply an operator to
reduce them.

 But sometimes it is possible that an operator cannot be applied
to the current state.

 So, we create a sub-problem of reaching a state in which the
operator can be applied. This type of backward chaining, in
which operators are selected and then subgoals are set up to
establish the preconditions of the operator, is called Operator
Subgoaling.
Algorithm: Means-Ends Analysis (CURRENT, GOAL)
1. Compare CURRENT to GOAL; if there are no differences between them, then return success and exit.

2. Else, select the most significant difference and reduce it by doing the following steps until success or failure
occurs.

a) Select a new operator O that is applicable to the current difference; if there is no such operator, then
signal failure.

b) Attempt to apply operator O to CURRENT. Make a description of two states:

i) O-START, a state in which O's preconditions are satisfied.

ii) O-RESULT, the state that would result if O were applied in O-START.

c) If FIRST-PART <-- MEA(CURRENT, O-START) and LAST-PART <-- MEA(O-RESULT, GOAL) are both
successful, then signal success and return the result of combining FIRST-PART, O, and LAST-PART.
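A minimal Python sketch of the recursive procedure above, with states as sets of facts and operators as precondition/add/delete triples. The "fetch milk" domain and operator names are invented for illustration, and operator subgoaling appears as the recursive call that establishes an operator's preconditions.

```python
def means_ends(current, goal, operators, depth=10):
    """Recursive MEA sketch: states are sets of facts, operators are dicts."""
    if goal <= current:                              # no difference: success, empty plan
        return []
    if depth == 0:
        return None
    for op in operators:
        if not (op['adds'] & (goal - current)):      # operator must reduce a remaining difference
            continue
        # FIRST-PART: reach a state where op's preconditions hold (operator subgoaling).
        first = means_ends(current, op['pre'], operators, depth - 1)
        if first is None:
            continue
        state = set(current)
        for step in first:                           # replay FIRST-PART to obtain O-START
            state = (state - step['dels']) | step['adds']
        state = (state - op['dels']) | op['adds']    # O-RESULT: apply the operator itself
        # LAST-PART: close the remaining difference between O-RESULT and GOAL.
        last = means_ends(state, goal, operators, depth - 1)
        if last is not None:
            return first + [op] + last
    return None

# Hypothetical toy domain: fetch milk and come back home.
ops = [
    {'name': 'go_to_shop', 'pre': {'at_home'}, 'adds': {'at_shop'}, 'dels': {'at_home'}},
    {'name': 'go_home',    'pre': {'at_shop'}, 'adds': {'at_home'}, 'dels': {'at_shop'}},
    {'name': 'buy_milk',   'pre': {'at_shop', 'have_money'}, 'adds': {'have_milk'}, 'dels': set()},
]
plan = means_ends({'at_home', 'have_money'}, {'have_milk', 'at_home'}, ops)
print([op['name'] for op in plan])   # ['go_to_shop', 'buy_milk', 'go_home']
```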
Example (Means-Ends Analysis)

Step-1: Evaluating the initial state

Step-2: Applying the Delete operator
Step-3: Applying the Move operator
Step-4: Applying the Expand operator
References

 Types of AI Agents – Javatpoint
 Uninformed Search Algorithms – Javatpoint
 AI Search Iterative Deepening | PDF | Mathematical Relations | Combinatorics (scribd.com)
 How to solve 8 Puzzle problems using BFS and DFS and compare both in order to get optimal results? (linkedin.com)

Reference Video Links

 https://youtu.be/POM4mmLctyo?si=Hcbb-TQbXlrfAI9t
 Uniform Cost Search Introduction to AI. Uniform cost search: A breadth-first search finds the shallowest goal state and will therefore be the cheapest. – ppt download (slideplayer.com)
Thank You!
