AI Unit2 ProblemSolving

The document outlines a course on Artificial Intelligence, focusing on problem-solving agents and various search strategies, including uninformed and informed search techniques. It covers problem formulation, state space representation, and specific examples like the Water Jug Problem, Missionaries and Cannibals, and the 8-puzzle. Additionally, it discusses search trees, state space graphs, and evaluates different search strategies based on completeness, optimality, time, and space complexity.

Artificial Intelligence

Course Code: ITT451


By

Ms. C. B. Thaokar
Assistant Professor
Department of Information Technology
RCOEM

02/17/25 1

Unit-II
Problem solving agents
 Problem formulation
 Uninformed Search Strategies
 Informed (Heuristic) Search
 Greedy Best First Search
 A* Search
 RBFS
 Memory bounded heuristic search
 Heuristic functions
 Inventing admissible heuristic functions
 Local Search algorithms
 Hill - climbing
 Simulated Annealing
 Genetic Algorithms
Problem solving agent
• Reflex agents base their actions on a direct mapping from states
to actions. Such agents cannot operate well in environments for
which this mapping would be too large to store and would take too
long to learn.
• Goal-based agents consider future actions and the desirability of
their outcomes.
• A goal-based agent is also called a problem-solving agent.
• Problem Formulation is the process of deciding what actions
and states to consider for a given goal.

Problem Formulation
 A problem is a collection of information that the agent will use to
decide what to do; a solution to a problem is a fixed sequence
of actions.
 The process of looking for a sequence of actions that reaches the
goal is called search.
State space of a problem: the set of all states reachable from the initial
state by any sequence of actions.
Representation
 Initial state
 Operator / Action: Possible actions available to the agent
 Successor / transition function: Returns any state reachable from
a given state by a single action.
 Goal state: Destination of the problem
 Path cost: The sum of the step costs (the cost of each single action performed).

Eg-1Water Jug Problem

You are given two jugs, a 4-gallon one and 3-gallon one. Neither
has any measuring marks on it. There is a pump that can be used
to fill the jugs with water. How can you get exactly 2 gallons of
water into the 4-gallon jug?

State Space Search: Water Jug Problem

 Start state: (0, 0)


 Goal state: (2, n) for any n
 Goal Test : State matching to any configuration of goal state

 Operators:
1. Fill the 4-gallon jug
2. Fill the 3-gallon jug
3. Pour some water From 4 to 3 gallon jug
4. Pour some water From 3 to 4 gallon jug
5. Empty the 4-gallon jug
6. Empty the 3-gallon jug
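The formulation above maps directly to code. Below is a minimal breadth-first sketch of the water jug state space; the function name and the (big, small) tuple representation are illustrative choices, not from the slides. It returns the shortest state sequence that leaves 2 gallons in the 4-gallon jug.

```python
from collections import deque

def water_jug_bfs(capacity=(4, 3), start=(0, 0), target=2):
    """BFS over (big, small) jug states; returns the shortest list of states
    ending with `target` gallons in the 4-gallon jug."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == target:                        # goal test: (2, n) for any n
            path = [(x, y)]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        cx, cy = capacity
        pour_to_small = min(x, cy - y)         # rule 3: pour 4 -> 3
        pour_to_big = min(y, cx - x)           # rule 4: pour 3 -> 4
        for nxt in [(cx, y), (x, cy),          # rules 1-2: fill a jug
                    (x - pour_to_small, y + pour_to_small),
                    (x + pour_to_big, y - pour_to_big),
                    (0, y), (x, 0)]:           # rules 5-6: empty a jug
            if nxt not in parent:
                parent[nxt] = (x, y)
                queue.append(nxt)
    return None
```

Because BFS explores states level by level, the returned sequence uses the minimum number of operator applications.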
Water Jug solution
Rules:
1. Fill the 4-gallon jug
2. Fill the 3-gallon jug
3. Pour some water from the 4-gallon to the 3-gallon jug
4. Pour some water from the 3-gallon to the 4-gallon jug
5. Empty the 4-gallon jug
6. Empty the 3-gallon jug

4-Gallon Jug   3-Gallon Jug   Rule Applied
0              0              (start)
0              3              2
3              0              4
3              3              2
4              2              4
0              2              5
2              0              4

Another way to show the solution ?

Missionaries and Cannibals
• Missionaries and Cannibals is a problem in which 3
missionaries and 3 cannibals want to cross from the left bank
of a river to the right bank of the river. There is a boat on the
left bank, but it only carries at most two people at a time (and
can never cross with zero people). If cannibals ever outnumber
missionaries on either bank, the cannibals will eat the
missionaries.

Missionaries and Cannibals Soln.

The 8-puzzle
States: a state description specifies the location of each of the eight tiles in one of the
nine squares. For efficiency, it is useful to include the location of the blank.
Initial State: any state
Operators: blank moves left, right, up, or down
Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in Figure the resulting state has the 5 and the blank
switched.
Goal state : state matches the goal configuration shown in Figure
Path cost: each step costs 1, so the path cost is just the length of the path.

8-queens problem
Principle of the game : A solution requires that no two queens share the same row,
column, or diagonal.

States: Any arrangement of 0 to 8 queens on the board is a state.


Initial state: No queens on the board.
Actions/Operators: Add a queen to any empty square.
Transition model: Returns the board with a queen added to the specified square.
Goal test: 8 queens are on the board, none attacked

Search Techniques
State vs. Node
A state is a representation of a physical configuration.
A node is a data structure constituting part of a search tree; it includes a parent,
children, depth, and path cost g(n).
States do not have parents, children, depth, or path cost.

n= node

n.State
n.Parent
n.Action
n.path_cost
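A minimal sketch of that node structure in Python; the field names follow the slide, while the `solution` helper is an assumed convenience, not part of the slide.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the configuration this node stands for
    parent: Optional["Node"] = None  # node that generated this one (None at root)
    action: Any = None               # action applied to parent to reach state
    path_cost: float = 0.0           # g(n): cost of the path from the root

    def solution(self):
        """Actions along the path from the root to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```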

Search Strategies
A strategy is defined by picking the order of node expansion
Strategies are evaluated along following dimensions
Completeness: Is the algorithm guaranteed to find a solution when there is
one?
Optimality: Does the strategy find the optimal solution, i.e., the one with the lowest
path cost among all solutions?
Time complexity : How long does it take to find a solution?
Space Complexity : How much memory is needed to perform the search?
Time and space complexity are measured in terms of following things
b :- the branching factor or maximum number of successors of any node
d :- the depth of the shallowest goal node (i.e., the number of steps along the path
from the root)
m:- the maximum length of any path in the state space.

Time is often measured in terms of the number of nodes generated during the
search, and space in terms of the maximum number of nodes stored in memory
State Space Graphs
• State space graph: A mathematical representation of a search problem
– Nodes are (abstracted) world configurations
– Arcs represent successors (action results)
– The goal test is a set of goal nodes (maybe only one)
• In a search graph, each state occurs only once!
• We can rarely build this full graph in memory (it’s too big), but it’s a
useful idea

[Figure: a small example graph with states S, a–h, p–r and goal G]
Search Trees
• A search tree:
–The start state is the root node
–Children correspond to successors
–Nodes show states, but correspond to
PLANS that achieve those states
–For most problems, we can never
actually build the whole tree

State Space Graphs vs. Search Trees

Each NODE in
State Space in the search Search Tree
Graph tree is an
S
entire PATH
a G in the state e p
d
b c space graph.
b c e h r q
e
d f
a a h r p q f
S h
p r p q f q c G
q
q c G a

State Space Graphs vs. Search Trees

• Consider this 4-state graph:

[Figure: a small 4-state graph from S to G]

Lots of repeated structure in the search tree!

Map of Romania

Search Strategies

Partial search trees for finding a route from Arad to Bucharest

Search:
Expand out potential plans (tree nodes)
Maintain a fringe of partial plans under consideration
Try to expand as few tree nodes as possible
Tree Search Strategies

Important ideas:
Fringe
Expansion
Exploration strategy -
Uninformed Search
Informed Search
An informal description of the general tree-search and graph-search algorithms.

Uninformed / Blind Search Strategies
No additional information about state is available.
Only goal and non goal states are known.
Breadth-first search
function BREADTH-FIRST-SEARCH(problem) returns a solution, or failure
  node ← a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
  if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
  frontier ← a FIFO queue with node as the only element
  explored ← an empty set
  loop do
    if EMPTY?(frontier) then return failure
    node ← POP(frontier)  /* chooses the shallowest node in frontier */
    add node.STATE to explored
    for each action in problem.ACTIONS(node.STATE) do
      child ← CHILD-NODE(problem, node, action)
      if child.STATE is not in explored or frontier then
        if problem.GOAL-TEST(child.STATE) then return SOLUTION(child)
        frontier ← INSERT(child, frontier)
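A runnable Python version of this pseudocode might look as follows; the `successors` callback signature (yielding action/state pairs) is an assumption for illustration.

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Graph-search BFS mirroring the pseudocode: FIFO frontier, explored set,
    goal test applied when a child is generated."""
    if goal_test(initial):
        return []
    frontier = deque([(initial, [])])      # queue of (state, actions-so-far)
    explored = {initial}                   # also covers states already in frontier
    while frontier:
        state, actions = frontier.popleft()
        for action, nxt in successors(state):
            if nxt not in explored:
                if goal_test(nxt):
                    return actions + [action]
                explored.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None                            # frontier empty: failure
```

Marking a state explored as soon as it is generated covers the pseudocode's "not in explored or frontier" membership test with a single set.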

Uninformed Search Strategies
Breadth-first search- evaluation

Complete—if the shallowest goal node is at some finite depth d,


breadth-first search will eventually find it after generating all
shallower nodes (provided the branching factor b is finite).
Optimality: if all operators have the same cost. Otherwise, not
optimal but finds solution with shortest path length.

Uninformed Search Strategies
Breadth-first search - Evaluation
Space Complexity: The root of the search tree generates b nodes at the first level, each
of which generates b more nodes, for a total of b^2 at the second level. The total
number of nodes generated is

b + b^2 + b^3 + ... + b^d = O(b^d)

So the space complexity is O(b^d).

Time Complexity: O(b^(d+1))

Uninformed Search Strategies
Breadth-first search – Evaluation
branching factor b = 10. The table assumes that 1 million nodes can be generated per
second and that a node requires 1000 bytes of storage.

Depth   Nodes     Time        Memory
2       110       0.11 ms     107 KB
4       11,110    11 ms       10.6 MB
6       10^6      1.1 s       1 GB
10      10^10     3 hours     10 TB
12      10^12     13 days     1 PB
14      10^14     3.5 years   99 PB
16      10^16     350 years   10 exabytes

https://drive.google.com/file/d/1u2ChJ2itSTk4NT6h6UgLgQ2BT63MO1dW/view?usp=sharing
Uninformed Search Strategies
Depth-first search
Depth-first search always expands the deepest node in the
current frontier of the search tree.
The search proceeds immediately to the deepest level of the
search tree, where the nodes have no successors.
As those nodes are expanded, they are dropped from the
frontier, so then the search “backs up” to the next deepest node
that still has unexplored successors.
Whereas breadth-first search uses a FIFO queue, depth-first search uses a
LIFO queue (a stack).
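As a sketch, here is a recursive DFS with an explicit depth limit; the limit parameter is our addition to guard against infinitely deep paths, and the successor format is assumed.

```python
def depth_limited_dfs(initial, goal_test, successors, limit=50):
    """DFS that always goes deepest first, with a cutoff so it cannot run
    forever down an infinite path; returns the action list to a goal or None."""
    def recurse(state, depth):
        if goal_test(state):
            return []
        if depth == limit:
            return None                    # cutoff: back up to the next deepest node
        for action, nxt in successors(state):
            sub = recurse(nxt, depth + 1)
            if sub is not None:
                return [action] + sub
        return None
    return recurse(initial, 0)
```

The recursion "backs up" exactly as described above: when a node's successors are exhausted, control returns to the next deepest node with unexplored successors.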
Uninformed Search Strategies
Depth-first search - Evaluation

Depth-first search -Evaluation

Completeness: Incomplete (may follow an infinite path and never return)
 Optimality: Nonoptimal
 Time Complexity: O(b^m)
 Space Complexity: 1 + b + b + ... + b (m levels total) = O(bm)

DFS and BFS
Advantages:
DFS
• DFS requires very little memory as it only needs to store a
stack of the nodes on the path from the root node to the current
node.
BFS :
• It will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then
BFS will provide the minimal solution which requires the least
number of steps.

DFS and BFS
Disadvantages:
DFS:
• There is the possibility that many states keep reoccurring, and
there is no guarantee of finding the solution.
• The DFS algorithm goes for deep down searching and
sometimes it may go to the infinite loop.
BFS :
• It requires lots of memory since each level of the tree must be
saved into memory to expand the next level.
• BFS needs lots of time if the solution is far away from the root
node.

PAC Man Problem

[Figure: a small Pac-Man grid world with start S and goal G on a coordinate grid]
Uniform Cost Search
When all step costs are equal, breadth-first search is optimal because it
always expands the shallowest unexpanded node.
 This algorithm is mainly used when the step costs are not the same but
we need the optimal solution to the goal state.
 Uniform-cost search expands the node n with the lowest path cost g(n).
This is done by storing the frontier as a priority queue ordered by g.
 Uniform cost search finds the goal and the path, including the
cumulative cost to expand each node from the root node to the goal node.
 It does not go depth-first or breadth-first; it always expands the node with
the lowest path cost, and ties between equal-cost nodes are broken arbitrarily.
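A possible Python sketch of UCS using a priority queue ordered by g(n); the (action, state, cost) successor format and helper names are assumptions.

```python
import heapq

def uniform_cost_search(initial, goal_test, successors):
    """Expand the frontier node with the lowest path cost g(n).

    successors(state) yields (action, next_state, step_cost) triples.
    Returns (cost, actions) on success, or None.
    """
    frontier = [(0, initial, [])]          # priority queue ordered by g
    best_g = {initial: 0}
    while frontier:
        g, state, actions = heapq.heappop(frontier)
        if goal_test(state):               # goal test at expansion => optimal
            return g, actions
        if g > best_g.get(state, float("inf")):
            continue                       # stale entry: a cheaper path was found
        for action, nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2, nxt, actions + [action]))
    return None
```

Testing the goal only when a node is popped (not when it is generated) is what makes UCS optimal: a cheaper path may still be waiting in the queue.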

UCS Example 1

Open list: C

UCS Example 1

Open list: B(2) T(1) O(3) E(2) P(5)

UCS Example 1

Open list: T(1) B(2) E(2) O(3) P(5)

UCS Example 1

Open list: B(2) E(2) O(3) P(5)

UCS Example 1

Open list: E(2) O(3) P(5)

UCS Example 1

Open list: E(2) O(3) A(3) S(5) P(5) R(6)

UCS Example 1

Open list: O(3) A(3) S(5) P(5) R(6)

UCS Example 1

Open list: O(3) A(3) S(5) P(5) R(6) G(7)

UCS Example 1

Open list: A(3) S(5) P(5) R(6) G(7)

UCS Example 1

Open list: A(3) I(4) S(5) N(5) P(5) R(6) G(7)

UCS Example 1

Open list: I(4) P(5) S(5) N(5) R(6) G(7)

UCS Example 1

Open list: P(5) S(5) N(5) R(6) Z(6) G(7)

UCS Example 1

Open list: S(5) N(5) R(6) Z(6) F(6) G(7) D(8) L(10)

UCS Example 1

Open list: N(5) R(6) Z(6) F(6) G(7) D(8) L(10)

UCS Example 1

Open list: Z(6) F(6) G(7) D(8) L(10)

UCS Example 1

Open list: F(6) G(7) D(8) L(10)

Uninformed Search Strategies Eg.2
Uniform Cost Search

The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99, respectively.
The least-cost node, Rimnicu Vilcea, is expanded
next, adding Pitesti with cost 80 + 97=177. The least-cost node is now Fagaras, so it is
expanded, adding Bucharest with cost 99+211=310. Now a goal node has been generated,
but uniform-cost search keeps going, choosing Pitesti for expansion and adding a second path
to Bucharest with cost 80+97+101= 278.
Now the algorithm checks to see if this new path is better than the old one; it is, so the old
one is discarded. Bucharest, now with g-cost 278, is selected for expansion and the solution is
returned.
Uniform Cost Search Eg.3
F is goal

Uniform Cost Search
Advantages:
Uniform cost search is optimal because at every state the path
with the least cost is chosen.
Disadvantages:
It does not care about the number of steps involved in searching and is
only concerned with path cost, due to which this algorithm
may get stuck in an infinite loop.

Complete: Yes (if b is finite and every step cost is at least some ε > 0)
Time Complexity: O(b^(1 + C*/ε)), where C* is the cost of the optimal
solution and ε is the smallest step cost
Space complexity: O(b^(1 + C*/ε))

Optimal: Yes (even for non-uniform step costs)
Iterative Deepening Depth First Search
• It combines the space efficiency of DFS with the time efficiency of BFS.
• It is iterative in nature. It searches for the best depth in each
iteration.
• It performs the Algorithm until it reaches the goal node.
• The algorithm is set to search until a certain depth and the
depth keeps increasing at every iteration until it reaches the
goal state.
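The increasing-depth loop described above can be sketched as follows; the helper names and successor format are illustrative assumptions.

```python
def iterative_deepening_search(initial, goal_test, successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a goal is found,
    giving BFS-like shallowest-first behaviour with DFS-like memory use."""
    def dls(state, actions, limit):
        if goal_test(state):
            return actions
        if limit == 0:
            return None
        for action, nxt in successors(state):
            found = dls(nxt, actions + [action], limit - 1)
            if found is not None:
                return found
        return None

    for limit in range(max_depth + 1):     # deepen the cutoff each iteration
        result = dls(initial, [], limit)
        if result is not None:
            return result
    return None
```

Because the first limit at which a goal appears is the depth of the shallowest goal, the returned plan is as short as BFS would find (assuming unit step costs).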

Iterative Deepening Depth First Search

Iterative Deepening Depth First Search
• Completeness: Complete when the branching factor b is finite.
• Optimality: Optimal when all step costs are identical.
• Time Complexity: O(b^d)
  [b] + [b + b^2] + ... + [b + b^2 + ... + b^d]
  = (d)b + (d-1)b^2 + ... + (1)b^d
  = O(b^d)
• Space complexity: O(bd)

Comparision of Uninformed Search

DFS BFS UCS Iterative


Complete N Y Y May or
May not

Optimal N N Y N

Time bm bd b(c/ϵ) bd

Space bm bd b(c/ϵ) bm

INFORMED SEARCH TECHNIQUES

TOPICS
 Greedy Best First Search
 A* Search
 RBFS
 Memory bounded heuristic search
 Heuristic functions
 Inventing admissible heuristic functions
 Local Search algorithms
 Hill - climbing
 Simulated Annealing
 Genetic Algorithms
Informed (Heuristic) Search
• The informed search algorithm is more useful for large search space.
• It uses the idea of heuristic, so it is also called Heuristic search.
• Heuristics function:
– Heuristic is a function which is used in Informed Search, and it
finds the most promising path.
– It takes the current state of the agent as its input and produces the
estimation of how close agent is from the goal.
– The heuristic method, however, might not always give the best
solution, but it is guaranteed to find a good solution in reasonable
time.
– A heuristic function estimates how close a state is to the goal.
– It is represented by h(n), and it estimates the cost of an optimal
path from a state to the goal state.

– The value of the heuristic function is always positive.


Pure Heuristic Search
• It expands nodes based on their heuristic value h(n).
• It maintains two lists, OPEN and CLOSED list.
• In the CLOSED list, it places those nodes which have
been already expanded.
• In the OPEN list, it places nodes which have yet not
been expanded.
• On each iteration, each node n with the lowest heuristic
value is expanded and generates all its successors and n
is placed to the closed list.
• The algorithm continues until a goal state is found.
• Best First Search Algorithm(Greedy search)
• A* Search Algorithm
Best-first Search Algorithm
(Greedy Search):
• Greedy best-first search algorithm always selects the path which
appears best at that moment.
• It is the combination of depth-first search and breadth-first
search algorithms.
• It uses the heuristic function and search.
• With the help of best-first search, at each step, we can choose
the most promising node.
• In the best first search algorithm, we expand the node which is
closest to the goal node and the closest cost is estimated by
heuristic function.
• f(n) = h(n)
h(n) = estimated cost from node n to the goal.
Best first search algorithm:
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, Stop and return failure.
Step 3: Remove the node n from the OPEN list which has the
lowest value of h(n), and place it in the CLOSED list.
Step 4: Expand the node n, and generate the successors of node n.
Step 5: Check each successor of node n, and find whether any
node is a goal node or not. If any successor node is goal
node, then return success and terminate the search, else
proceed to Step 6.
Step 6: For each successor node, algorithm checks for evaluation
function f(n), and then check if the node has been in either
OPEN or CLOSED list. If the node has not been in both
list, then add it to the OPEN list.
Step 7: Return to Step 2.
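The seven steps above can be sketched in Python roughly as follows; the OPEN list is a heap ordered by h(n), CLOSED is a set, and the successor format is an assumption.

```python
import heapq

def greedy_best_first_search(initial, goal_test, successors, h):
    """Greedy best-first search: always expand the OPEN node with smallest h(n).

    successors(state) yields (action, next_state) pairs; h(state) is the
    heuristic estimate of the distance to the goal.
    """
    counter = 0                                       # tie-breaker for the heap
    frontier = [(h(initial), counter, initial, [])]   # OPEN list
    closed = set()                                    # CLOSED list
    while frontier:
        _, _, state, actions = heapq.heappop(frontier)
        if goal_test(state):
            return actions
        if state in closed:
            continue
        closed.add(state)
        for action, nxt in successors(state):
            if nxt not in closed:
                counter += 1
                heapq.heappush(frontier, (h(nxt), counter, nxt, actions + [action]))
    return None                                       # OPEN list empty: failure
```

Note that only h(n) drives the ordering; the cost already paid is ignored, which is exactly why the algorithm is fast but not optimal.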
GBFS Example
• Consider the below search problem, and we will traverse it
using greedy best-first search. At each iteration, each node is
expanded using evaluation function f(n)=h(n) , which is given
in the below table.

Greedy BFS Example -2
GBFS Analysis
• Time Complexity: The worst case time complexity of Greedy
best first search is O(b^m).
• Space Complexity: The worst case space complexity of
Greedy best first search is O(b^m), where m is the maximum
depth of the search space.
• Complete: Greedy best-first search is also complete, if the
given state space is finite.
• Optimal: Greedy best first search algorithm is not optimal.

Advantages and Disadvantages
Advantages :
•This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages :
•It can behave as an unguided depth-first search in the worst case
scenario.
•This algorithm is not optimal.

A* Search Algorithm
• It uses heuristic function h(n), and cost to reach the node n from
the start state g(n).
• It has combined features of UCS and greedy best-first search,
by which it solves the problem efficiently.
• A* search algorithm finds the shortest path through the search
space using the heuristic function.
• This search algorithm expands less search tree and provides
optimal result faster. A* algorithm is similar to UCS except that
it uses g(n)+h(n) instead of g(n).
• In A* search algorithm, we use search heuristic as well as the
cost to reach the node. Hence we can combine both costs as
following, and this sum is called as a fitness number.
A* Search Algorithm

At each point in the search space, only the node with
the lowest value of f(n) is expanded, and
the algorithm terminates when the goal node is found.

A* Search Algorithm
Step1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not, if the list is empty
then return failure and stops.
Step 3: Select the node from the OPEN list which has the smallest
value of evaluation function (g + h), if node n is goal node
then return success and stop, otherwise
Step 4: Expand node n and generate all of its successors, and put n
into the closed list. For each successor n', check whether n'
is already in the OPEN or CLOSED list, if not then compute
evaluation function for n' and place into Open list.
Step 5: Else if node n' is already in OPEN or CLOSED, then it
should be attached to the back pointer which reflects the
lowest g(n') value. (If n' is already on OPEN or CLOSED, compare its new f with
the old f. If the new value is higher, discard the node; otherwise, replace the old f
with the new f and reopen the node, placing n' with its f value in the right
order in OPEN.)

Step 6: Return to Step 2.
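A compact Python sketch of these steps; it uses the common best-g bookkeeping in place of the explicit back-pointer update of Step 5, which is an implementation choice, and the successor format is assumed.

```python
import heapq

def astar_search(initial, goal_test, successors, h):
    """A*: expand the node with the smallest f(n) = g(n) + h(n).

    successors(state) yields (action, next_state, step_cost) triples.
    Returns (cost, actions) on success, or None.
    """
    frontier = [(h(initial), 0, initial, [])]      # (f, g, state, actions)
    best_g = {initial: 0}
    while frontier:
        f, g, state, actions = heapq.heappop(frontier)
        if goal_test(state):
            return g, actions
        if g > best_g.get(state, float("inf")):
            continue                               # a cheaper path to state exists
        for action, nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, actions + [action]))
    return None
```

Re-inserting a state whenever a strictly cheaper path is found, and skipping stale heap entries, has the same effect as the "replace old f with new f and reopen" rule in Step 5.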


A* Example
• In this example, we will traverse the given graph using the A*
algorithm. The heuristic value of all states is given in the below
table so we will calculate the f(n) of each state using the formula
f(n)= g(n) + h(n), where g(n) is the cost to reach any node from
start state.

A* Example
Initialization: {(S, 5)}
Iteration 1: {(S-->A, 4), (S-->G, 10)}
Iteration 2: {(S-->A-->C, 4), (S-->A-->B, 7), (S-->G, 10)}
Iteration 3: {(S-->A-->C-->G, 6), (S-->A-->C-->D, 11), (S-->A-->B, 7), (S-->G, 10)}
Iteration 4: gives the final result S-->A-->C-->G,
the optimal path with cost 6.

A * Algorithm Analysis
• Points to remember:
– A* algorithm returns the path which occurred first, and it does
not search for all remaining paths.
– The efficiency of A* algorithm depends on the quality of
heuristic.
• Complete: A* algorithm is complete as long as:
– Branching factor is finite.
• Optimal: A* search algorithm is optimal if it follows below two
conditions:
– Admissible: the first condition requires for optimality is that
h(n) should be an admissible heuristic for A* tree search. An
admissible heuristic is optimistic in nature.(If the heuristic function
is admissible, then A* tree search will always find the least cost path.)
– Consistency: Second required condition is consistency for
only A* graph-search.
Admissibility means h(n) should not be overestimated,
i.e. h(n) <= actual path cost.

[Figure: a search tree rooted at A (h=15) with children B (h=6), C (h=20) and
D (h=10), and leaves E (h=20), F (h=0), G (h=12), H (h=20), I (h=0); the edge
costs make path A-B-F cost 9 and path A-D-I cost 8.]

• Solution costs: A-B-F = 9, A-D-I = 8
• Open list evolves as A (15), then B (9), then F (9)
• Because h(A) = 15 overestimates, the goal F is reached with cost 9 and the
optimal solution A-D-I (cost 8) is missed.
Consistent / Monotonic h(n)
• h(n) is said to be consistent if for every node n & every
successor n’ of n generated by any action a, estimated cost of
reaching the goal from n is no greater than the step cost of
getting to n’ plus estimated cost of reaching the goal from n’.

h(n) <= c(n, a, n’) + h(n’)

Straight line distance is consistent by using the principle of triangle inequality


i.e the sum of any two sides of a triangle is greater than or equal to the
third side ( a + b ≥ c )

Every Consistent heuristic is admissible
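The consistency inequality is easy to check mechanically on a finite graph; the small helper below is our own illustration, not from the slides, with edges given as (n, n', cost) triples.

```python
def is_consistent(h, edges):
    """True if h(n) <= c(n, a, n') + h(n') holds for every edge (n, n', cost)."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)
```

For example, h = {A: 2, C: 1, G: 0} is consistent over edges A-C (cost 1) and C-G (cost 3), whereas raising h(A) to 4 violates the inequality on the A-C arc.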


A* Graph Search Gone Wrong?

State space graph: S (h=2) has successors A (h=4, step cost 1) and B (h=1, step
cost 1); A reaches C (h=1, cost 1); B reaches C (cost 2); C reaches the goal G
(h=0, cost 3).

Search tree (f = g + h): S (0+2) expands to A (1+4) and B (1+1). B is expanded
first, giving C (3+1); C is then closed, so the cheaper path to C via A (2+1) is
ignored and the search returns G (6+0) instead of the optimal G (5+0). The
heuristic is inconsistent here — h(A) = 4 exceeds cost(A, C) + h(C) = 2 — which
is why graph search goes wrong.
Consistency of Heuristics
• Main idea: estimated heuristic costs ≤ actual costs
– Admissibility: heuristic cost ≤ actual cost to goal
  h(A) ≤ actual cost from A to G
– If n is the optimal solution reachable from n’, then g(n) ≥ g(n’) + h(n’)
– Consistency: heuristic “arc” cost ≤ actual cost for each arc
  h(A) – h(C) ≤ cost(A to C)
• Consistent: if one step takes us from n to n’, then h(n) ≤ h(n’)
  + cost of the step from n to n’
– Similar to the triangle inequality
A * Algorithm Analysis
• Time Complexity: The time complexity of A* search
depends on the heuristic function; the number of nodes
expanded can be exponential in the depth of the solution d. So
the time complexity is O(b^m), where b is the branching
factor and m the maximum depth.
• Space Complexity: The space complexity of A* search
is O(b^m).

A* Example -2
Example Problems – Eight Puzzle
States: tile locations

Initial state: one specific tile


configuration

Operators: move blank tile left,


right, up, or down

Goal: tiles are numbered from


one to eight around the square

Path cost: cost of 1 per move
(solution cost is the same as the number of moves, i.e. the path length)
Example Problems – Eight Puzzle

h1: Number of misplaced tiles

h1(start) = 7 (all tiles are misplaced except the 7 tile)
h1 is an admissible heuristic, since it is clear that every
tile that is out of position must be moved at least once.
Example Problems – Eight Puzzle

h2: Sum of distances of the tiles from their goal positions

h2(start) = 4 + 2 + 2 + 2 + 2 + 0 + 3 + 3 = 18
h2 is an admissible heuristic, since each move can only bring
one tile one step closer to its goal, so h2 never overestimates
the number of steps required to move every tile to its
goal position.

Distance formula can be Manhattan or Euclidean.
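Both heuristics are short to implement. The sketch below assumes boards are 9-tuples in row-major order with 0 as the blank — a representation choice of ours, not from the slides.

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles (excluding the blank, 0) out of position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """h2: sum of horizontal + vertical distances of each tile from its goal."""
    goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:                      # the blank does not count
            r, c = i // 3, i % 3
            gr, gc = goal_pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total
```

Since every misplaced tile contributes at least 1 to h2, h2 ≥ h1 on every board, which is why h2 is said to dominate h1.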


RBFS (Recursive Best First Search)
 Idea: mimic the operation of standard best-first search, but
use only linear space
 Runs similar to recursive depth-first search, but rather than
continuing indefinitely down the current path, it uses the f-
limit variable to keep track of the best alternative path
available from any ancestor of the current node.
 If the current node exceeds this limit, the recursion unwinds
back to the alternative path. As the recursion unwinds, RBFS
replaces the f-value of each node along the path with the best
f-value of its children.
In this way, it can decide whether it’s worth re expanding a
forgotten subtree.

Recursive best-first search
• Keeps track of the f-value of the best-alternative path
available.
– If current f-values exceeds this alternative f-value than
backtrack to alternative path.
– Upon backtracking change f-value to best f-value
before ( from parent) and after ( from descendant )
recursive call.
– Re-expansion of this result is thus still possible.

Algorithm
// Input: current node n and an f-limit
// Returns: goal node or failure, plus an updated limit
RBFS(n, limit)
  if Goal(n)
    return n
  children = Expand(n)
  if children is empty
    return failure, infinity
  for each c in children
    f[c] = max(g(c) + h(c), f[n])          // inherit the parent's f if larger
  repeat
    best = child with smallest f-value
    if f[best] > limit
      return failure, f[best]
    alternative = second-lowest f-value among children
    newlimit = min(limit, alternative)
    result, f[best] = RBFS(best, newlimit) // update f[best] from descendants
    if result is not failure
      return result
Analysis
• Optimal if h(n) is admissible
• Space is linear: O(bd)
• Features
– Potentially exponential time in cost of solution
– More efficient than IDA*
RBFS Ex

Recursive best-first search, ex.

• Path until Rimnicu Vilcea is already expanded


• Above node; f-limit for every recursive call is shown on top.
• Below node: f(n)
• The path is followed until Pitesti which has a f-value worse than the f-limit.
Recursive best-first search, ex.

• Unwind recursion and store best f-value for current best leaf
Pitesti
result, f[best] = RBFS(best, newlimit)
• best is now Fagaras. Call RBFS for new best
– best value is now 450
Recursive best-first search, ex.

• Unwind recursion and store best f-value for current best leaf Fagaras
result, f[best] = RBFS(best, newlimit)

• best is now Rimnicu Vilcea (again). Call RBFS for new best
– Subtree is again expanded.
– Best alternative subtree is now through Pitesti.
• Solution is found because 447 > 418.
Real Time Applications
Algorithm            Applications
BFS                  Web crawlers, GPS navigation systems (to find
                     neighbouring locations), shortest paths, broadcasting
                     packets in a network
DFS                  Path finding, topological sort, solving puzzles with only
                     one solution, finding strongly connected components of a
                     graph, detecting cycles in a graph
UCS                  Route planning
A*                   Games (path finding)
Iterative deepening  Searching data in an infinite space by incrementing
                     the depth limit iteratively
RBFS                 Constrained search spaces, network routing
Hill climbing        Optimization problems, scheduling (large search spaces)

Beyond Classical Search

Local Search Algorithms
• In many problems, the path to the goal is irrelevant; the goal state
itself is the solution
– Local search is widely used for very big problems
– Returns good but not optimal solutions

• Local search algorithms


– Keep a single "current" state, or small set of states. The state
space can be the set of “complete” configurations

 e.g., for 8-queens, a configuration can be any board with 8 queens


 e.g., for TSP, a configuration can be any complete tour

Local Search Algorithms
– Iteratively try to improve it / them
 for 8-queens, we gradually move some queen to a better place
 for TSP, we start with any tour and gradually improve it
The goal would be to find an optimal configuration
– e.g., for 8-queens, an optimal config. is where no queen is
threatened
– e.g., for TSP, an optimal configuration is the shortest route
– Very memory efficient
• keeps only one or a few states
• You control how much memory you use

Eg 1: TSP
Start with any complete tour, perform pairwise exchanges

Eg2: N Queen Problem
Move a queen to reduce number of conflicts

Hill Climbing

 Hill climbing search algorithm (also known as greedy local
search) uses a loop that continually moves
in the direction of increasing value (that is, uphill).
 It terminates when it reaches a peak where
no neighbor has a higher value.
 Hill Climbing is a technique to solve certain
optimization problems.
 In this technique, we start with a sub-optimal
solution and the solution is
improved repeatedly until some condition is
maximized.
Features of Hill Climbing:
• Hill Climbing technique is mainly used for solving
computationally hard problems. It looks only at the current
state and immediate future state. Hence, this technique is
memory efficient as it does not maintain a search tree.

• Generate and Test variant: Hill Climbing is the variant of


Generate and Test method.

• Greedy approach: Hill-climbing algorithm search moves in


the direction which optimizes the cost.

• No backtracking: It does not backtrack the search space, as it


does not remember the previous states.
State-space Diagram for Hill Climbing:

Different regions in the state space
landscape
• Local Maximum: Local maximum is a state which is
better than its neighbor states, but there is also another
state which is higher than it.
• Global Maximum: Global maximum is the best possible
state of state space landscape. It has the highest value of
objective function.
• Current state: It is a state in a landscape diagram where
an agent is currently present.
• Flat local maximum: It is a flat space in the landscape
where all the neighbor states of current states have the
same value.
• Shoulder: It is a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:

• Simple hill Climbing:


• Steepest-Ascent hill-climbing:
• Stochastic hill Climbing:

Simple hill Climbing:
• It evaluates one neighbor node state at a time and
selects the first one which improves the current cost, setting it
as the current state.
• This algorithm has the following features:
– Less time consuming
– Less optimal solution and the solution is not guaranteed

101
Algorithm for Simple Hill Climbing:
Step 1: Evaluate the initial state, if it is goal state then return
success and Stop.
Step 2: Loop Until a solution is found or there is no new
operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
– If it is goal state, then return success and quit.
– Else if it is better than the current state then assign new
state as a current state.
– Else if not better than the current state, then return to
step 2.
Step 5: Exit.
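Steps 1–5 above can be sketched as a generic loop. This is a minimal sketch, not the slides' code: the problem is supplied through three assumed callbacks (`neighbours`, `value`, `is_goal`), and "better" means a higher `value`.

```python
def simple_hill_climb(start, neighbours, value, is_goal):
    """Simple hill climbing: accept the FIRST neighbour that improves on
    the current state; stop at the goal or when no operator helps."""
    current = start
    while not is_goal(current):
        for cand in neighbours(current):          # Step 3: apply an operator
            if is_goal(cand):                     # Step 4: goal check
                return cand
            if value(cand) > value(current):      # Step 4: better state found
                current = cand                    # take the first improvement
                break
        else:
            return current                        # no improvement: local optimum
    return current
```

For example, maximizing `-(x - 3)**2` over the integers with neighbours `x - 1` and `x + 1` climbs from 0 up to the peak at 3.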
Hill climbing example I (minimizing h)
[Figure: an 8-puzzle run that minimizes hoop, the number of tiles out of place. The start state has hoop = 5; each move slides one tile so that hoop decreases (5 → 4 → 3 → 2 → 1) until the goal state, with hoop = 0, is reached. Slide adapted from CIS 391 - Intro to AI.]
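The run above minimizes hoop, the count of tiles out of place. A minimal sketch of that heuristic follows; the goal layout (blank in the top-left, as drawn on the slide) and the convention of not counting the blank are our assumptions — decks that count the blank as well will score some states one higher.

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # 0 marks the blank, read row by row

def h_oop(state):
    """Number of tiles out of place relative to GOAL (blank not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)
```

Hill climbing on the 8-puzzle then means sliding whichever tile lowers `h_oop` at each step.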
Problems in Hill Climbing Algorithm:
• Local Maximum: A local maximum is a peak state in
the landscape which is better than each of its
neighboring states, but there is another state also present
which is higher than the local maximum.
• Solution: Backtracking can be a solution to the local maximum problem in the state space landscape. Maintain a list of promising paths so that the algorithm can backtrack and explore other paths as well.
Problems in Hill Climbing Algorithm:
• Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state have the same value; because of this, the algorithm cannot find a best direction in which to move. A hill-climbing search might get lost in the plateau area.
• Solution: Take big steps (or very small steps) while searching, or randomly select a state far away from the current state, so that the algorithm may land in a non-plateau region.
Problems in Hill Climbing Algorithm:
• Ridges: A ridge is a special form of local maximum. It is an area that is higher than its surroundings but itself has a slope, and it cannot be climbed in a single move.
• Solution: Using bidirectional search, or moving in several different directions at once, can mitigate this problem.
Steepest-Ascent hill climbing:
• This algorithm examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state.
• It takes more time, as it evaluates multiple neighbors.
Algorithm for Steepest-Ascent hill climbing:
Step 1: Evaluate the initial state. If it is the goal state, then return
success and stop; else make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not
change.
– Let SUCC be a state such that any successor of the current
state will be better than it.
– For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is the goal state, then return it and quit; else compare it
to SUCC.
• If it is better than SUCC, then set the new state as SUCC.
– If SUCC is better than the current state, then set the
current state to SUCC.
Step 3: Exit.
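The same loop in code form, as a hedged sketch with the same assumed callbacks as before (`neighbours`, `value`, `is_goal` are supplied by the caller; `SUCC` is the best neighbour found in the inner loop):

```python
def steepest_ascent(start, neighbours, value, is_goal):
    """Steepest-ascent hill climbing: examine ALL neighbours and move to
    the best one (SUCC); stop when no neighbour beats the current state."""
    current = start
    while True:
        if is_goal(current):
            return current
        succ, succ_val = current, value(current)
        for cand in neighbours(current):          # examine every operator
            if is_goal(cand):
                return cand
            v = value(cand)
            if v > succ_val:                      # cand is better than SUCC
                succ, succ_val = cand, v
        if succ == current:                       # no neighbour is better
            return current                        # local optimum reached
        current = succ                            # move to SUCC
```

Unlike the simple variant, this pays for evaluating every neighbour but always takes the locally best step.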
Hill Climbing Search
(Source: ComSci, Renas R. Rekany, Oct 2016)
Example 2: Blocks world
• h(x) = +1 for each block in the support structure if the block is correctly positioned, otherwise -1 for each block in the support structure.
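A minimal sketch of that blocks-world heuristic. The state representation is our assumption (a dict mapping each block to what it rests on, with `'table'` for the table); "correctly positioned" is read as "the whole chain of blocks beneath it matches the goal", which is the usual reading of this heuristic.

```python
def h_blocks(state, goal):
    """+1 for every block whose entire support structure (the chain of
    blocks beneath it) matches the goal configuration, -1 otherwise."""
    def support(cfg, block):
        chain = []
        below = cfg[block]
        while below != 'table':
            chain.append(below)
            below = cfg[below]
        return chain
    return sum(1 if support(state, b) == support(goal, b) else -1
               for b in state)
```

For the goal "A on B on C on the table", the goal state itself scores +3, while a state with A set aside on the table scores +1 (B and C are correct, A is not).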
Self Study
• Study the Local Beam Search algorithm in AI
• How it differs from other algorithms
• Applications of the algorithm
Local Search Algorithms
Genetic Algorithm
• A genetic algorithm (or GA) is a local search in which successor states are generated by combining two parent states rather than by modifying a single state.
• Start with k randomly generated states (the population).
• A state is represented as a string over a finite alphabet (often a string of 0s and 1s, or digits).
• An evaluation function (fitness function) assigns higher values to better states.
• Produce the next generation of states by selection, crossover, and mutation.
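The selection–crossover–mutation cycle above can be sketched as follows. This is an illustrative toy, not the slides' algorithm: the fitness function (OneMax — count of 1-bits), fitness-proportional selection, single-point crossover, and bit-flip mutation are all our chosen stand-ins for the abstract operators named on the slide.

```python
import random

def genetic_algorithm(pop_size=20, length=12, generations=100, p_mut=0.05, seed=0):
    """Minimal GA sketch over bit strings with OneMax fitness."""
    rng = random.Random(seed)
    def fitness(s):
        return sum(s)                                   # count of 1-bits
    # k randomly generated states (the population)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(s) + 1 for s in pop]         # +1 avoids zero weight
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = rng.choices(pop, weights=weights, k=2)   # selection
            cut = rng.randrange(1, length)                    # crossover point
            child = p1[:cut] + p2[cut:]                       # single-point crossover
            child = [b ^ 1 if rng.random() < p_mut else b     # bit-flip mutation
                     for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

With these settings the population quickly converges toward the all-ones string, the OneMax optimum.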
Local Search Algorithms
Genetic Algorithm
Basic Questions
• How does one decide who survives?
• How does one decide how successfully each survivor produces offspring?
• How are the offspring related to the parents?
• How does one ensure that genetic variation is maintained, even though with every generation individuals are supposed to become fitter?
Local Search Algorithms
Genetic Algorithm
• Initial Population
• Fitness Function
• Crossover
• Mutation
Local Search Algorithms
Genetic Algorithm
• Initial Population