Hill Climbing Algorithm in AI
The hill climbing algorithm is a local search algorithm which continuously moves in the direction
of increasing elevation/value in order to find the peak of the mountain, i.e. the best solution to
the problem. It terminates when it reaches a peak where no neighbor has a higher value.
Hill climbing is a technique used for optimizing mathematical problems. One of the most widely
discussed examples of the hill climbing algorithm is the Traveling Salesman Problem, in which we
need to minimize the total distance traveled by the salesman.
It is also called greedy local search because it only looks at its immediate neighbor states and
not beyond them.
A node of the hill climbing algorithm has two components: state and value.
The algorithm does not need to maintain a search tree or graph, as it only keeps track of a single
current state.
Features of the hill climbing algorithm:
Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The
Generate and Test method produces feedback which helps to decide which direction to move in
the search space.
Greedy approach: The hill climbing search moves in the direction which optimizes the cost (a
minimal sketch of this greedy loop follows the list).
No backtracking: It does not backtrack through the search space, as it does not remember
previous states.
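These features can be seen together in a minimal, generic loop: keep one current state, generate its neighbors, move greedily to a better neighbor, and stop when none exists. The sketch below is only an illustration; the neighbors and value functions are assumed placeholders for problem-specific definitions.

def hill_climbing(start, neighbors, value):
    """Generic greedy local search (maximization).

    start     -- initial state
    neighbors -- assumed function: state -> iterable of neighboring states
    value     -- assumed objective function: state -> number to maximize
    """
    current = start
    while True:
        # Look only at immediate neighbors (greedy, no backtracking).
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=value)
        # Stop at a peak: no neighbor is strictly better.
        if value(best) <= value(current):
            return current
        current = best  # single current state; previous states are forgotten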
State-space landscape: the y-axis represents the function value, which can be an objective
function or a cost function, and the x-axis represents the state space. If the function on the
y-axis is a cost function, the goal of the search is to find the global minimum (or a local
minimum). If the function on the y-axis is an objective function, the goal of the search is to
find the global maximum (or a local maximum).
Different regions in the state space landscape (a small numeric illustration follows the list):
Local Maximum: A local maximum is a state which is better than its neighbor states, but there is
another state in the landscape which is higher than it.
Global Maximum: The global maximum is the best possible state in the state space landscape. It
has the highest value of the objective function.
Flat local maximum: A flat region of the landscape in which all the neighbor states of the
current state have the same value.
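As a small, hand-made illustration of these regions, consider an objective function over the integer states 0 to 9; the values below are invented purely to show a local maximum, a flat region, and the global maximum.

# Invented 1-D landscape: index = state, entry = objective value.
values = [1, 3, 5, 4, 4, 4, 7, 9, 6, 2]

def is_local_max(i):
    """A state is a local maximum if no neighbor has a strictly higher value."""
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
    return all(values[j] <= values[i] for j in nbrs)

local_maxima = [i for i in range(len(values)) if is_local_max(i)]
global_maximum = max(range(len(values)), key=lambda i: values[i])

print("local maxima:", local_maxima)      # [2, 4, 7]
print("global maximum:", global_maximum)  # 7

Here state 2 is an ordinary local maximum, state 4 lies in the flat stretch of states 3 to 5 (all value 4), i.e. a flat local maximum, and state 7 is the global maximum.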
1. Simple hill climbing:
Simple hill climbing is the simplest variant: it evaluates one neighbor at a time and accepts the
first neighbor that improves on the current state (a short sketch in code follows the steps).
Step 1: Evaluate the initial state. If it is the goal state, then return success and stop;
otherwise, make the initial state the current state.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state to generate a new state.
Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then assign the new state as the current state.
c. Else, if it is not better than the current state, then return to Step 2.
Step 5: Exit.
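A minimal sketch of these steps, assuming a neighbors function that yields candidate states one at a time (the operators), a value function for comparison, and an optional is_goal test; all three are placeholders, not part of the original description.

def simple_hill_climbing(start, neighbors, value, is_goal=lambda s: False):
    """Accept the first neighbor that is better than the current state."""
    current = start
    if is_goal(current):                        # Step 1
        return current
    while True:                                 # Step 2
        improved = False
        for new_state in neighbors(current):    # Step 3: apply one operator at a time
            if is_goal(new_state):              # Step 4a
                return new_state
            if value(new_state) > value(current):   # Step 4b
                current = new_state
                improved = True
                break                           # restart the loop from the new state
            # Step 4c: otherwise try the next operator
        if not improved:                        # no operator improves the current state
            return current                      # Step 5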
2. Steepest-Ascent hill climbing:
Steepest-ascent hill climbing is a variation of simple hill climbing: it examines all the
neighbors of the current state and selects the best one, i.e. the steepest uphill move (a short
sketch in code follows the steps).
Step 1: Evaluate the initial state. If it is the goal state, then return success and stop; else,
make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
a. Let SUCC be a state such that any successor of the current state will be better than it.
b. For each operator that applies to the current state, apply it and generate a new state.
c. Evaluate the new state. If it is the goal state, then return it and quit; else, compare it to SUCC.
d. If the new state is better than SUCC, then set SUCC to the new state.
e. If SUCC is better than the current state, then set the current state to SUCC.
Step 3: Exit.
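The same sketch adapted to steepest ascent: every successor is evaluated, and the single best one (SUCC) replaces the current state only if it is an improvement. As before, neighbors, value, and is_goal are assumed placeholders.

def steepest_ascent_hill_climbing(start, neighbors, value, is_goal=lambda s: False):
    """Examine all successors and move to the best one, if it improves the current state."""
    current = start
    if is_goal(current):                            # Step 1
        return current
    while True:                                     # Step 2
        succ = None                                 # a. best successor found so far
        for new_state in neighbors(current):        # b. apply each operator
            if is_goal(new_state):                  # c. goal check
                return new_state
            if succ is None or value(new_state) > value(succ):
                succ = new_state                    # d. keep the best successor
        if succ is None or value(succ) <= value(current):
            return current                          # current state does not change: exit
        current = succ                              # e. move uphill to SUCC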
3. Stochastic hill climbing:
Stochastic hill climbing does not examine all of its neighbors before moving. Instead, this
search algorithm selects one neighbor node at random and decides whether to make it the current
state or to examine another state.
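A hedged sketch of this behavior, again assuming placeholder neighbors and value functions: one random neighbor is drawn per step and accepted only if it improves on the current state.

import random

def stochastic_hill_climbing(start, neighbors, value, max_steps=1000):
    """Pick one random neighbor per step; accept it only if it is better."""
    current = start
    for _ in range(max_steps):                  # bounded, since progress is random
        candidates = list(neighbors(current))
        if not candidates:
            break
        candidate = random.choice(candidates)   # examine a single random neighbor
        if value(candidate) > value(current):
            current = candidate                 # accept the improving move
        # otherwise keep the current state and try another random neighbor
    return current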
Problems in the hill climbing algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of
its neighboring states, but there is another state present which is higher than it.
Solution: The backtracking technique can be a solution to the local maximum problem in the state
space landscape. Create a list of promising paths so that the algorithm can backtrack through the
search space and explore other paths as well (a small sketch follows).
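One way to read this suggestion in code, under the assumption that unexplored promising neighbors are remembered on a stack and revisited when the search gets stuck; this is an interpretation, not a prescribed implementation.

def hill_climbing_with_backtracking(start, neighbors, value, max_steps=1000):
    """Greedy ascent, but keep runner-up neighbors so the search can backtrack when stuck."""
    current = start
    best = start
    promising = []                                  # stack of promising states to revisit
    for _ in range(max_steps):
        candidates = sorted(neighbors(current), key=value, reverse=True)
        if candidates and value(candidates[0]) > value(current):
            promising.extend(candidates[1:3])       # remember a few runners-up
            current = candidates[0]                 # greedy move uphill
        elif promising:
            current = promising.pop()               # stuck: backtrack to a promising state
        else:
            break                                   # nothing left to explore
        if value(current) > value(best):
            best = current
    return best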
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the
current state have the same value. Because of this, the algorithm cannot find a best direction in
which to move, and a hill climbing search may get lost in the plateau area.
Solution: The solution to the plateau problem is to take big steps (or very small steps) while
searching. Randomly select a state which is far away from the current state, so that the
algorithm has a chance of reaching a non-plateau region (see the sketch below).
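A hedged sketch of the random-jump idea, assuming a hypothetical random_far_state helper that proposes a state several moves away from the current one; both the helper and the retry limit are assumptions.

def escape_plateau(current, neighbors, value, random_far_state, tries=20):
    """If every neighbor has the same value, jump to a distant random state."""
    on_plateau = all(value(n) == value(current) for n in neighbors(current))
    if not on_plateau:
        return current                          # not on a plateau, nothing to do
    for _ in range(tries):
        candidate = random_far_state(current)   # hypothetical big-step proposal
        if value(candidate) != value(current):
            return candidate                    # reached a non-plateau region
    return current                              # give up after a few tries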
3. Ridges: A ridge is a special form of local maximum. It is an area which is higher than its
surrounding areas but which itself has a slope, and it cannot be climbed by a single move.
Solution: With the use of bidirectional search, or by moving in several different directions at
once, we can improve on this problem.
Simulated Annealing:
A hill climbing algorithm which never makes a move towards a lower value is guaranteed to be
incomplete, because it can get stuck on a local maximum. If the algorithm instead performs a pure
random walk, moving to a randomly chosen successor, it may be complete but is not efficient.
Simulated annealing is an algorithm which yields both efficiency and completeness.
In mechanical terms, annealing is the process of heating a metal or glass to a high temperature
and then cooling it gradually, which allows the material to reach a low-energy crystalline state.
The same idea is used in simulated annealing, in which the algorithm picks a random move instead
of the best move. If the random move improves the state, it is accepted. Otherwise, the algorithm
accepts the downhill move with a probability of less than 1, so it can escape local maxima; this
probability shrinks as the "temperature" is gradually lowered during the search.
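A compact sketch of this acceptance rule, assuming a maximization problem, placeholder neighbors and value functions, and a simple geometric cooling schedule (the schedule and its constants are assumptions, not part of the original description).

import math
import random

def simulated_annealing(start, neighbors, value,
                        temperature=1.0, cooling=0.995, min_temperature=1e-3):
    """Always accept uphill moves; accept downhill moves with probability exp(delta / T)."""
    current = start
    best = start
    t = temperature
    while t > min_temperature:
        candidates = list(neighbors(current))
        if not candidates:
            break
        candidate = random.choice(candidates)       # pick a random move, not the best one
        delta = value(candidate) - value(current)   # positive means an uphill (better) move
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate                     # downhill moves accepted with prob < 1
        if value(current) > value(best):
            best = current
        t *= cooling                                # lower the "temperature" gradually
    return best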