Lecture - 11 - 16
Turn 3 (Agent 1's turn): Agent 1 (O) checks lines: 1, 2, 3, 4, 5, 6, 7, 8, 9, 1-5-9, and 3-5-7. No potential winning lines for O are found. Agent 1 chooses position 1.
Turn 4 (Agent 2's turn): Agent 2 (X) checks lines: 1, 2, 3, 4, 5, 6, 7, 8, 9, 1-5-9, and 3-5-7. No potential winning lines for X are found. Agent 2 chooses position 3.
Iteration
Turn 5 (Agent 1's turn): Agent 1 (O) checks lines: 1, 2, 3, 4, 5, 6,
7, 8, 9, 1-5-9, and 3-5-7. Potential winning line found for O: 1, 5,
9. Agent 1 chooses position 9.
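Below is a minimal Python sketch of the line-checking step described in the turns above. The board representation (a dict mapping positions 1-9 to 'X', 'O', or None) and the helper name find_winning_move are illustrative assumptions, not taken from the lecture.

LINES = [
    (1, 2, 3), (4, 5, 6), (7, 8, 9),   # rows
    (1, 4, 7), (2, 5, 8), (3, 6, 9),   # columns
    (1, 5, 9), (3, 5, 7),              # diagonals
]

def find_winning_move(board, player):
    """Return the position that completes a line for `player`, or None."""
    for line in LINES:
        marks = [board[p] for p in line]
        if marks.count(player) == 2 and marks.count(None) == 1:
            return line[marks.index(None)]
    return None

# Example matching Turn 5: O already occupies positions 1 and 5.
board = {p: None for p in range(1, 10)}
board[1], board[5] = "O", "O"
print(find_winning_move(board, "O"))   # 9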
[Figures: "With Ratings" and "With Passwin"]
• For a small number of cities, this problem could reasonably be brute-forced. However, as the number of cities increases, it becomes increasingly difficult to arrive at a solution.
• The nearest-neighbor (NN) heuristic handles this problem nicely: the computer always picks the nearest unvisited city as the next stop on the path. NN does not always produce the optimal tour, but it is usually close enough to the best solution that the difference is negligible for the purposes of the TSP. By using this heuristic, the complexity drops from the O(n!) of brute force to O(n^2).
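A hedged Python sketch of the nearest-neighbor heuristic follows. The city coordinates, the Euclidean distance metric, and the starting city are illustrative assumptions.

import math

def nearest_neighbor_tour(cities, start=0):
    """Always visit the closest unvisited city next; runs in O(n^2) time."""
    unvisited = set(range(len(cities)))
    unvisited.remove(start)
    tour = [start]
    current = start
    while unvisited:
        nearest = min(unvisited, key=lambda j: math.dist(cities[current], cities[j]))
        unvisited.remove(nearest)
        tour.append(nearest)
        current = nearest
    return tour   # visiting order; return to the start city to close the tour

# Example: four cities on a unit square.
print(nearest_neighbor_tour([(0, 0), (0, 1), (1, 1), (1, 0)]))   # e.g. [0, 1, 2, 3]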
Applying Heuristics to Your Algorithms
• To apply heuristics to your algorithms, you need to know the solution or goal you’re looking for ahead of
time. If you know your end goal, you can specify rules that can help you achieve it.
• If the algorithm is being designed to find a sequence of moves by which a knight visits every square of an 8x8 chessboard, it's possible to create a heuristic that causes the knight to always choose the move that leaves the most available moves afterward.
• However, because we're trying to create a specific path, it may be better to create a heuristic that causes the knight to choose the move with the fewest available moves afterward.
• Since the available decisions are much narrower at each step, so too are the candidate solutions, and a complete path is found more quickly (see the sketch after this list).
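Below is a minimal Python sketch of the "fewest available moves afterward" heuristic (often called Warnsdorff's rule). The board size, starting square, and function names are illustrative assumptions, and the greedy rule is not guaranteed to finish a tour from every square.

KNIGHT_MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def onward_moves(square, visited, n=8):
    """List the unvisited squares a knight can reach from `square` on an n x n board."""
    r, c = square
    return [(r + dr, c + dc) for dr, dc in KNIGHT_MOVES
            if 0 <= r + dr < n and 0 <= c + dc < n and (r + dr, c + dc) not in visited]

def knight_tour(start=(0, 0), n=8):
    """Greedy tour: always move to the square with the fewest onward moves."""
    path, visited = [start], {start}
    while len(path) < n * n:
        candidates = onward_moves(path[-1], visited, n)
        if not candidates:
            return path   # dead end; the heuristic does not always succeed
        nxt = min(candidates, key=lambda sq: len(onward_moves(sq, visited, n)))
        path.append(nxt)
        visited.add(nxt)
    return path

print(len(knight_tour()))   # 64 if a complete tour was found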
Generate and Test Search
• Iterative Nature: Iteration allows for learning from previous attempts. Solutions are refined through
repeated iterations.
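A minimal generate-and-test sketch is shown below, under toy assumptions: the generator proposes the integers 0 through 100 and the test accepts the candidate whose square is 2025. Both the generator and the test are illustrative only.

def generate_candidates():
    for x in range(101):            # generator: propose candidate solutions
        yield x

def test(x):
    return x * x == 2025            # tester: accept or reject a candidate

def generate_and_test():
    for candidate in generate_candidates():
        if test(candidate):         # iterate until a candidate passes the test
            return candidate
    return None

print(generate_and_test())          # 45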
3. Ridge: Any point on a ridge can look like a peak because movement in all possible directions is
downward. Hence the algorithm stops when it reaches this state.
• To overcome a ridge: apply two or more rules before testing, i.e., move in several directions at once.
Pros & Cons of Hill Climbing Algorithm
Advantages –
• Hill Climbing is a simple and intuitive algorithm that is easy to understand and implement.
• It can be used in a wide variety of optimization problems, including those with a large search space and complex
constraints.
• Hill Climbing is often very efficient in finding local optima, making it a good choice for problems where a good solution
is needed quickly.
• The algorithm can be easily modified and extended to include additional heuristics or constraints.
Limitations –
• Hill Climbing can get stuck in local optima, meaning that it may not find the global optimum of the problem.
• The algorithm is sensitive to the choice of initial solution, and a poor initial solution may result in a poor final solution.
• Hill Climbing does not explore the search space very thoroughly, which can limit its ability to find better solutions.
• It may be less effective than other optimization algorithms, such as genetic algorithms or simulated annealing, for
certain types of problems.
[Figure: state-space landscape, showing the objective function with its global maximum, the cost function with its global minimum, the current state, and its neighbors as explored by hill climbing]
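The landscape above corresponds to a simple loop: repeatedly move to the best neighboring state and stop when no neighbor is better. Below is a minimal steepest-ascent sketch; the integer state space and the toy objective function are illustrative assumptions.

def objective(x):
    return -(x - 30) ** 2 + 5 * (x % 7)    # toy function with several local maxima

def hill_climb(start):
    current = start
    while True:
        neighbors = [s for s in (current - 1, current + 1) if 0 <= s <= 99]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(current):
            return current                  # no uphill neighbor: a local maximum
        current = best

print(hill_climb(0), hill_climb(90))        # different starts can stop at different local maxima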
Example – House & Hospital Distance
• The "House and Hospital" problem is a classic optimization problem that involves
finding the optimal assignment of houses to hospitals based on certain criteria. The
problem statement typically goes something like this:
Problem Statement:
• We have a set of houses and a set of hospitals in a world formatted as a grid. Our objective in this case is to minimize the distance of the houses from a hospital.
• There are a number of ways we could calculate that distance, but one way is the Manhattan distance: how many rows and columns you would have to move inside this grid layout in order to get from a house to a hospital (see the sketch below).
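A hedged sketch of the cost used in this example is shown below: the sum, over all houses, of the Manhattan distance to the nearest hospital. The grid coordinates are illustrative assumptions.

def manhattan(a, b):
    """Manhattan distance between two (row, col) grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def total_cost(houses, hospitals):
    """Sum, over all houses, of the distance to the nearest hospital."""
    return sum(min(manhattan(h, p) for p in hospitals) for h in houses)

houses = [(0, 0), (0, 5), (4, 1), (5, 6)]
hospitals = [(1, 2), (4, 5)]
print(total_cost(houses, hospitals))   # 13 with these illustrative coordinates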
Example – House & Hospital Distance
[Figures: successive hospital placements found by hill climbing, with costs 17, 15, 13, and 11]
The cost-11 placement is optimal with respect to the local optimum reached by the hill climbing algorithm. But what about the global optimum?
[Figure: a placement with cost 9]
The cost-9 placement is the global optimum, but hill climbing will not give this solution.
Conclusion
• Remember that while hill climbing algorithms are relatively simple and intuitive, they may struggle with
complex optimization problems that have many local optima or discontinuous search spaces.
• To address the limitations of hill climbing algorithm, researchers have developed various extensions
and hybrid approaches that combine hill climbing with other optimization methods, such as genetic
algorithms, simulated annealing, or particle swarm optimization.
• These combinations aim to leverage the strengths of different techniques while mitigating their
weaknesses.
• It's important to convey that while hill climbing algorithms provide a foundational understanding of
optimization, they are just one piece of the broader landscape of optimization techniques.
Simulated Annealing
• A variation of hill climbing in which, at the beginning of the process, some downhill moves may be
made.
• The idea is to do enough exploration of the whole space early on that the final solution is relatively insensitive to the starting state.
• This lowers the chances of getting caught at a local maximum, a plateau, or a ridge.
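Below is a minimal simulated-annealing sketch under illustrative assumptions: integer states 0 to 99, the same toy objective as the hill climbing sketch above, and a simple geometric cooling schedule.

import math
import random

def objective(x):
    return -(x - 30) ** 2 + 5 * (x % 7)    # same toy landscape as the hill climbing sketch

def simulated_annealing(start, t0=100.0, cooling=0.95, steps=500):
    current, t = start, t0
    for _ in range(steps):
        neighbor = max(0, min(99, current + random.choice((-1, 1))))
        delta = objective(neighbor) - objective(current)
        # Always accept uphill moves; accept downhill moves with probability
        # exp(delta / t), which is high early on (large t) and near zero late.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = neighbor
        t *= cooling                        # cool the temperature down
    return current

print(simulated_annealing(90))              # often escapes local maxima that plain hill climbing gets stuck in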