
Design and Analysis of Algorithms
Chapter 5: Backtracking

Elsay.M
Chapter 5: Backtracking (6hr)

5.1. 8 Queens Problem
5.2. Graph Coloring
5.3. Hamiltonian Cycle
5.4. Knapsack Problem
5.5. Traveling Salesman Problem
Introduction to Backtracking
• Backtracking is a problem-solving technique that
involves searching for a solution by exploring possible
options and undoing ("backtracking") previous steps
if a solution is not feasible.
• It is typically used to solve problems that involve
constraints or decision-making processes, where we
need to find all possible solutions or a valid solution
that satisfies given conditions.
• Backtracking explores all potential solutions
recursively and backtracks once it determines that a
solution path is not viable.
5.1 The 8 Queens Problem
• Problem Description
• The 8 Queens problem involves placing 8 queens on an 8x8
chessboard such that no two queens attack each other.
Queens can attack other pieces in the same row, column, or
diagonal. The goal is to find all possible configurations that
allow for placing all 8 queens without conflict.
• Backtracking Approach
1. Place a queen in the first row, then move to the next row.
2. For each row, try placing a queen in each column.
3. Check if the current placement is safe (i.e., no other queens threaten
the position).
4. If it is safe, place the queen and move to the next row.
5. If it is not safe, backtrack (i.e., remove the queen from the previous
row and try the next column).
Example
• Start by placing the first queen in the first
row.
• If the queen is placed at (1,1), the next
queen can be placed in the second row in
any column except the first column and
columns that share a diagonal with (1,1).
• Continue until either a solution is found or
no valid placements are left, at which point
you backtrack.
1. The 8 Queens Problem
• Problem Description
• The task is to place 8 queens on an 8x8 chessboard such that no
two queens can attack each other. Queens can attack vertically,
horizontally, and diagonally.
• Example and Solution
o Step 1: Start with an empty 8x8 board.
o Step 2: Place the first queen in row 1, column 1 (Q1).
o Step 3: In the second row, place the second queen in the first available
position that is not under attack by the first queen (row 2, column 3).
o Step 4: Continue placing queens in each subsequent row, ensuring no two
queens are in the same column, row, or diagonal.
o Step 5: If no valid position is found in a row, backtrack to the previous
row and move the last queen to the next available position.
o Step 6: Repeat this process until all 8 queens are placed.
Cont..
[8×8 board diagram: one complete solution, with exactly one queen (Q) in each row and each column and no two queens sharing a diagonal]

• How Backtracking Works:
• Place one queen at a time and check if the position is safe.
• If a safe placement is not possible, backtrack to the previous position and try placing the queen in the next available column.
Algorithm Pseudocode (Python)
def solveNQueens(board, row):
    if row >= 8:              # all 8 queens placed: a complete solution
        printSolution(board)
        return
    for col in range(8):
        if isSafe(board, row, col):
            board[row][col] = 1               # place a queen
            solveNQueens(board, row + 1)      # recurse on the next row
            board[row][col] = 0               # backtrack: remove the queen
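The pseudocode above assumes two helper routines, isSafe and printSolution, that the slides do not define. A minimal sketch of both, keeping the slide's names and the 0/1 board representation (an illustrative completion, not part of the original deck):

def isSafe(board, row, col):
    # Safe if no queen occupies the same column or either upper
    # diagonal (only rows above the current one are filled so far).
    for r in range(row):
        if board[r][col] == 1:
            return False
    for r, c in zip(range(row - 1, -1, -1), range(col - 1, -1, -1)):
        if board[r][c] == 1:
            return False
    for r, c in zip(range(row - 1, -1, -1), range(col + 1, 8)):
        if board[r][c] == 1:
            return False
    return True

def printSolution(board):
    # Print the board with Q for a queen and . for an empty square.
    for row in board:
        print(" ".join("Q" if cell else "." for cell in row))
    print()

# Usage: start from an empty board at row 0; prints all 92 solutions.
solveNQueens([[0] * 8 for _ in range(8)], 0)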
5.2 Graph Coloring
• Problem Description
• Graph coloring is the problem of assigning colors to the vertices
of a graph such that no two adjacent vertices have the same
color. The objective is to use the minimum number of colors.
• Backtracking Approach
1. Start by assigning the first color to the first vertex.
2. Move to the next vertex and assign it a color different from its
adjacent vertices.
3. If a valid color is found, move to the next vertex.
4. If no valid color is available, backtrack to the previous vertex
and try a different color.
Example: 2. Graph Coloring Problem
• Problem Description
• The goal is to color the vertices of a graph using the
minimum number of colors such that no two adjacent
vertices have the same color.
• Example and Solution
• Consider the following graph (a 4-cycle):
A — B
|   |
C — D
Cont..
• Vertices: A, B, C, D
• Edges: AB, AC, BD, CD
Step-by-Step Coloring:
o Step 1: Start with vertex A and color it Red.
o Step 2: Move to vertex B, color it Blue (since A is Red, B must be a different color).
o Step 3: Move to vertex C, color it Blue (A is Red, so C can be Blue).
o Step 4: Move to vertex D, color it Red (B is Blue and C is Blue, so D must be Red).
Solution:
• Vertex A: Red, Vertex B: Blue
• Vertex C: Blue, Vertex D: Red
• How Backtracking Works: Try coloring each vertex with a different color
than its adjacent vertices. If a vertex cannot be colored, backtrack and
change the color of the previous vertex.
Algorithm Pseudocode (Python)
def graphColoring(graph, colors, vertex):
    if vertex == V:                  # all V vertices are colored
        return True
    for color in range(1, m + 1):    # try each of the m available colors
        if isSafe(graph, vertex, color):
            colors[vertex] = color
            if graphColoring(graph, colors, vertex + 1):
                return True
            colors[vertex] = 0       # backtrack: un-color this vertex
    return False
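This code relies on module-level V (number of vertices), m (number of colors), and an isSafe helper that the slides leave undefined. A minimal sketch wired to the 4-cycle example above (illustrative, not from the original deck):

V = 4   # vertices A, B, C, D mapped to indices 0-3
m = 2   # available colors (1 = Red, 2 = Blue)

# Adjacency matrix for the example graph: edges AB, AC, BD, CD.
graph = [[0, 1, 1, 0],
         [1, 0, 0, 1],
         [1, 0, 0, 1],
         [0, 1, 1, 0]]
colors = [0] * V   # 0 means "not yet colored"

def isSafe(graph, vertex, color):
    # A color is safe if no neighbor of `vertex` already uses it.
    return all(not (graph[vertex][u] and colors[u] == color)
               for u in range(V))

if graphColoring(graph, colors, 0):
    print(colors)   # [1, 2, 2, 1] -> A: Red, B: Blue, C: Blue, D: Red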
5.3 Hamiltonian Cycle
• Problem Description
• A Hamiltonian cycle is a cycle in a graph that visits each
vertex exactly once and returns to the starting vertex.
The goal is to determine if such a cycle exists in a given
graph.
Backtracking Approach
1. Start at a vertex and try to construct a path by visiting an
adjacent vertex.
2. Ensure that the next vertex has not already been visited and is
adjacent to the current vertex.
3. If all vertices are visited and the last vertex is adjacent to the
first vertex, a Hamiltonian cycle is found.
4. If no valid path is found, backtrack and try a different vertex.
Example: Hamiltonian Cycle
• Problem Description: A Hamiltonian cycle is a cycle that visits every vertex in a
graph exactly once and returns to the starting vertex.
• Example and Solution: Consider the following graph:
A — B
|   |
C — D
Vertices: A, B, C, D and Edges: AB, AC, BD, CD
• Step-by-Step Hamiltonian Cycle:
o Step 1: Start at vertex A.
o Step 2: Move to vertex B.
o Step 3: Move to vertex D.
o Step 4: Move to vertex C.
o Step 5: Return to vertex A to complete the cycle.
• Solution: The Hamiltonian cycle is A → B → D → C → A.
• How Backtracking Works: Start from a vertex and keep visiting unvisited vertices.
• If you cannot complete the cycle, backtrack and try a different path.
Algorithm Pseudocode (Python)
def hamiltonianCycle(graph, path, position):
    if position == V:
        # All vertices are on the path: a Hamiltonian cycle exists if
        # the last vertex connects back to the first.
        return graph[path[position - 1]][path[0]] == 1
    for vertex in range(1, V):
        if isSafe(vertex, graph, path, position):
            path[position] = vertex
            if hamiltonianCycle(graph, path, position + 1):
                return True
            path[position] = -1      # backtrack: remove this vertex
    return False
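Here V and isSafe are again assumed; isSafe must check that the candidate vertex is adjacent to the last vertex on the path and not already used. A minimal sketch on the example graph, with the starting vertex fixed at A (illustrative, not from the original deck):

V = 4
# Adjacency matrix for the example: edges AB, AC, BD, CD (A=0 ... D=3).
graph = [[0, 1, 1, 0],
         [1, 0, 0, 1],
         [1, 0, 0, 1],
         [0, 1, 1, 0]]

def isSafe(vertex, graph, path, position):
    # Must be adjacent to the previously placed vertex...
    if graph[path[position - 1]][vertex] == 0:
        return False
    # ...and must not already appear on the path.
    return vertex not in path[:position]

path = [-1] * V
path[0] = 0   # fix the starting vertex to A
if hamiltonianCycle(graph, path, 1):
    print(path)   # [0, 1, 3, 2] -> A -> B -> D -> C -> A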
5.4 Knapsack Problem
• Problem Description
• The Knapsack problem involves selecting a subset of items, each
with a weight and a value, to maximize the total value without
exceeding the weight capacity of the knapsack.
• Backtracking Approach
• Start with no items in the knapsack.
• For each item, decide whether to include it in the knapsack.
• If including the item keeps the total weight below the capacity,
add it and move to the next item.
• If the total weight exceeds the capacity, backtrack and try a
different combination.
Example: Knapsack Problem
• Problem Description: You are given items, each with a weight and value, and a
knapsack with a weight capacity. The goal is to select items to maximize value without
exceeding the weight limit.
• Example and Solution: Knapsack capacity: 5 kg. Items:
• Item 1: Weight = 2 kg, Value = $12
• Item 2: Weight = 1 kg, Value = $10
• Item 3: Weight = 3 kg, Value = $20
• Item 4: Weight = 2 kg, Value = $15
• Step-by-Step Solution:
• Step 1: Start with an empty knapsack.
• Step 2: Try adding items one by one.
 Add Item 1: Total weight = 2 kg, Total value = $12.
 Add Item 2: Total weight = 3 kg, Total value = $22.
 Item 3 would overfill the knapsack (3 kg + 3 kg = 6 kg > 5 kg), so skip it and add Item 4: Total weight = 5 kg, Total value = $37.
• Step 3: The knapsack is now full, with a total weight of 5 kg and a total value of $37.
• Solution: The optimal selection is Item 1, Item 2, and Item 4, giving a total value of $37.
• How Backtracking Works: Try adding each item to the knapsack.
• If the total weight exceeds the capacity, backtrack and try a different combination.
Example 2.
Consider a knapsack with a weight limit of 50 kg and
three items with the following weights and values:
1. Item 1: Weight 10 kg, Value $60
2. Item 2: Weight 20 kg, Value $100
3. Item 3: Weight 30 kg, Value $120
• Backtracking explores all feasible combinations of items, such as (1,2), (1,3), and (2,3), and selects the one with the highest value; here the best choice is Items 2 and 3 (50 kg, $220).
Algorithm Pseudocode (Python)
def knapsack(capacity, weights, values, n):
    # Base case: no items left or no remaining capacity.
    if n == 0 or capacity == 0:
        return 0
    # Item n-1 is too heavy for the remaining capacity: exclude it.
    if weights[n - 1] > capacity:
        return knapsack(capacity, weights, values, n - 1)
    # Otherwise return the better of including or excluding item n-1.
    return max(values[n - 1] + knapsack(capacity - weights[n - 1],
                                        weights, values, n - 1),
               knapsack(capacity, weights, values, n - 1))
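Note that this is the plain recursive include/exclude search: every feasible combination is explored, and the recursion backtracks by also trying the branch that leaves each item out. A small driver applying it to the first example (hypothetical, not from the slides):

weights = [2, 1, 3, 2]       # kg, Items 1-4 from the example
values = [12, 10, 20, 15]    # dollars
capacity = 5

print(knapsack(capacity, weights, values, len(weights)))   # 37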

5.5 Traveling Salesman Problem (TSP)

• Problem Description
• The Traveling Salesman Problem (TSP) involves finding the
shortest possible route that visits each city exactly once and
returns to the starting city. This problem is similar to the
Hamiltonian cycle but includes minimizing the total distance.
• Backtracking Approach
o Start from a city and move to the next city that hasn't been visited
yet.
o Keep track of the total distance traveled.
o If all cities are visited and the current city is adjacent to the starting
city, return the total distance.
o If no valid solution is found, backtrack and try a different route.
Example: 5. Traveling Salesman Problem (TSP)
• Problem Description: The Traveling Salesman Problem (TSP) involves finding the
shortest possible route that visits a set of cities exactly once and returns to the
starting city.
• Example and Solution: Consider 4 cities (A, B, C, D) with the following distances
between them:
o A → B: 10 km, A → C: 15 km, A → D: 20 km, B → C: 35 km, B → D: 25 km, C → D: 30 km
• Step-by-Step Solution:
o Step 1: Start at city A.
o Step 2: Visit city B (10 km).
o Step 3: From B, visit city D (25 km).
o Step 4: From D, visit city C (30 km).
o Step 5: Return to city A (15 km).
• Total Distance: 10 + 25 + 30 + 15 = 80 km.
• Solution: The shortest route is A → B → D → C → A, with a total distance of 80 km.
• How Backtracking Works: Try visiting each city exactly once.
• If the route is not optimal, backtrack and try another route.
Algorithm Pseudocode (Python)
def tsp(graph, position, n, count, cost, visited):
    # All n cities visited and an edge leads back to the start (city 0).
    if count == n and graph[position][0]:
        return cost + graph[position][0]
    min_cost = float('inf')
    for i in range(n):
        if not visited[i] and graph[position][i]:
            visited[i] = True
            min_cost = min(min_cost,
                           tsp(graph, i, n, count + 1,
                               cost + graph[position][i], visited))
            visited[i] = False       # backtrack: unmark city i
    return min_cost
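A small driver for the 4-city example, using the distance matrix given earlier with city A as index 0 (hypothetical, not from the slides):

n = 4
graph = [[0, 10, 15, 20],    # pairwise distances in km for A, B, C, D
         [10, 0, 35, 25],
         [15, 35, 0, 30],
         [20, 25, 30, 0]]
visited = [False] * n
visited[0] = True            # the tour starts (and ends) at city A
print(tsp(graph, 0, n, 1, 0, visited))   # 80 (A -> B -> D -> C -> A)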
Conclusion
• Backtracking is an effective way to solve problems where multiple solutions
are possible. For each of these problems (Queens, Graph Coloring,
Hamiltonian Cycle, Knapsack, and TSP), backtracking explores all
possibilities, ensuring that the solution meets the problem's constraints.
• If a partial solution fails, backtracking allows us to discard that path and try
other options, leading to an optimal or valid solution.

• Backtracking is a powerful technique for solving optimization and constraint satisfaction problems.
• By exploring all possible solutions and backtracking when necessary, we can
find valid solutions to complex problems like the 8 Queens, Graph Coloring,
Hamiltonian Cycle, Knapsack Problem, and Traveling Salesman Problem.
Design and Analysis of Algorithms
Chapter 6: Introduction to Probabilistic Algorithms – Parallel Algorithms

Elsay.M
6.1 Introduction to Probabilistic Algorithms

• Definition: A probabilistic algorithm is an algorithm that uses random numbers to influence its behavior. Unlike deterministic algorithms, which produce the same output for a given input every time, probabilistic algorithms may produce different outputs for the same input on different executions.
• Probabilistic algorithms are particularly useful when dealing
with problems that are too complex to solve exactly within
reasonable time or space constraints. They trade accuracy
for efficiency and simplicity.
Key Characteristics of Probabilistic Algorithms
• Randomness: These algorithms rely on random
number generation as part of their logic.
• Efficiency: They often solve problems faster by taking
advantage of random choices, especially in cases where
deterministic algorithms would require much more time.
• Approximation: The results of probabilistic algorithms
are not always exact; they offer approximate solutions
with a high probability of being correct.
Types of Probabilistic Algorithms

1. Las Vegas Algorithms:
 Definition: These algorithms always produce correct results, but their runtime is variable.
 Example: Randomized QuickSort.
• Key Point: The result is guaranteed, but the running time may vary depending on the randomness.
2. Monte Carlo Algorithms:
• Definition: These algorithms have a fixed running time but may
produce incorrect results with a small probability.
• Example: Primality Testing (checking whether a number is prime or composite); a sketch follows below.
• Key Point: The algorithm is fast, but there is a small chance
it might return an incorrect result.
Cont.
3. Atlantic City Algorithms:
o Definition: These are probabilistic algorithms that run in
polynomial time and have a guaranteed probability of success,
usually greater than 1/2.
o Key Point: They offer an optimal combination of correctness
and runtime.
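To make the Monte Carlo idea concrete, here is a minimal sketch of the Fermat primality test, a simpler relative of the Miller-Rabin test discussed below. It always finishes after a fixed number of rounds, but with small probability it can wrongly declare a composite number prime (illustrative, not from the original deck):

import random

def is_probably_prime(n, trials=20):
    # Monte Carlo: fixed running time, small chance of a wrong answer.
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:   # Fermat's little theorem violated
            return False            # definitely composite
    return True                     # probably prime

print(is_probably_prime(104729))   # True: 104729 is prime
print(is_probably_prime(104730))   # False: even, hence composite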
Applications of Probabilistic Algorithms
• Primality Testing: Algorithms like the Miller-Rabin primality test are used to determine whether a number is prime, particularly for very large numbers. They are much faster than known deterministic algorithms.
• Example: For a large number N, the Miller-Rabin test probabilistically checks whether N is prime in time polynomial in the number of digits of N.
• Randomized QuickSort: If the pivot is chosen randomly, the expected time complexity of QuickSort is O(n log n) for every input, including inputs that would force a fixed-pivot QuickSort into its quadratic worst case (a sketch follows below).
• Monte Carlo Methods in Simulations: Monte Carlo
algorithms are widely used in physics simulations, financial
modeling, and other fields requiring probabilistic
approximation.
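And a sketch of the randomized QuickSort mentioned above, a Las Vegas algorithm: the output is always correctly sorted, only the running time varies with the random pivot choices (illustrative, not from the original deck):

import random

def randomized_quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)   # random pivot defeats adversarial inputs
    left = [x for x in a if x < pivot]
    mid = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return randomized_quicksort(left) + mid + randomized_quicksort(right)

print(randomized_quicksort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]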
Advantages and Disadvantages
of Probabilistic Algorithms
Advantages:
• Speed: Often faster than deterministic algorithms.
• Simplicity: Easier to implement in some cases due to their
reliance on randomness.
• Effectiveness: Good at providing approximate solutions when
exact solutions are not feasible.
Disadvantages:
• Uncertainty: There is no guarantee of getting the correct
answer (in Monte Carlo algorithms).
• Randomness Dependency: The performance and accuracy
depend on the quality of the random number generator.
Deterministic vs. Probabilistic Algorithms
• Probabilistic and deterministic algorithms are two fundamental types of
algorithms in computer science, differing in their approach to producing
results, handling inputs, and expected execution times.
• 1. Definition and Approach
• Deterministic Algorithm: This type of algorithm, given a specific input,
always follows the same path and produces the same output every time it
is executed. The process is predictable, with no random elements involved.
Examples include classic sorting algorithms like Merge Sort and Quick Sort
(without random pivot selection).
• Probabilistic Algorithm: Also known as a randomized algorithm, it
incorporates random variables or choices in its execution. Given the same
input, a probabilistic algorithm may produce different outputs or follow
different paths in different runs. These algorithms are useful when exact
solutions are too slow or unnecessary. Examples include the randomized
version of Quick Sort and the Monte Carlo algorithm for approximating
values.
2. Reliability and Accuracy
• Deterministic Algorithm: Guarantees the same and
exact answer every time it’s executed on the same
input. There’s no probability involved in its results, so
it’s often used when a precise result is required.
• Probabilistic Algorithm: May yield approximate
answers with a certain probability of correctness or
varying results in multiple runs. Probabilistic algorithms
are often faster but trade some accuracy or certainty,
useful when an approximate solution is sufficient or for
tasks with acceptable error margins.
3. Execution Time and Efficiency
• Deterministic Algorithm: Has predictable execution
time, as it doesn’t rely on randomness. This predictability
makes it easier to analyze and optimize in some cases.
However, it might be less efficient in certain scenarios,
especially if an exhaustive solution is required.
• Probabilistic Algorithm: The time complexity can vary
across different executions. In cases where average
performance is sufficient, probabilistic algorithms are
often faster and may have better average-case complexity
due to the random elements reducing the chance of
encountering worst-case scenarios.
4. Use Cases and Applications
• Deterministic Algorithm: Preferred in situations
requiring precision, such as in cryptographic
applications, database indexing, and formal verification
systems where results must be reproducible.
• Probabilistic Algorithm: Common in scenarios that
benefit from randomness, such as machine learning,
game theory, and simulations, where exploration of
many possible solutions can yield near-optimal results
more quickly than exhaustive approaches.
5. Examples and Typical Algorithms

• Deterministic Algorithms: Binary Search, Dijkstra's Algorithm, Merge Sort.
• Probabilistic Algorithms: Monte Carlo (approximates
results with a high probability of accuracy), Las Vegas
Algorithm (always finds the correct result but the time
to do so varies).
Summary Table

Aspect               | Deterministic Algorithm            | Probabilistic Algorithm
---------------------|------------------------------------|------------------------------------------------
Path Taken           | Fixed path for each input          | Randomized path influenced by probability
Result Accuracy      | Exact, same result every run       | Approximate or probabilistic, may vary per run
Execution Time       | Predictable                        | Varied, with possible speed advantages
Typical Applications | Cryptography, Database Operations  | Simulations, Machine Learning, Game Theory
Examples             | Merge Sort, Binary Search          | Monte Carlo, Randomized Quick Sort
Cont..
• In conclusion, the choice between a deterministic and
probabilistic algorithm depends on the need for
accuracy, speed, and application requirements.
• Deterministic algorithms are best for precise, repeatable
tasks, while probabilistic algorithms are valuable for
efficient approximate solutions.
6.2 Introduction to Parallel
Algorithms
• Definition
• A parallel algorithm is one that divides a problem into
sub-problems that can be solved concurrently by
multiple processors. Parallel algorithms aim to speed up
computation by executing different parts of the
algorithm simultaneously on different processors.
• Parallel computing is crucial in handling large datasets
or problems requiring significant computational
resources, such as machine learning models, scientific
simulations, and real-time systems.
Key Characteristics of Parallel Algorithms
• Concurrency: The ability to execute multiple
computations at the same time.
• Decomposition: The problem is divided into smaller
sub-tasks that can be solved simultaneously.
• Communication and Synchronization: These sub-
tasks may need to communicate or synchronize to
complete the overall task.
Types of Parallelism
• Data Parallelism: The same operation is performed on different pieces of distributed data, for example adding two large arrays element-wise in parallel (a sketch follows after this list).
• Example: Matrix multiplication can be performed in parallel by
dividing the rows and columns and performing computations
simultaneously.
• Task Parallelism: Different tasks or functions are
performed simultaneously, often operating on the same or
different data. For example, one thread processes input
while another thread performs computations.
• Example: A web server handling multiple client requests
concurrently by assigning each request to a separate thread.
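A minimal sketch of data parallelism using Python's standard multiprocessing module: the same operation (squaring) is applied to different pieces of the data by a pool of worker processes (illustrative, not from the original deck):

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = list(range(10))
    # Four workers each process a share of the input data.
    with Pool(processes=4) as pool:
        print(pool.map(square, data))   # [0, 1, 4, 9, ..., 81]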
Models of Parallel Computation
1. PRAM (Parallel Random Access Machine): This is
a theoretical model of parallel computation where an
arbitrary number of processors can access a shared
memory simultaneously.
• Types:
• EREW PRAM (Exclusive Read Exclusive Write): No two processors can
access the same memory location simultaneously.
• CREW PRAM (Concurrent Read Exclusive Write): Multiple processors
can read from the same memory location at the same time but cannot
write simultaneously.
• CRCW PRAM (Concurrent Read Concurrent Write): Multiple processors
can read and write to the same memory location at the same time.
Cont..
2. Distributed Memory Model: Each processor has its
local memory, and processors communicate by sending
messages.
o MPI (Message Passing Interface): A standard for distributed computing, where processes run on separate machines and communicate by sending messages (a sketch follows below).
3. Shared Memory Model: All processors share a single
memory space and can access any memory location.
Synchronization is essential to avoid conflicts.
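As an illustration of the distributed memory model, a minimal message-passing sketch, assuming the third-party mpi4py package is installed (run with something like mpiexec -n 4 python script.py):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Process 0 sends a message to every other process.
    for dest in range(1, comm.Get_size()):
        comm.send("hello from rank 0", dest=dest)
else:
    # Each worker has its own local memory and receives via a message.
    msg = comm.recv(source=0)
    print(f"rank {rank} received: {msg}")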
Parallel Algorithm Design
Techniques
1. Divide and Conquer: The problem is divided into independent
sub-problems that can be solved in parallel.
• Example: Merge Sort can be parallelized by dividing the array into two halves and sorting both halves simultaneously (a sketch follows after this list).
2. Dynamic Programming: In dynamic programming problems,
many sub-problems are independent and can be computed in
parallel.
• Example: Computing Fibonacci numbers for a large input using
memoization and parallel processing.
3. Greedy Algorithms: Greedy problems like finding the
Minimum Spanning Tree (MST) using algorithms like Prim's or
Kruskal’s can benefit from parallel processing by dividing the
edge selection process among multiple processors.
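A minimal sketch of the divide-and-conquer technique from point 1: one level of parallel merge sort, where the two halves are sorted in separate worker processes and then merged (illustrative; a fuller version would recurse in parallel):

from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    # Standard merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def parallel_merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # Sort the two halves concurrently in separate processes.
    with ProcessPoolExecutor(max_workers=2) as pool:
        left, right = pool.map(sorted, [a[:mid], a[mid:]])
    return merge(left, right)

if __name__ == "__main__":
    print(parallel_merge_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]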
Applications of Parallel
Algorithms
1. Scientific Computing: Many problems in physics, chemistry, and
biology require the simulation of natural processes, which can be
computationally intensive. Parallel algorithms are used to speed up
these simulations.
• Example: Weather forecasting models use parallel algorithms to compute
atmospheric conditions over different geographical regions simultaneously.
2. Machine Learning: Training machine learning models, especially
deep learning models, is computationally expensive. Parallel
algorithms, particularly those running on GPUs, significantly reduce the
training time.
• Example: Gradient descent, an optimization algorithm used in machine
learning, can be parallelized to compute the gradients of large datasets.
3. Database Query Processing: Parallel algorithms are used in
distributed databases to handle multiple queries at the same time,
thereby improving efficiency and reducing response time.
Challenges in Parallel Algorithm
Design
1. Load Balancing: Ensuring that each processor gets
an approximately equal amount of work to do.
2. Communication Overhead: The time taken for
processors to communicate can sometimes outweigh
the benefits of parallelism, especially in distributed
systems.
3. Synchronization: Proper synchronization is needed
to avoid race conditions, where two or more processes
attempt to modify the same memory location
simultaneously.
Conclusion
• Both probabilistic and parallel algorithms are powerful
tools in modern computing:
o Probabilistic algorithms use randomness to improve
efficiency and are especially useful for complex problems
where exact solutions are infeasible.
o Parallel algorithms speed up computation by dividing the
work among multiple processors, making them essential for
solving large-scale problems and handling massive datasets.
• By understanding the fundamentals of these algorithmic
approaches, developers can design more efficient and
scalable solutions for a variety of real-world problems.
