Algo_Ch_5 and 6
Analysis of Algorithms
Chapter 5: Backtracking
Elsay.M
Chapter 5: Backtracking (6 hr)
• Problem Description
• The Traveling Salesman Problem (TSP) involves finding the shortest possible route that visits each city exactly once and returns to the starting city. It is similar to the Hamiltonian cycle problem, but additionally requires minimizing the total distance traveled.
• Backtracking Approach
o Start from a city and move to the next city that has not been visited yet.
o Keep track of the total distance traveled so far.
o If all cities have been visited and the current city is adjacent to the starting city, record the total distance of the completed tour.
o Otherwise, backtrack and try a different route, keeping the minimum total found.
Example: Traveling Salesman Problem (TSP)
• Problem Description: The Traveling Salesman Problem (TSP) involves finding the shortest possible route that visits a set of cities exactly once and returns to the starting city.
• Example and Solution: Consider 4 cities (A, B, C, D) with the following distances between them:
o A → B: 10 km
o A → C: 15 km
o A → D: 20 km
o B → C: 35 km
o B → D: 25 km
o C → D: 30 km
• Step-by-Step Solution:
o Step 1: Start at city A.
o Step 2: Visit city B (10 km).
o Step 3: From B, visit city D (25 km).
o Step 4: From D, visit city C (30 km).
o Step 5: Return to city A (15 km).
• Total Distance: 10 + 25 + 30 + 15 = 80 km.
• Solution: The shortest route is A → B → D → C → A, with a total distance of 80 km.
• How Backtracking Works: Systematically try visiting each unvisited city in turn, extending the route one city at a time.
• Whenever a partial route cannot be completed, or a complete tour has been costed, backtrack and explore the next alternative, keeping the minimum total distance found.
Algorithm Pseudocode (Python)

def tsp(graph, position, n, count, cost, visited):
    # All cities visited and an edge back to the start exists:
    # close the tour and return its total cost.
    if count == n and graph[position][0]:
        return cost + graph[position][0]
    min_cost = float('inf')
    for i in range(n):
        if not visited[i] and graph[position][i]:
            visited[i] = True  # choose city i
            min_cost = min(min_cost,
                           tsp(graph, i, n, count + 1,
                               cost + graph[position][i], visited))
            visited[i] = False  # backtrack: undo the choice
    return min_cost
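For concreteness, here is a small driver for the function above, using the distance matrix from the worked example (a minimal sketch; indices 0 to 3 stand for cities A to D):

graph = [
    [0, 10, 15, 20],   # A
    [10, 0, 35, 25],   # B
    [15, 35, 0, 30],   # C
    [20, 25, 30, 0],   # D
]
n = 4
visited = [True] + [False] * (n - 1)  # start at city A (index 0)
print(tsp(graph, 0, n, 1, 0, visited))  # prints 80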
Conclusion
• Backtracking is an effective way to solve problems where multiple solutions are possible. For each of these problems (N-Queens, Graph Coloring, Hamiltonian Cycle, Knapsack, and TSP), backtracking explores all possibilities, ensuring that the solution meets the problem's constraints.
• If a partial solution fails, backtracking allows us to discard that path and try other options, leading to an optimal or valid solution.
6.1 Introduction to Probabilistic Algorithms
Deterministic vs. Probabilistic Algorithms:
• Path Taken: deterministic algorithms follow a fixed path for each input; probabilistic algorithms follow a randomized path influenced by probability.
• Result Accuracy: deterministic results are exact and identical on every run; probabilistic results are approximate or probabilistic and may vary per run.
• Execution Time: deterministic algorithms are predictable; probabilistic algorithms vary, with possible speed advantages.
• Typical Applications: deterministic: cryptography, database operations; probabilistic: simulations, machine learning, game theory.
• Examples: deterministic: Merge Sort, Binary Search; probabilistic: Monte Carlo methods, Randomized Quick Sort.
Cont..
• In conclusion, the choice between a deterministic and
probabilistic algorithm depends on the need for
accuracy, speed, and application requirements.
• Deterministic algorithms are best for precise, repeatable
tasks, while probabilistic algorithms are valuable for
efficient approximate solutions.
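To make this concrete, here is a minimal sketch of a Monte Carlo method (estimating pi by random sampling); the function name and sample count are illustrative assumptions, not from the slides:

import random

def estimate_pi(samples=1_000_000):
    # Count random points in the unit square that fall
    # inside the quarter circle of radius 1.
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi())  # approximately 3.1416; the result varies per run

Note how the answer is approximate and differs slightly on each run, exactly the trade-off described above.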
6.2 Introduction to Parallel Algorithms
• Definition
• A parallel algorithm is one that divides a problem into
sub-problems that can be solved concurrently by
multiple processors. Parallel algorithms aim to speed up
computation by executing different parts of the
algorithm simultaneously on different processors.
• Parallel computing is crucial in handling large datasets
or problems requiring significant computational
resources, such as machine learning models, scientific
simulations, and real-time systems.
Key Characteristics of Parallel Algorithms
• Concurrency: The ability to execute multiple
computations at the same time.
• Decomposition: The problem is divided into smaller
sub-tasks that can be solved simultaneously.
• Communication and Synchronization: These sub-
tasks may need to communicate or synchronize to
complete the overall task.
Types of Parallelism
• Data Parallelism: The same operation is performed on
different pieces of distributed data. For example, adding
two large arrays element-wise in parallel.
• Example: Matrix multiplication can be performed in parallel by dividing the rows and columns and performing the computations simultaneously (see the sketch after this list).
• Task Parallelism: Different tasks or functions are
performed simultaneously, often operating on the same or
different data. For example, one thread processes input
while another thread performs computations.
• Example: A web server handling multiple client requests
concurrently by assigning each request to a separate thread.
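A minimal data-parallelism sketch in Python, using the standard-library multiprocessing module; the element-wise operation and the chunk sizes are illustrative:

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = list(range(10))
    with Pool(processes=4) as pool:
        # The same operation is applied to different pieces of
        # the data across worker processes (data parallelism).
        results = pool.map(square, data)
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]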
Models of Parallel Computation
1. PRAM (Parallel Random Access Machine): This is
a theoretical model of parallel computation where an
arbitrary number of processors can access a shared
memory simultaneously.
• Types:
• EREW PRAM (Exclusive Read Exclusive Write): No two processors can
access the same memory location simultaneously.
• CREW PRAM (Concurrent Read Exclusive Write): Multiple processors
can read from the same memory location at the same time but cannot
write simultaneously.
• CRCW PRAM (Concurrent Read Concurrent Write): Multiple processors
can read and write to the same memory location at the same time.
Cont..
2. Distributed Memory Model: Each processor has its
local memory, and processors communicate by sending
messages.
o MPI (Message Passing Interface): A standard for
distributed computing, where processes run on separate
machines and communicate by sending messages.
3. Shared Memory Model: All processors share a single
memory space and can access any memory location.
Synchronization is essential to avoid conflicts.
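To illustrate the distributed memory model, here is a minimal message-passing sketch, assuming the third-party mpi4py library is installed (run with, e.g., mpiexec -n 2 python script.py):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Process 0 sends a message from its local memory.
    comm.send({"payload": 42}, dest=1, tag=0)
elif rank == 1:
    # Process 1 receives it; no memory is shared between them.
    data = comm.recv(source=0, tag=0)
    print("received:", data)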
Parallel Algorithm Design Techniques
1. Divide and Conquer: The problem is divided into independent
sub-problems that can be solved in parallel.
• Example: Merge Sort can be parallelized by dividing the array into two halves and sorting both halves simultaneously (see the sketch after this list).
2. Dynamic Programming: In dynamic programming problems,
many sub-problems are independent and can be computed in
parallel.
• Example: Computing Fibonacci numbers for a large input using
memoization and parallel processing.
3. Greedy Algorithms: Greedy problems like finding the
Minimum Spanning Tree (MST) using algorithms like Prim's or
Kruskal’s can benefit from parallel processing by dividing the
edge selection process among multiple processors.
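Here is a minimal sketch of the parallel divide-and-conquer idea applied to Merge Sort, using Python's standard-library concurrent.futures module; the two halves are sorted in separate processes and then merged:

from concurrent.futures import ProcessPoolExecutor
from heapq import merge

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return list(merge(merge_sort(arr[:mid]), merge_sort(arr[mid:])))

def parallel_merge_sort(arr):
    mid = len(arr) // 2
    with ProcessPoolExecutor(max_workers=2) as ex:
        # Sort both halves simultaneously in separate processes.
        left, right = ex.map(merge_sort, [arr[:mid], arr[mid:]])
    return list(merge(left, right))

if __name__ == "__main__":
    print(parallel_merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]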
Applications of Parallel Algorithms
1. Scientific Computing: Many problems in physics, chemistry, and
biology require the simulation of natural processes, which can be
computationally intensive. Parallel algorithms are used to speed up
these simulations.
• Example: Weather forecasting models use parallel algorithms to compute
atmospheric conditions over different geographical regions simultaneously.
2. Machine Learning: Training machine learning models, especially
deep learning models, is computationally expensive. Parallel
algorithms, particularly those running on GPUs, significantly reduce the
training time.
• Example: Gradient descent, an optimization algorithm used in machine learning, can be parallelized by computing gradients over chunks of a large dataset simultaneously (see the sketch after this list).
3. Database Query Processing: Parallel algorithms are used in
distributed databases to handle multiple queries at the same time,
thereby improving efficiency and reducing response time.
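A minimal sketch of data-parallel gradient computation for one gradient descent step, assuming a simple squared-error loss for a one-parameter model; all names and numbers here are illustrative:

from concurrent.futures import ProcessPoolExecutor

def chunk_gradient(args):
    # Gradient of the squared error (w*x - y)^2 with respect
    # to w, summed over one chunk of the dataset.
    w, chunk = args
    return sum(2 * (w * x - y) * x for x, y in chunk)

if __name__ == "__main__":
    w = 0.0
    data = [(x, 3.0 * x) for x in range(1, 9)]  # targets for true w = 3
    chunks = [data[:4], data[4:]]
    with ProcessPoolExecutor() as ex:
        partials = ex.map(chunk_gradient, [(w, c) for c in chunks])
    grad = sum(partials) / len(data)  # combine per-chunk gradients
    w -= 0.01 * grad                  # one gradient descent step
    print(w)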
Challenges in Parallel Algorithm Design
1. Load Balancing: Ensuring that each processor gets
an approximately equal amount of work to do.
2. Communication Overhead: The time taken for
processors to communicate can sometimes outweigh
the benefits of parallelism, especially in distributed
systems.
3. Synchronization: Proper synchronization is needed to avoid race conditions, where two or more processes attempt to modify the same memory location simultaneously (see the sketch below).
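As a small illustration of synchronization, here is a sketch using Python's standard threading module, in which a lock prevents a race condition when two threads update the same counter:

import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # only one thread updates the counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000; without the lock, updates could be lost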
Conclusion
• Both probabilistic and parallel algorithms are powerful
tools in modern computing:
o Probabilistic algorithms use randomness to improve
efficiency and are especially useful for complex problems
where exact solutions are infeasible.
o Parallel algorithms speed up computation by dividing the
work among multiple processors, making them essential for
solving large-scale problems and handling massive datasets.
• By understanding the fundamentals of these algorithmic
approaches, developers can design more efficient and
scalable solutions for a variety of real-world problems.