Daa Assignment 3_compressed

The document outlines the fundamentals of algorithms, including their properties, time and space complexity, and asymptotic notations (Big O, Omega, Theta). It provides a structured approach to designing algorithms, emphasizing the importance of understanding the problem, defining inputs and outputs, and verifying the algorithm with test cases. Additionally, it discusses the significance of analyzing best, worst, and average case scenarios in algorithm performance.


1.
a) An algorithm is a finite set of instructions that, if followed, accomplishes a particular task.
Properties of an Algorithm
• All algorithms must satisfy the following criteria:
1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
4. Finiteness: If we trace out the instructions of an algorithm, then for all cases the algorithm terminates after a finite number of steps.
5. Effectiveness: Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil and paper.

2.
3.a)
O – NOTATION (Upper Bound)
• The O-notation (pronounced "Big Oh") is used to measure the performance of an algorithm, which depends on the volume of input data.
• The O-notation is used to define the order of growth of an algorithm: as the input size increases, the performance varies.
• The function f(n) = O(g(n)) (read as "f of n is big oh of g of n") iff there exist two positive constants c and n0 such that f(n) <= c·g(n) for all n >= n0.
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.
OMEGA NOTATION (Ω)
• The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist two positive constants c and n0 such that f(n) >= c·g(n) for all n >= n0.
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }.
• Omega notation represents the lower bound of the running time of an algorithm.
• Thus, it provides the best-case complexity of an algorithm.
b)
4.a)
Time Complexity
• The time T (P) taken by a program P is the sum of the compile time and the run (execution) time.
• The compile time does not depend on the instance characteristics. (also we may assume that a
compiled program will be run several times without recompilation)
• This run time is denoted by tp (instance characteristics).
• If we know the characteristics of the compiler to be used, we could proceed to determine the
number of additions, subtractions, multiplications, divisions, compares, loads, stores, and so on, that
are performed when the code for P is used on an instance with characteristic n.
Space Complexity
The space needed by each of these algorithms is seen to be the sum of the following components:
* A fixed part that is independent of the characteristics (e.g., number, size) of the inputs and outputs. This part typically includes the instruction space (space for the code), space for simple variables and fixed-size component variables (aggregates), space for constants, and so on.
* A variable part that consists of the space needed by component variables whose size depends on the particular problem instance being solved, the space needed by referenced variables (to the extent that this depends on instance characteristics), and the recursion stack space (which depends on the instance characteristics).
• The space requirement S(P) of any algorithm P may therefore be written S(P) = c + Sp(instance characteristics), where c is a constant.
• When analyzing the space complexity of an algorithm, we concentrate solely on estimating Sp(instance characteristics).
• For any given problem, we need first to determine which instance characteristics to use to
measure the space requirements.
Algorithm Sum(a, n)
// Returns the sum of the n elements of the array a.
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
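For comparison, here is the same algorithm in Python (an illustrative sketch, not from the original notes). The loop body executes n times, so the run time grows as Θ(n), while the extra space used beyond the input array is a constant:

```
def array_sum(a):
    # One addition per element: the loop runs n times, so time is Θ(n).
    s = 0.0
    for x in a:
        s = s + x
    return s  # extra space beyond the input is a constant (s and the loop variable)

print(array_sum([1.0, 2.0, 3.0]))  # 6.0
```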

b)
O – NOTATION (Upper Bound)
• The O-notation (pronounced "Big Oh") is used to measure the performance of an algorithm, which depends on the volume of input data.
• The O-notation is used to define the order of growth of an algorithm: as the input size increases, the performance varies.
• The function f(n) = O(g(n)) (read as "f of n is big oh of g of n") iff there exist two positive constants c and n0 such that f(n) <= c·g(n) for all n >= n0.
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.

OMEGA NOTATION (Ω)


• The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist two positive constants c and n0 such that f(n) >= c·g(n) for all n >= n0.
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }.
• Omega notation represents the lower bound of the running time of an algorithm.
• Thus, it provides the best-case complexity of an algorithm.

THETA NOTATION (Θ)


• The function f(n) = Θ(g(n)) (read as "f of n is big theta of g of n") iff there exist positive constants c1, c2 and n0 such that c1·g(n) <= f(n) <= c2·g(n) for all n >= n0.
• Suppose we have a program with step count 3n+2; we can write it as f(n) = 3n+2.
• We say that 3n+2 = Θ(n), that is, of the order of n, because c1·n <= 3n+2 <= c2·n after a particular value of n (for example, c1 = 3, c2 = 4, n0 = 2).
• But we cannot say that 3n+2 = Θ(1), Θ(n²) or Θ(n³).
• Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.

Little ‘Oh’ Notation (o)


• The asymptotic upper bound provided by O-notation may or may not be asymptotically tight.
• The bound 2n² = O(n²) is asymptotically tight, but the bound 2n² = O(n³) is not.
• We use o-notation to denote an upper bound that is not asymptotically tight.
• We formally define o(g(n)) ("little-oh of g of n") as the set o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
• For example, 2n = o(n²), but 2n² ≠ o(n²).

Little Omega (ω)


• ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0 }
• We use ω-notation to denote a lower bound that is not asymptotically tight.
• The main difference with Ω is that the Ω bound holds for some constant c, whereas the ω bound holds for all constants c > 0.
• One way to define it is: f(n) ∈ ω(g(n)) iff lim(n→∞) f(n)/g(n) = ∞.

5.
Designing an algorithm typically involves several structured steps to ensure that the solution is
effective, efficient, and correct. Below is a step-by-step guide to designing an algorithm:

1. Understand the Problem


- Clarify the problem statement: Read the problem carefully, understand what is being asked, and
identify the input and output.
- Identify constraints: Understand any constraints or limitations, such as time complexity, space
complexity, and other domain-specific conditions.
- Identify edge cases: Consider possible edge cases or special conditions that need to be handled.
2. Define the Inputs and Outputs
- Inputs: Clearly define the format and type of inputs the algorithm will receive.
- Outputs: Define what the algorithm is supposed to return or output after processing the inputs.

3. Break Down the Problem


- Divide the problem into smaller parts: Decompose the problem into manageable subproblems or
components, which can be solved independently.
- Look for patterns: Identify patterns or relationships in the problem to simplify the design
process.

4. Choose the Right Approach/Strategy


- Select an appropriate algorithmic paradigm: Depending on the nature of the problem, choose an
approach such as:
- Brute-force
- Divide and conquer
- Greedy
- Dynamic programming
- Backtracking
- Graph algorithms (e.g., BFS, DFS)
- Decide on data structures: Select the appropriate data structures (arrays, linked lists, stacks,
queues, trees, graphs, hash tables, etc.) that best suit the problem.

5. Design the Algorithm


- Write pseudocode: Draft the algorithm in pseudocode or a flowchart to outline the steps in
human-readable form.
- Consider algorithmic efficiency: Analyze the time complexity (Big O notation) and space
complexity of the algorithm to ensure it meets the problem’s constraints.
- Handle edge cases: Ensure the algorithm accounts for edge cases and exceptional inputs.

6. Verify the Algorithm with Test Cases


- Manually test with sample inputs: Run the algorithm with various test cases, including normal
cases, edge cases, and invalid inputs, to ensure it works correctly.
- Perform dry runs: Trace the algorithm manually for a set of inputs to verify that it behaves as
expected.

7. Optimize (if necessary)


- Evaluate performance: Analyze if the algorithm can be optimized in terms of time and space
complexity.
- Refine the design: If there are performance issues, revisit the algorithm to optimize it (e.g.,
improve sorting or searching efficiency, reduce space usage, etc.).

8. Write the Code


- Translate the pseudocode to code: Implement the algorithm in your chosen programming
language.
- Follow best coding practices: Write clean, readable code with proper indentation, comments, and
error handling.

9. Test and Debug


- Test the implementation: Run your code with a variety of inputs, including edge cases, and
verify that it produces the correct results.
- Debug issues: If there are any issues, use debugging tools or techniques (e.g., print statements,
debuggers) to trace and fix the bugs.
10. Refactor (if necessary)
- Refactor for clarity and efficiency: After the algorithm works correctly, refactor the code to
improve readability, efficiency, or maintainability.
- Optimize the implementation: Make improvements where needed, such as reducing redundant
computations or improving memory usage.

11. Document the Algorithm


- Write documentation: Add comments and documentation to explain the purpose of the
algorithm, its inputs, outputs, and any specific steps or optimizations.
- Provide examples: Include example inputs and outputs in the documentation to help users
understand how the algorithm works.

12. Evaluate and Review


- Peer review: If possible, have others review the algorithm and its code to ensure there are no
errors or potential improvements.
- Compare with known algorithms: Check if there is an existing solution or a better algorithm for
the problem.
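As a small end-to-end illustration of steps 5, 6, 8 and 9 above (a hypothetical example, not part of the original document), here is a max-finding algorithm designed, implemented and verified with test cases:

```
def find_max(nums):
    # Step 5 (design): scan once, keeping the best value seen so far; O(n) time, O(1) space.
    if not nums:
        raise ValueError("input must be non-empty")  # edge case handled explicitly
    best = nums[0]
    for x in nums[1:]:
        if x > best:
            best = x
    return best

# Step 6/9 (verify and test): a normal case and an edge case
assert find_max([3, 1, 4, 1, 5]) == 5
assert find_max([-7]) == -7
```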

6.
7.a. Time Complexity
• Best case: If the algorithm finds the element on the very first comparison, it is referred to as the best case.
• Worst case: If the algorithm finds the element at the end of the search, or if the search for the element fails, the algorithm is in the worst case; it requires the maximum number of steps that can be executed for the given parameters.
• Average case: The analysis of average-case behavior is more complex than best-case and worst-case analysis, and is taken over the probability distribution of the input data. As the volume of input data increases, the average-case behavior approaches the worst-case behavior. In the average case, the searched element is located somewhere between the first and last positions.
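As a concrete illustration (a sketch, not part of the original notes), linear search exhibits exactly these three cases:

```
def linear_search(arr, target):
    # Best case: target is arr[0] (1 comparison).
    # Worst case: target is last or absent (n comparisons).
    # Average case: target equally likely at any position (about n/2 comparisons).
    for i, x in enumerate(arr):
        if x == target:
            return i
    return -1

arr = [38, 27, 43, 3, 9, 82, 10]
print(linear_search(arr, 38))   # best case: found at index 0
print(linear_search(arr, 99))   # worst case: -1 after scanning all n elements
```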

8. a. Asymptotic notation (O, Ω, Θ )

• Asymptotic notation introduces terminology that enables us to make meaningful statements about the time and space complexities of an algorithm. The functions f and g are nonnegative functions.

1. O – NOTATION (Upper Bound)

• The O-notation (pronounced "Big Oh") is used to measure the performance of an algorithm, which depends on the volume of input data.
• The O-notation is used to define the order of growth of an algorithm: as the input size increases, the performance varies.
• The function f(n) = O(g(n)) (read as "f of n is big oh of g of n") iff there exist two positive constants c and n0 such that f(n) <= c·g(n) for all n >= n0.

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

• Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.
• Suppose we have a program with step count 3n+2.
o We can write it as f(n) = 3n+2. We say that 3n+2 = O(n), that is, of the order of n, because f(n) <= 4n, i.e. 3n+2 <= 4n, for all n >= 2.
• Another example: suppose f(n) = 10n²+4n+2.
• We say that f(n) = O(n²) since 10n²+4n+2 <= 11n² for n >= 2.
• But we cannot say that f(n) = O(n), since 10n²+4n+2 is never less than or equal to c·n for all n beyond some n0; that is, 10n²+4n+2 ≠ O(n).
• We can, however, say that f(n) = O(n³), since f(n) can be made less than or equal to c·n³; for example, 10n²+4n+2 <= 10n³ for n >= 2.

2.OMEGA NOTATION (Ω)

• The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist two positive constants c and n0 such that f(n) >= c·g(n) for all n >= n0.

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

• Omega notation represents the lower bound of the running time of an algorithm.
• Thus, it provides the best-case complexity of an algorithm.
• Suppose we have a program with step count 3n+2; we can write it as f(n) = 3n+2.
• We say that 3n+2 = Ω(n), that is, of the order of n, because f(n) >= 3n, i.e. 3n+2 >= 3n, for all n >= 1.
• Normally, if a program has a step count equal to 5, we say that it has order Ω(constant), or Ω(1).
• If f(n) = 3n+5, then f(n) = Ω(n) or Ω(1), but we cannot express its time complexity as Ω(n²).
• If f(n) = 5n²+8n+2, then f(n) = Ω(n²), Ω(n) or Ω(1).

3. THETA NOTATION (Θ)

• The function f(n) = Θ(g(n)) (read as "f of n is big theta of g of n") iff there exist positive constants c1, c2 and n0 such that c1·g(n) <= f(n) <= c2·g(n) for all n >= n0.
• Suppose we have a program with step count 3n+2; we can write it as f(n) = 3n+2.
• We say that 3n+2 = Θ(n), that is, of the order of n, because c1·n <= 3n+2 <= c2·n after a particular value of n.
• But we cannot say that 3n+2 = Θ(1), Θ(n²) or Θ(n³).
• Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
• Normally, if a program has a step count equal to 5, we say that it has order Θ(constant), or Θ(1).
• If f(n) = 3n+5, then f(n) = Θ(n).
• If f(n) = 5n²+8n+2, then f(n) = Θ(n²).

4.Little ‘Oh’ Notation (o)

• The asymptotic upper bound provided by O-notation may or may not be asymptotically tight.
• The bound 2n² = O(n²) is asymptotically tight, but the bound 2n² = O(n³) is not.
• We use o-notation to denote an upper bound that is not asymptotically tight.
• We formally define o(g(n)) ("little-oh of g of n") as the set
o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }
• For example, 2n = o(n²), but 2n² ≠ o(n²).
• The definitions of O-notation and o-notation are similar.
• The main difference is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ c·g(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < c·g(n) holds for all constants c > 0.
• Intuitively, in o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity.

5. Little Omega (ω)

Definition
• ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0 }
• We use ω-notation to denote a lower bound that is not asymptotically tight.
• The main difference with Ω is that the Ω bound holds for some constant c, whereas the ω bound holds for all constants c > 0.
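The constants used in these examples can be checked numerically; the following script (an illustrative sketch, not part of the original notes) verifies the specific bounds claimed above over a test range:

```
# Check 3n+2 <= 4n (so 3n+2 = O(n)) and 10n^2+4n+2 <= 11n^2 (so O(n^2))
assert all(3*n + 2 <= 4*n for n in range(2, 10000))
assert all(10*n*n + 4*n + 2 <= 11*n*n for n in range(2, 10000))
# Check the lower bound 3n+2 >= 3n (so 3n+2 = Omega(n))
assert all(3*n + 2 >= 3*n for n in range(1, 10000))
print("All claimed bounds hold on the tested range")
```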

11. Merge Sort is a divide-and-conquer algorithm used to sort an array or a list of elements. It
works by recursively dividing the list into two halves, sorting each half, and then merging the sorted
halves back together to produce a fully sorted list.

Time Complexity:
• Best Case: O(n log n)
• Worst Case: O(n log n)
• Average Case: O(n log n)

Eg: Input Array: [38, 27, 43, 3, 9, 82, 10]

Step 1: Divide the Array


• First, split the array into two halves:
• Left half: [38, 27, 43, 3]
• Right half: [9, 82, 10]

Step 2: Recursively Split Until Base Case


• Split [38, 27, 43, 3] into two halves:

• Left: [38, 27]


• Right: [43, 3]
• Split [38, 27] into:

• Left: [38]
• Right: [27]
• Now both are single elements, so they are "sorted."
• Similarly, split [43, 3]:

• Left: [43]
• Right: [3]
• These are already sorted as well.
• Now split [9, 82, 10] into:

• Left: [9]
• Right: [82, 10]
• Split [82, 10] into:

• Left: [82]
• Right: [10]
• Both are sorted.

Step 3: Merge the Halves


• Now, we merge the individual elements back together, sorting them in the process:
1. Merge [38] and [27]:

• Sorted merge: [27, 38]


2. Merge [43] and [3]:

• Sorted merge: [3, 43]


3. Merge [27, 38] and [3, 43]:

• Sorted merge: [3, 27, 38, 43]


4. Merge [82] and [10]:

• Sorted merge: [10, 82]


5. Merge [9] and [10, 82]:

• Sorted merge: [9, 10, 82]


6. Finally, merge [3, 27, 38, 43] and [9, 10, 82]:

• Sorted merge: [3, 9, 10, 27, 38, 43, 82]

Final Sorted Array:


• [3, 9, 10, 27, 38, 43, 82]
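A compact Python implementation of the procedure traced above (a sketch consistent with the example, not code from the original document):

```
def merge_sort(arr):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # recursively sort the left half
    right = merge_sort(arr[mid:])   # recursively sort the right half
    return merge(left, right)       # merge the two sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # append any leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```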

12. Quick Sort is a divide-and-conquer algorithm that works by selecting a "pivot" element from the array and partitioning the other elements into two subarrays. The first subarray contains elements less than the pivot, and the second subarray contains elements greater than the pivot. The process is then recursively repeated on the subarrays.

Time Complexity:
• Best & Average Case: O(n log n)
• Worst Case: O(n²) (if the pivot choice is poor, like always picking the smallest or largest
element)
• Space Complexity: O(log n) for recursive stack (average case)

Eg: Input: [10, 7, 8, 9, 1, 5]

Step 1: Choose Pivot


• Let's select the last element as the pivot, which is 5.

Step 2: Partition the Array


• Reorder the array such that elements less than 5 are on the left and elements greater than 5
are on the right.
• After partitioning:
• Left of 5: [1]
• Right of 5: [10, 7, 8, 9]
• Pivot 5 is now in its correct position.

Array after partitioning: [1, 5, 8, 7, 9, 10] (the exact order of the elements within each side depends on the partition scheme)

Step 3: Recursively Sort Subarrays


• Left subarray: [1] (already sorted, only one element)
• Right subarray: [10, 7, 8, 9]

Sorting the Right Subarray [10, 7, 8, 9]:


• Pivot = 9.
• Partition:
• Left of 9: [7, 8]
• Right of 9: [10]
• Pivot 9 is now in its correct position.

Array after partitioning: [1, 5, 8, 7, 9, 10]

Sorting the Subarray [7, 8]:


• Pivot = 8.
• Partition:
• Left of 8: [7]
• Right of 8: (empty)
• Pivot 8 is now in its correct position.

Array after partitioning: [1, 5, 7, 8, 9, 10]

Now, all subarrays are sorted. The final sorted array is:
Sorted Output: [1, 5, 7, 8, 9, 10]
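A runnable Python version of the same process, using the last element as the pivot (a minimal sketch with the Lomuto partition scheme; exact intermediate arrays may differ from the walkthrough above):

```
def quick_sort(arr, low=0, high=None):
    # In-place quick sort with the last element as pivot.
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)   # sort elements left of the pivot
        quick_sort(arr, p + 1, high)  # sort elements right of the pivot

def partition(arr, low, high):
    pivot = arr[high]                 # choose the last element as pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:           # move smaller elements to the left side
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # place pivot correctly
    return i + 1

data = [10, 7, 8, 9, 1, 5]
quick_sort(data)
print(data)  # [1, 5, 7, 8, 9, 10]
```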
1) A minimum spanning tree is a spanning tree, but has weights or lengths
associated with the edges, and the total weight of the tree (the sum of the
weights of its edges) is at a minimum.

A Minimum cost spanning tree is a subset T of edges of G such that all the vertices remain
connected when only edges in T are used, and sum of the lengths of the edges in T is as
small as possible. Hence it is then a spanning tree with weight less than or equal to the
weight of every other spanning tree.

* To compute the spanning tree with minimum cost, the following considerations are taken:
* Let G = (V, E) be an undirected connected graph with V vertices and E edges.
* A subgraph t = (V, E') of G is a spanning tree of G iff 't' is a tree.

4)
a) The greedy knapsack algorithm makes a locally optimal choice at each stage in the hope that this will eventually lead to the best overall decision.

Given n inputs and a knapsack or bag, each object i is associated with a weight wi and a profit pi. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed in the knapsack, it earns profit pi·xi.

void GreedyKnapsack(float m, int n)
// Objects are assumed sorted so that p[i]/w[i] >= p[i+1]/w[i+1].
// p, w and x are global arrays; the solution vector is returned in x.
{
    int i;
    for (i = 1; i <= n; i++)
        x[i] = 0.0;                 // start with nothing in the knapsack
    float u = m;                    // u = remaining capacity
    for (i = 1; i <= n; i++)
    {
        if (w[i] > u) break;        // next object no longer fits whole
        x[i] = 1.0;                 // take the whole object
        u = u - w[i];
    }
    if (i <= n) x[i] = u / w[i];    // take a fraction of the first object that did not fit
}

b) n=4
m=15
P=[10,5,7,11]
W=[3,4,3,5]

Using the 0/1 knapsack recurrence dp[i][w] = max(dp[i−1][w], dp[i−1][w−W[i]] + P[i]):

Since the total weight of all four items is 3+4+3+5 = 15, which exactly equals the capacity m = 15, the optimal solution is to select all four items, for a total profit of 10+5+7+11 = 33.
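A short bottom-up implementation of this recurrence (a sketch for the instance above; the function name is mine) confirms the profit of 33:

```
def knapsack_01(P, W, m):
    # Bottom-up 0/1 knapsack; dp[w] = best profit achievable with capacity w.
    dp = [0] * (m + 1)
    for i in range(len(P)):
        # Iterate capacities downward so each item is used at most once.
        for w in range(m, W[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - W[i]] + P[i])
    return dp[m]

print(knapsack_01([10, 5, 7, 11], [3, 4, 3, 5], 15))  # 33
```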

5)
a)Binary search is an efficient algorithm for finding an element in a sorted array. It works
by repeatedly dividing the search interval in half. If the value of the search key is less than
the item in the middle of the interval, the search continues on the left half; otherwise, it
continues on the right half.

def binary_search(arr, low, high, target):
    # Base case: the search interval is empty, so the target is not in the array
    if low > high:
        return -1  # element not found

    mid = low + (high - low) // 2  # calculate the middle index

    # Check if the target is present at mid
    if arr[mid] == target:
        return mid  # element found
    # If the target is smaller, search in the left half
    elif arr[mid] > target:
        return binary_search(arr, low, mid - 1, target)
    # If the target is larger, search in the right half
    else:
        return binary_search(arr, mid + 1, high, target)
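For example (an illustrative call, not from the original answer):

```
arr = [3, 9, 10, 27, 38, 43, 82]
print(binary_search(arr, 0, len(arr) - 1, 27))  # 3
print(binary_search(arr, 0, len(arr) - 1, 50))  # -1 (not present)
```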

b) Control abstraction in the greedy strategy refers to a high-level framework or template that outlines the control flow of a greedy algorithm. Control abstraction provides a structure to solve optimization problems by specifying which steps the algorithm must take.

Control Abstraction Pseudocode:

function GreedyAlgorithm(Input):
Solution = {}
while (not TerminationCondition(Solution, Input)):
Candidate = SelectCandidate(Input)
if (FeasibilityCheck(Solution, Candidate)):
Solution = Solution ∪ {Candidate}
Remove Candidate from Input
return Solution
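As a concrete (hypothetical) instantiation of this abstraction, consider greedy coin change, where SelectCandidate picks the largest coin, FeasibilityCheck tests that the coin does not overshoot the remaining amount, and TerminationCondition is reaching amount zero; the coin system and names below are assumptions for illustration:

```
def greedy_coin_change(amount, coins=(25, 10, 5, 1)):
    # SelectCandidate: largest coin first; FeasibilityCheck: coin <= remaining
    # amount; TerminationCondition: remaining amount is zero.
    solution = []
    for coin in sorted(coins, reverse=True):
        while coin <= amount:
            solution.append(coin)   # add the candidate to the solution
            amount -= coin
    return solution

print(greedy_coin_change(63))  # [25, 25, 10, 1, 1, 1]
```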
6)
a) The Partition Exchange Sort is an algorithm that falls under the class of
divide-and-conquer algorithms. The main idea behind this algorithm is to partition the
array into two subarrays based on a pivot element, and then exchange elements to
ensure that the elements on the left side of the pivot are less than or equal to the pivot,
and those on the right are greater than or equal to the pivot.

The partition exchange sort algorithm can be outlined as follows:

1. Choose a pivot: Select an element from the array, commonly the last element.
2. Partition the array: Reorganize the array so that:
   - All elements less than or equal to the pivot are on its left.
   - All elements greater than the pivot are on its right.
3. Recursion: Recursively apply the same process to the left and right subarrays (the elements before and after the pivot).
4. Repeat the process until the entire array is sorted.

Step 1: Initial Array


We start with the array:
```
[24, 12, 35, 23, 45, 34, 20, 48]
```

Step 2: Choose a Pivot


We choose the **last element** as the pivot (common choice in Quick Sort). In this case,
the pivot is `48`.

Step 3: Partition the Array


The goal of the partitioning step is to rearrange the elements so that:
- All elements less than or equal to the pivot (`48`) go to the left side.
- All elements greater than the pivot go to the right.

Partitioning process
1. Start with two pointers: one at the beginning (`i = -1`) and the other scanning through
the array (`j = 0 to 6`).
2. For each element `arr[j]`:
- If `arr[j] <= pivot (48)`, increment `i` and swap `arr[i]` with `arr[j]`.
- If `arr[j] > pivot`, do nothing and move to the next element.

After partitioning, the pivot (`48`) ends up in its correct position (index 7).

Array after partitioning

[24, 12, 35, 23, 45, 34, 20, 48]

Here, all elements are already less than the pivot, so no swaps are needed except for
putting the pivot in its correct place at the end of the array.

- Pivot `48` is now at its final position (index 7).


- Left subarray: `[24, 12, 35, 23, 45, 34, 20]`
- Right subarray: empty (there are no elements greater than `48`).

Step 4: Recursively Apply Quick Sort to Subarrays


We now recursively apply the same process to the subarrays:
- Left subarray `[24, 12, 35, 23, 45, 34, 20]`
- Right subarray: Empty (no further action needed).

Left Subarray: `[24, 12, 35, 23, 45, 34, 20]`

Pivot: The pivot is the last element, `20`.

Partitioning process:
- Start with `i = -1` and `j = 0 to 5`.
- For each element `arr[j]`, if `arr[j] <= 20`, increment `i` and swap `arr[i]` and `arr[j]`.

Array after partitioning:

- `24` is greater than `20` and stays put for now; `12` is less than or equal to `20`, so it is swapped toward the front.
- The pivot `20` is then swapped into its correct position.

Partitioned array:
[12, 20, 35, 23, 45, 34, 24]

After partitioning, the pivot `20` is at position 1. Now we have two subarrays:
- Left subarray `[12]` (already sorted)
- Right subarray: `[35, 23, 45, 34, 24]`

Step 5: Recursively Sort Right Subarray


Now we recursively apply quick sort to the right subarray `[35, 23, 45, 34, 24]`.

Pivot: `24`.

Partitioning process:
- Start with `i = -1` and `j = 0 to 3`.
- If `arr[j] <= 24`, increment `i` and swap `arr[i]` with `arr[j]`.

Array after partitioning:


[23, 24, 45, 34, 35]

After partitioning, pivot `24` is at position 1. Now we have two subarrays:


- Left subarray: `[23]` (already sorted)
- Right subarray: `[45, 34, 35]`

Step 6: Recursively Sort Right Subarray `[45, 34, 35]`


Pivot: `35`.

Partitioning process
- Start with `i = -1` and `j = 0 to 1`.
- If `arr[j] <= 35`, increment `i` and swap `arr[i]` with `arr[j]`.

Array after partitioning:


[34, 35, 45]

Now, the pivot `35` is at position 1, and we have:


- Left subarray: `[34]` (already sorted)
- Right subarray: `[45]` (already sorted)

Step 7: Final Sorted Array

Now, we combine all the sorted subarrays:


- `[12]` from the first partition.
- `20` from the second partition.
- `[23]` from the third partition.
- `[24]` from the fourth partition.
- `[34, 35, 45]` from the fifth partition.

Thus, the final **sorted array** is:


[12, 20, 23, 24, 34, 35, 45, 48]
Summary of Quick Sort on the Array `[24, 12, 35, 23, 45, 34, 20, 48]`:
1. **First Partition** (pivot = `48`):
- Left subarray: `[24, 12, 35, 23, 45, 34, 20]`, Pivot `48` is at index 7.
2. **Second Partition** (pivot = `20`):
- Left subarray: `[12]`, Right subarray: `[35, 23, 45, 34, 24]`, Pivot `20` is at index 1.
3. **Third Partition** (pivot = `24`):
- Left subarray: `[23]`, Right subarray: `[45, 34, 35]`, Pivot `24` is at index 1 of its subarray.
4. **Fourth Partition** (pivot = `35`):
- Left subarray: `[34]`, Right subarray: `[45]`, Pivot `35` is at index 1 of its subarray.

Final sorted array: `[12, 20, 23, 24, 34, 35, 45, 48]`.
7 a) In general, greedy algorithms have five pillars:
1. A candidate set, from which a solution is created
2. A selection function, which chooses the best candidate to be added to the solution
3. A feasibility function, which is used to determine whether a candidate can be used to contribute to a solution
4. An objective function, which assigns a value to a solution, or a partial solution
5. A solution function, which indicates when we have discovered a complete solution
Greedy algorithms are used in applications like:

1. minimum spanning tree construction
2. shortest path routing
3. job scheduling
4. activity selection

b) Let G = (V, E) be a directed graph with edge costs cij.
• The variable cij is defined such that cij > 0 for all i and j, and cij = ∞ if <i, j> ∉ E.
• Let |V| = n and assume n > 1.
• A tour of G is a directed simple cycle that includes every vertex in V.
• The cost of a tour is the sum of the costs of the edges on the tour.
• The traveling salesperson problem is to find a tour of minimum cost.
Eg:
8 a) There are n jobs to be processed on a machine.
• Each job i has a deadline di ≥ 0 and a profit pi ≥ 0.
• The profit pi is earned iff the job is completed by its deadline.
• A job is completed if it is processed on the machine for unit time.
• Only one machine is available for processing jobs.
• Only one job is processed at a time on the machine.
b) A Minimum cost spanning tree is a subset T of edges of G such that all the vertices remain
connected when only edges in T are used, and sum of the lengths of the edges in T is as small as
possible. Hence it is then a spanning tree with weight less than or equal to the weight of every other
spanning tree.
Algorithm (Kruskal):
float Kruskal(int E[][], float cost[][], int n, int t[][2])
// E is the set of edges, cost[u][v] is the cost of edge (u,v),
// n is the number of vertices; the n-1 tree edges are returned in t.
{
    int parent[n];
    Construct a min-heap out of the edge costs;
    for (i = 1; i <= n; i++)
        parent[i] = -1;            // each vertex starts in a different set
    i = 0;
    mincost = 0;
    while ((i < n-1) && (heap not empty))
    {
        Delete a minimum-cost edge (u,v) from the heap and reheapify;
        j = Find(u); k = Find(v);  // find the sets containing u and v
        if (j != k)                // accept the edge only if it joins two sets
        {
            i++;
            t[i][1] = u;
            t[i][2] = v;
            mincost += cost[u][v];
            Union(j, k);
        }
    }
    if (i != n-1) printf("No spanning tree\n");
    else return (mincost);
}
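The same algorithm in runnable Python (a sketch: sorting the edge list stands in for the heap, and a simple union-find implements Find and Union):

```
def kruskal(n, edges):
    # edges: list of (cost, u, v) with vertices numbered 0..n-1.
    parent = list(range(n))

    def find(x):
        # Find the set representative, with simple path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for cost, u, v in sorted(edges):     # edges in increasing cost order
        ru, rv = find(u), find(v)
        if ru != rv:                     # accept only edges joining two sets
            parent[ru] = rv              # Union
            mst.append((u, v))
            total += cost
    if len(mst) != n - 1:
        raise ValueError("No spanning tree")
    return total, mst

print(kruskal(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)]))  # (3, [(0, 1), (1, 2)])
```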

9)
10 a) The time complexity of the program is Θ(sn), where s is the number of terms in the result set. Hence we can write the worst-case complexity as O(n²).

b)Principle of Optimality
An optimal sequence of decisions has the property that whatever the initial state and decisions are,
the remaining decisions must constitute an optimal decision sequence with regard to the state
resulting from the first decision.

12) There are n jobs to be processed on a machine.

• Each job i has a deadline di ≥ 0 and a profit pi ≥ 0.
• The profit pi is earned iff the job is completed by its deadline.
• A job is completed if it is processed on the machine for unit time.
• Only one machine is available for processing jobs.
• Only one job is processed at a time on the machine.
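A greedy sketch of the standard job sequencing algorithm implied by these rules (assumptions: unit-time jobs, jobs considered in decreasing order of profit, each job placed in the latest free slot before its deadline; the names are mine):

```
def job_sequencing(jobs):
    # jobs: list of (profit, deadline) pairs for unit-time jobs.
    jobs = sorted(jobs, reverse=True)   # consider highest profit first
    max_d = max(d for _, d in jobs)
    slot = [None] * (max_d + 1)         # slot[t] holds the job run at time t
    total = 0
    for p, d in jobs:
        # Place the job in the latest free slot at or before its deadline.
        for t in range(d, 0, -1):
            if slot[t] is None:
                slot[t] = p
                total += p
                break
    return total, [p for p in slot[1:] if p is not None]

print(job_sequencing([(100, 2), (10, 1), (15, 2), (27, 1)]))
# (127, [27, 100]): the jobs with profits 27 and 100 are scheduled
```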
Branch and Bound, Lower Bound Theory

1. Explain the general method of branch and bound? [12M]

Answer:

● Branch and Bound is another method to systematically search a solution space.


● Just like backtracking, we will use bounding functions to avoid generating subtrees that do
not contain an answer node.
● The term “branch and bound” refers to all state space search methods in which all children of the E-node are generated before any other live node can become the E-node.
● In branch and bound terminology, FIFO search is a BFS-like state space search: the list of live nodes is a first-in-first-out (FIFO) list, i.e., a queue.
● LIFO search is a D-search-like state space search: the list of live nodes is a last-in-first-out (LIFO) list, i.e., a stack.
● Bounding functions are used to help avoid generating subtrees that do not contain an answer node.

2. Apply branch and bound to the 0/1 knapsack problem and elaborate it? [8M]

Answer:

Steps of Branch and Bound for 0/1 Knapsack:

Step 1: Initial Setup


● Start with the root node, which corresponds to no items being chosen and no weight
being used.
● Set the initial best solution to 0 (or negative infinity, depending on the
implementation).

Step 2: Branching

● From the root node, consider the two possible branches:


○ Branch 1: Include the first item (i.e., select the item into the knapsack).
○ Branch 2: Exclude the first item (i.e., don't select the item).
● For each branch, create a new subproblem with updated weight and value.

Step 3: Bounding

● For each node (subproblem), calculate the upper bound on the maximum value
achievable from that node:
○ If including the item in the knapsack would exceed the capacity, exclude that
item.
○ If including the item does not exceed the capacity, add its value and subtract
its weight from the remaining capacity.
○ Use the greedy strategy to fill the remaining capacity with the most valuable
items (based on value-to-weight ratio).

Step 4: Pruning

● If the upper bound at a node is less than the best solution found so far, prune that
branch (i.e., do not explore it further).
● If the upper bound is greater than the best solution, continue branching.

Step 5: Termination

● The algorithm terminates when all nodes have been either explored or pruned.
● The best solution found during the exploration is the optimal solution.

3. Explain the method of reduction to solve the TSP problem using branch and bound?
[12M]

Answer:

Steps to Solve TSP Using Branch and Bound with Reduction:

1. Initial Setup:
○ Start with the cost matrix (or distance matrix), where the element at position (i, j) represents the cost (distance) of traveling from city i to city j.
○ The goal is to find the shortest possible cycle that visits each city exactly once and returns to the starting city.
2. Reduction Techniques: The key idea is to reduce the cost matrix to help compute a lower bound on the optimal solution. These reductions help eliminate unpromising branches early in the search (see the sketch after this list).
○ Row Reduction: For each row of the cost matrix, subtract the smallest value in the row from every element in that row.
■ This step ensures that every row has at least one zero, which is useful for bounding.
○ Column Reduction: For each column of the cost matrix, subtract the smallest value in the column from every element in that column.
■ Similarly, this step ensures that every column has at least one zero.
3. The resulting matrix after these reductions is called the reduced cost matrix. After these reductions, the sum of all the minimum values subtracted from each row and column gives the lower bound (L) on the cost of the optimal tour.
4. Branching (Exploring Partial Solutions):
○ The search space for TSP consists of all possible permutations of cities, and
the goal is to explore this space efficiently. In the Branch and Bound
approach, we create a search tree where each node represents a partial tour.
○ At each node, we decide which cities are still to be visited, and we explore
further branches by choosing the next city to visit.
○ We branch out by considering all possible next cities to visit from the current city, generating new subproblems. For example, if the salesman is currently at city i and we still need to visit cities j, k and l, we generate three branches, one for each possible next city (i.e., j, k, and l).
5. Bounding:
○ For each node in the search tree, we calculate a lower bound (cost of the
best possible solution that could be obtained from this partial tour). This lower
bound is calculated based on the reduced cost matrix, which gives an
approximation of the minimum cost to complete the tour from the current
node.
○ If the lower bound of a node exceeds the current best solution (known as the
best bound or best tour found so far), we prune that branch (i.e., stop
exploring that subproblem) because it cannot lead to a better solution.
○ If a complete tour is found (i.e., a leaf node in the search tree), we check if the
total cost is less than the current best solution, and if so, update the best
solution.
6. Reduction After Each Branch:
○ After each branching step, the reduced cost matrix is updated to reflect the
new partial solution. This update involves adjusting the matrix to account for
the fixed decisions (i.e., cities already visited) and recalculating the lower
bound.
○ As the search tree grows, the reduced cost matrix is recalculated and used to
prune branches that cannot possibly lead to a better solution than the current
best.
7. Termination:
○ The algorithm continues branching and bounding until all branches are either
explored or pruned. The best solution found during the search is the optimal
solution to the TSP.
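A minimal sketch of the row and column reduction step (assumptions: a square cost matrix with float('inf') marking forbidden entries; the function name is mine and it returns the lower-bound contribution):

```
INF = float('inf')

def reduce_matrix(cost):
    # Row and column reduction of a TSP cost matrix; INF marks forbidden moves.
    # Returns the sum of the subtracted minima, i.e. the lower-bound contribution.
    n = len(cost)
    total = 0
    for row in cost:                               # row reduction
        m = min(row)
        if 0 < m < INF:
            total += m
            for j in range(n):
                if row[j] < INF:
                    row[j] -= m
    for j in range(n):                             # column reduction
        m = min(cost[i][j] for i in range(n))
        if 0 < m < INF:
            total += m
            for i in range(n):
                if cost[i][j] < INF:
                    cost[i][j] -= m
    return total

cost = [[INF, 10, 15],
        [10, INF, 20],
        [15, 20, INF]]
print(reduce_matrix(cost))  # 40: every row and column now contains a zero
```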
4. Explain the principles of FIFO branch and bound? [8M]

Answer:

5. a. Explain the properties of LC-search? [6M]


b. Explain control abstraction of LC-branch and bound? [6M]

Answer:
A)
● A lower bound for a problem is the worst-case running time of the best possible algorithm
for that problem.
● To prove a lower bound of Ω(n lg n) for sorting, we would have to prove that no algorithm, however smart, could possibly be faster, in the worst case, than n lg n.
● The Decision Tree method is a common technique used to establish lower bounds on the
time complexity of algorithmic problems.
● It provides a framework for proving that a particular problem or computational task cannot
be solved more efficiently than a certain lower bound.
● The key idea is to construct a decision tree that represents different possible executions of
an algorithm and analyze the height of this tree to determine the lower bound.
● This method is particularly useful for establishing lower bounds in the context of
comparison-based sorting and searching algorithms.

B)
6. Briefly explain the FIFO branch and bound solution with an example? [12M]

Answer:
● Branch and Bound is another method to systematically search a solution space.
● Just like backtracking, we will use bounding functions to avoid generating subtrees that do
not contain an answer node.
● The term “branch and bound” refers to all state space search methods in which all children of the E-node are generated before any other live node can become the E-node.
● In branch and bound terminology, FIFO search is a BFS-like state space search: the list of live nodes is a first-in-first-out (FIFO) list, i.e., a queue.
● LIFO search is a D-search-like state space search: the list of live nodes is a last-in-first-out (LIFO) list, i.e., a stack.
● Bounding functions are used to help avoid generating subtrees that do not contain an answer node.

● In both LIFO and FIFO branch and bound, the selection rule for the next E-node is rigid and blind.
● The selection rule for the next E-node does not give any preference to a node that has a
very good chance of getting the search to an answer node quickly.
● The search for an answer node can be speeded by using an “intelligent” ranking function
c( ) for live nodes.
● The next E-node is selected on the basis of this ranking function.

7. Briefly explain the LC branch and bound solution with an example? [12M]
Answer:
● The difficulty with using either of these “ideal” cost functions is that computing the cost of a node usually involves a search of the subtree rooted at x for an answer node.
● Hence, by the time the cost of a node is determined, that subtree has been searched, and there is no need to explore x again.
● Therefore the search algorithm usually ranks nodes based only on an estimate, ĝ(·), of their cost.

Let ĝ(x) be an estimate of the additional effort needed to reach an answer node from x. Node
x is assigned a rank using function ĉ(.) such that
ĉ(x) = f(h(x)) + ĝ(x)
where h(x) is the cost of reaching x from the root and f(.) is any non decreasing function.

● Hence the next node to be selected will be the one with least ĉ(x) value. Hence it is called
LC Search.
● In ĉ(x), if ĝ(x) = 0 and f(h(x)) is the level of x, then LC search generates nodes by level, i.e., it transforms to BFS (FIFO).
● In ĉ(x), if f(h(x)) = 0 and ĝ(x) >= ĝ(y) whenever y is a child of x, then LC search transforms to D-search (LIFO).

8. State 0/1 knapsack problem and design an algorithm of LC Branch and Bound and find
the solution for the knapsack instance with any example? [12M]

Answer:

The 0/1 Knapsack Problem is a classical problem in combinatorial optimization. It involves a


set of items, each with a weight and a value, and a knapsack that can hold a certain weight.
The goal is to select a subset of the items to maximize the total value without exceeding the
weight capacity of the knapsack.

Formally, the problem can be stated as:

● Given:
○ A set of n items, each with a weight wi and a value vi, for i = 1, 2, …, n.
○ A knapsack with a capacity W.
● Objective:
○ Find a subset of items such that the total weight does not exceed W, and the total value is maximized.

Branch and Bound for 0/1 Knapsack:

Branch and Bound (BB) is an algorithmic technique used to solve combinatorial optimization
problems, such as the 0/1 knapsack problem. It systematically explores all possible subsets
of items, pruning branches of the search tree that cannot lead to a better solution than the
current best one.
Key Concepts:

● Bounding function: Provides an estimate of the best possible solution that can be
obtained from a given node (partial solution). The bounding function is used to prune
branches.
● Node: A partial solution to the problem, represented by selecting some items and
excluding others.
● Pruning: When the bound indicates that a branch cannot lead to an optimal solution,
the branch is discarded.

Steps to Solve the 0/1 Knapsack Problem Using Branch and Bound:

For the knapsack instance with:

● Values: [60, 100, 120]


● Weights: [10, 20, 30]
● Knapsack capacity: 50

Step-by-Step Execution:

1. Initialization:
○ Sort items by value-to-weight ratio:
■ Item 1: 60/10 = 6
■ Item 2: 100/20 = 5
■ Item 3: 120/30 = 4
○ Sorted by descending ratio: [Item 1, Item 2, Item 3]
2. Root Node:
○ No items are selected, profit = 0, weight = 0.
○ The bound is calculated using the fractional knapsack approach, and the root
node is added to the queue.
3. Branching:
○ For each node, two child nodes are generated (one with the current item
included and one with it excluded).
○ The algorithm proceeds by exploring these branches, calculating bounds at
each step, and pruning if necessary.
4. Solution:
○ The maximum profit found is 220, which corresponds to selecting items 2 and
3 (values 100 and 120) with a total weight of 50.
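A best-first branch and bound sketch for this instance (assumptions: a heap-ordered live-node list keyed by the greedy fractional bound; this is illustrative code, not the original answer):

```
import heapq

def knapsack_bb(values, weights, capacity):
    # Best-first branch and bound for the 0/1 knapsack.
    # Items are re-ordered by decreasing value/weight ratio so that the
    # greedy fractional bound below is a valid upper bound.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    n = len(v)

    def bound(level, value, weight):
        # Upper bound: fill remaining capacity greedily, allowing a fraction
        # of the first item that does not fit (fractional knapsack).
        b, cap = value, capacity - weight
        for i in range(level, n):
            if w[i] <= cap:
                cap -= w[i]
                b += v[i]
            else:
                b += v[i] * cap / w[i]
                break
        return b

    best = 0
    # Live-node list: a max-heap keyed by bound (negated for heapq).
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (negated bound, level, value, weight)
    while heap:
        nb, level, value, weight = heapq.heappop(heap)
        if -nb <= best or level == n:
            continue  # prune: this branch cannot beat the best known solution
        # Branch 1: include item `level` if it fits.
        if weight + w[level] <= capacity:
            nv, nw = value + v[level], weight + w[level]
            best = max(best, nv)
            heapq.heappush(heap, (-bound(level + 1, nv, nw), level + 1, nv, nw))
        # Branch 2: exclude item `level`.
        heapq.heappush(heap, (-bound(level + 1, value, weight),
                              level + 1, value, weight))
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```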

9. Explain any one application of branch and bound? [12M]

Answer:
● The most straightforward way to solve the puzzle problem is to search the state space for the goal state and use the path from the initial state to the goal state as an answer.
● There are 16! different arrangements of the tiles on the frame. Only half of them are reachable from any given initial state.
● If we number the frame positions 1 to 16, position i is the frame position containing the tile numbered i in the goal arrangement. Position 16 is the empty spot.
● Let position(i) be the position number, in the initial state, of the tile numbered i.
● Then position(16) will denote the position of the empty spot.
10. Apply the branch-and- bound technique in solving the travelling salesman problem?
[12M]

Answer:
• The traveling salesperson problem finds application in a variety of situations.
• Suppose we have to route a postal van to pick up mail from mailboxes located at n different sites.
• An (n + 1)-vertex graph can be used to represent the situation.
• One vertex represents the post office, from which the postal van starts and to which it must return.
• Edge <i, j> is assigned a cost equal to the distance from site i to site j.
• The route taken by the postal van is a tour, and we are interested in finding a tour of minimum length.
Basic Traversal and Search Techniques, Backtracking
1. Explain any one application of backtracking with an example? [8M]
One of the well-known applications of backtracking is solving the N-Queens problem.
2. Describe in detail the 8-queens problem using backtracking? [8M]

Backtracking is used to place queens one by one in different columns and check for possible
conflicts. If placing a queen in a column violates the constraints, the algorithm backtracks
and tries a different column.

Step-by-Step Explanation:

1. Start with an empty board.


2. Place the first queen in the first row and first column.
3. Move to the second row and place a queen in the first column.
○ If there is a conflict (i.e., another queen can attack), try the next column.
○ Repeat this until a safe column is found or all columns are tried.
4. Place the queen if a safe column is found and move to the next row.
5. If no safe column is found in the current row, backtrack to the previous row and move
the queen to the next column.
6. Continue this process until all 8 queens are placed or all possibilities are exhausted.

Example of the Algorithm:

Consider the following partial steps for the 8-Queens problem:

1. Row 1: Place Q1 at (1,1).


2. Row 2: Place Q2 at (2,3) (first safe column found).
3. Row 3: Place Q3 at (3,5) (first safe column found).
4. Row 4: No safe column is available. Backtrack to Row 3.
5. Row 3: Move Q3 to the next safe column, e.g., (3,6).
6. Row 4: Place Q4 at (4,2) (first safe column found).
7. Continue until all queens are safely placed or backtrack as needed.
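The backtracking search described above can be sketched in Python (an illustrative implementation; the set-based column and diagonal bookkeeping is my own choice, not from the notes):

```
def solve_n_queens(n=8):
    # Backtracking: place one queen per row, trying columns left to right.
    cols, diag1, diag2 = set(), set(), set()   # occupied columns and diagonals
    board = []                                 # board[row] = column of that row's queen

    def place(row):
        if row == n:
            return True                        # all queens placed
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                       # square is attacked: try next column
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            board.append(col)
            if place(row + 1):
                return True
            # Backtrack: remove the queen and try the next column.
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)
            board.pop()
        return False                           # no safe column in this row

    return board if place(0) else None

print(solve_n_queens(8))  # first solution found, e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```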
3. Explain the 0/1 knapsack problem using backtracking with an example?
8. What are spanning trees? Explain with suitable examples.

● To compute the spanning tree with minimum cost, the following considerations are
taken:
● Let G = (V,E) be an undirected connected graph with V vertices and E edge.
● A sub-graph t = (V,E’) of the G is a Spanning tree of G iff ‘t’ is a tree.
● A tree is defined to be an undirected, acyclic and connected graph (or more simply, a
graph in which there is only one path connecting each pair of vertices).
● Assume there is an undirected, connected graph G.
● A spanning tree is a subgraph of G, is a tree, and contains all the vertices of G.
● A minimum spanning tree is a spanning tree, but has weights or lengths associated
with the edges, and the total weight of the tree (the sum of the weights of its edges)
is at a minimum.
● Kruskal’s Algorithm
● Prim’s Algorithm
10. Determine Sum of subsets problem?
1 a)

The P=NP Problem


It is not hard to show that every problem in P is also in NP, but it is unclear whether every problem in NP is also in P; this is the P = NP problem. The best we can say is that thousands of computer scientists have been unsuccessful for decades in designing polynomial-time algorithms for some problems in the class NP. This constitutes overwhelming empirical evidence that the classes P and NP are indeed distinct, but no formal mathematical proof of this fact is known.

2) NP-complete problems
A decision problem E is NP-complete if every problem in the class NP is polynomial-time reducible
to E. The Hamiltonian cycle problem, the decision versions of the TSP and the graph coloring
problem, as well as literally hundreds of other problems are known to be NP-complete.
NP-hard problems

Optimization problems whose decision versions are NP-complete are called NP-hard.

3)

4)
Class P
P is the set of all decision problems solvable by deterministic algorithms in polynomial time.
EXAMPLE: The Minimum Spanning Tree Problem is in the class P.
The class NP
NP stands for Nondeterministic Polynomial. NP is the set of all decision problems solvable by nondeterministic algorithms in polynomial time.
A problem that is NP-Complete has the property that it can be solved in polynomial time iff all other NP-Complete problems can also be solved in polynomial time. If an NP-Hard problem can be solved in polynomial time, then all NP-Complete problems can be solved in polynomial time.
All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-Complete.
5) NP-complete problems
A decision problem E is NP-complete if every problem in the class NP is polynomial-time reducible
to E. The Hamiltonian cycle problem, the decision versions of the TSP and the graph coloring
problem, as well as literally hundreds of other problems are known to be NP-complete.
NP-hard problems
Optimization problems whose decision versions are NP-complete are called NP-hard.

6)
