Analysis and Design of Algorithm Assignment
SET – 1
1) a)
Properties of an Algorithm
An algorithm is a finite set of instructions to solve a problem. It should possess the following properties: input, output, definiteness (each step is unambiguous), finiteness (it terminates after a finite number of steps), and effectiveness (each step is basic enough to be carried out exactly).
Branch and Bound (B&B) Technique
Steps:
i) Branching: Build a state-space tree with each level being item inclusion/exclusion.
ii) Bounding: Find an upper bound for every node. If the bound is below the best solution found so far, prune the branch.
iii) Exploration: Perform depth-first or best-first search to traverse the solution space.
Example:
For items (w, v) = {(2, 40), (3, 50), (4, 70)} and capacity W = 5, B&B explores the subsets, prunes those that are infeasible or whose bound cannot beat the best value found so far, and efficiently arrives at the optimal value of 90 (the items of weights 2 and 3).
B&B is extensively applied to the travelling salesman problem, job scheduling, and integer programming.
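A minimal Python sketch of this idea on the instance above (the function names and the fractional-knapsack bound used for pruning are illustrative choices, not part of the original question):

# Branch and bound for the 0/1 knapsack: DFS over include/exclude decisions,
# pruning any node whose optimistic bound cannot beat the best value found so far.
def bound(i, weight, value, items, W):
    # Optimistic bound: greedily add the remaining items, allowing a fraction of one item.
    b, w = value, weight
    for wt, val in items[i:]:
        if w + wt <= W:
            w += wt
            b += val
        else:
            b += val * (W - w) / wt   # fractional part of the next item
            break
    return b

def knapsack_bb(items, W):
    items = sorted(items, key=lambda x: x[1] / x[0], reverse=True)  # by value density
    best = 0

    def dfs(i, weight, value):
        nonlocal best
        if weight <= W:
            best = max(best, value)
        if i == len(items) or weight > W:
            return
        if bound(i, weight, value, items, W) <= best:
            return                                  # prune this branch
        wt, val = items[i]
        dfs(i + 1, weight + wt, value + val)        # include item i
        dfs(i + 1, weight, value)                   # exclude item i

    dfs(0, 0, 0)
    return best

print(knapsack_bb([(2, 40), (3, 50), (4, 70)], 5))  # prints 90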
1) b)
General Plan to Examine the Efficiency of Recursive Algorithms
The performance of a recursive program is normally modeled using recurrence relations and asymptotic complexity analysis (Big-O notation). The overall strategy is as follows:
i) Decide on a parameter that measures the input size and identify the algorithm's basic operation.
ii) Set up a recurrence relation, with an initial condition, for the number of times the basic operation is executed.
iii) Solve the recurrence, or at least establish the order of growth of its solution.
Top-Down versus Bottom-Up Heap Construction
In top-down heap building, items are added one at a time to an initially empty heap,
and the heap property is preserved at each step. It is an incremental process where every
new item inserted is placed at the next free position and subsequently heapified up (or
percolated up) to avoid violating the heap property. For instance, consider inserting
values {5, 10, 15, 20, 25} into an initially empty max-heap. The initial element 5 is
placed at the root. Then 10 is inserted as a child of 5 and moved up because it is larger.
This repeats until all values are inserted and each one floats up if required. The time
complexity of this method is O(n log n) since every insertion operation is of O(log n)
time complexity owing to the percolation process.
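A short Python sketch of this top-down construction, assuming a max-heap stored in a list (the helper names are illustrative):

# Top-down max-heap construction: insert elements one by one and percolate up.
def heap_insert(heap, value):
    heap.append(value)                      # place at the next free position
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] < heap[i]:
        parent = (i - 1) // 2
        heap[i], heap[parent] = heap[parent], heap[i]   # float the new value up
        i = parent

heap = []
for v in [5, 10, 15, 20, 25]:
    heap_insert(heap, v)                    # each insertion costs O(log n)
print(heap)                                 # [25, 20, 10, 5, 15]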
Bottom-up heap construction, on the other hand, begins with an unsorted array and
converts it into a heap by heapifying down (or percolating down) the non-leaf nodes. Instead of inserting elements individually, all elements are placed in their respective positions first, and then heapification is applied from the last non-leaf node (at index n/2 - 1) up to the root. For example, consider an array {15, 10, 5, 20, 25} arranged in a
complete binary tree. Beginning from the bottommost non-leaf node, we heapify down,
where each node is compared to its children and swapped if needed to ensure the heap
property. This is repeated all the way up to the root, ensuring that the topmost element
is the largest (or smallest) in a max-heap (or min-heap). The bottom-up strategy has an
O(n) time complexity, which is much more efficient than the top-down strategy for
large sets.
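A corresponding Python sketch of the bottom-up construction (one possible implementation among many):

# Bottom-up max-heap construction: heapify down every non-leaf node,
# starting from index n//2 - 1 and moving toward the root.
def sift_down(a, i, n):
    while True:
        largest, left, right = i, 2 * i + 1, 2 * i + 2
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]   # swap with the larger child
        i = largest

def build_heap(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):       # last non-leaf node down to the root
        sift_down(a, i, n)

a = [15, 10, 5, 20, 25]
build_heap(a)
print(a)                                      # [25, 20, 5, 15, 10]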
A notable difference between the two methods is that the top-down method maintains the heap property at every stage during insertion, whereas the bottom-up method builds the heap as a batch operation by reorganizing the structure in place.
is favored for building a heap out of a pre-existing array since it offers better performance. For instance, in Heap Sort, the heap is usually constructed using the bottom-up technique for efficient sorting. However, the top-down
technique is applicable where elements come in dynamically, as in the case of priority
queues where elements are inserted and deleted constantly. In summary, although both
approaches accomplish the same objective of building a heap, the bottom-up method is
usually faster and more efficient, while the top-down method is simpler and better suited to dynamic situations.
3) a)
Divide and Conquer is an effective algorithmic approach that makes sorting much more efficient than basic methods such as Bubble Sort and Insertion Sort. This method involves dividing a problem into subproblems, solving each of them independently, and
then merging their solutions to obtain the final answer. Two of the most popular sorting
algorithms based on this technique are Merge Sort and Quick Sort.
One of the most significant benefits of Divide and Conquer sorting algorithms is their
time complexity. Merge Sort has a worst-case time complexity of 𝑂(𝑛 𝑙𝑜𝑔 𝑛), so it is
far faster than 𝑂(𝑛²) sorting algorithms for large datasets. Likewise, Quick Sort averages 𝑂(𝑛 𝑙𝑜𝑔 𝑛) time, though its worst case is 𝑂(𝑛²). This is because the problem size is halved at each level of recursion, so there are only about 𝑙𝑜𝑔 𝑛 levels and far fewer total operations than in quadratic methods.
Another significant advantage of Divide and Conquer sorting techniques is that they
can efficiently handle large amounts of data. Merge Sort, for example, is stable and performs well even for external (disk-based) sorting and for linked lists. It preserves
elements' original relative order, and this is highly important in various applications.
Quick Sort, by contrast, sorts in place and needs no auxiliary array, so it is more memory-efficient than Merge Sort.
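As a brief illustration of the divide-and-conquer structure, here is a minimal Merge Sort sketch in Python (one possible implementation):

# Merge Sort: divide the list, sort the halves recursively, then merge them.
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:               # "<=" keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]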
In general, Divide and Conquer sorting algorithms offer an organized and scalable way
of sorting vast data. The balance between efficiency and flexibility that they achieve
makes them more desirable for most real-world scenarios, including database management, search algorithms, and computational biology.
3) b)
Insertion Sort is a simple and intuitive sorting algorithm that works by building a sorted
array one element at a time. It is particularly useful for small datasets and nearly sorted
lists. The performance of Insertion Sort is analyzed based on three cases: best case,
worst case, and average case.
The best case scenario for Insertion Sort occurs when the input array is already sorted
in ascending order. In this case, each element is compared with the previous one, but
no shifting is required. Since only 𝑛 − 1 comparisons are made and no swaps occur,
the time complexity in the best case is 𝑂(𝑛). This makes Insertion Sort efficient for
nearly sorted data.
The worst case happens when the input array is sorted in reverse order. Each element
must be compared with all the previous elements and shifted to its correct position at
the beginning of the array. This results in 𝑂(𝑛²) time complexity, as the elements together require (𝑛 − 1) + (𝑛 − 2) + . . . + 1 = (𝑛² − 𝑛)/2 comparisons and shifts in total. This quadratic complexity makes Insertion Sort inefficient for large, randomly ordered datasets.
The average case occurs when the elements of the array are in random order. On average, each element is compared with about half of the sorted portion before being placed in its correct position. This still results in 𝑂(𝑛²) complexity, similar to the worst case, although the exact number of operations depends on the level of disorder in the array.
Overall, while Insertion Sort is not optimal for large datasets, it is valuable for small
arrays and nearly sorted data due to its simplicity and adaptability.
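A short Python sketch of Insertion Sort makes the case analysis concrete (an already-sorted input triggers no shifts, which is why the best case is O(n)):

# Insertion Sort: grow a sorted prefix by inserting each element into place.
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot to the right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # place the key in its correct position
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]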
SET – 2
Time Complexity:
The time complexity of this method is O(nW), with n being the number of items and W
being the capacity of the knapsack. Because it fills an n × W table of subproblems, it is much more efficient than the exponential brute-force method.
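A compact Python sketch of the tabulation described above (variable names are illustrative):

# Bottom-up DP for the 0/1 knapsack: dp[i][w] is the best value achievable
# using the first i items with capacity w.  Time and space are O(nW).
def knapsack_dp(items, W):
    n = len(items)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wt, val = items[i - 1]
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]                                # skip item i
            if wt <= w:
                dp[i][w] = max(dp[i][w], dp[i - 1][w - wt] + val)  # take item i
    return dp[n][W]

print(knapsack_dp([(2, 40), (3, 50), (4, 70)], 5))  # prints 90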
Conclusion:
Dynamic Programming solves the 0/1 Knapsack Problem optimally in pseudo-polynomial O(nW) time and remains practical for moderately large inputs, unlike the recursive brute-force method. By sidestepping unnecessary repetition of work, DP is efficient and is therefore the tool of choice for such optimization problems.
5) The binomial coefficient, denoted as 𝐶(𝑛, 𝑘) or "n choose k", represents the number of
ways to choose k elements from a set of n elements without considering the order. It is
a fundamental concept in combinatorics and is defined mathematically as:
𝐶(𝑛, 𝑘) = 𝑛! / (𝑘! (𝑛 − 𝑘)!)
While this formula is straightforward, computing factorials directly can lead to large
numbers and inefficient calculations. Instead, the Dynamic Programming (DP) approach provides an efficient way to compute binomial coefficients without excessive recomputation, using the Pascal's Triangle recurrence
𝐶(𝑛, 𝑘) = 𝐶(𝑛 − 1, 𝑘 − 1) + 𝐶(𝑛 − 1, 𝑘), with 𝐶(𝑛, 0) = 𝐶(𝑛, 𝑛) = 1.
This recurrence relation means that to compute C(n, k), we only need values from the
previous row, significantly reducing computational effort compared to the factorial
method.
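A short Python sketch of this row-by-row computation (a space-saving single-row variant; the names are illustrative):

# DP for binomial coefficients: build Pascal's Triangle row by row,
# keeping only one row since each entry needs only the previous row.
def binomial(n, k):
    C = [0] * (k + 1)
    C[0] = 1
    for row in range(1, n + 1):
        # update right to left so C[j - 1] still holds the previous row's value
        for j in range(min(row, k), 0, -1):
            C[j] = C[j] + C[j - 1]
    return C[k]

print(binomial(5, 2))  # prints 10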
Conclusion
The Dynamic Programming approach efficiently computes binomial coefficients using
Pascal’s Triangle properties. It avoids unnecessary factorial calculations and reduces
redundant computations. This method is widely used in combinatorial problems, probability theory, and algorithmic applications such as polynomial expansions and combinatorial counting.
6) a)
The Greedy Choice Property is a fundamental property of greedy algorithms: it says that an optimal solution can be constructed incrementally by making a series of locally optimal decisions. That is, at every step of the algorithm, the choice that appears best at that moment is made without looking ahead, yet it still leads to a globally optimal solution.
One of the clearest examples of this property is Activity Selection, where the objective is to choose the largest number of non-overlapping activities based on their start and finish times. By always selecting the activity that finishes first and then repeatedly choosing the next compatible one, we obtain an optimal solution without having to compare
every possible combination.
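A minimal Python sketch of this greedy selection (the activity list is just an illustrative example):

# Greedy activity selection: sort by finish time, then repeatedly take the first
# activity that starts after the previously chosen one finishes.
def select_activities(activities):
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# prints [(1, 4), (5, 7), (8, 11)]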
Another example is Huffman Coding, which is used for data compression. The algorithm
builds an optimal prefix code by always combining the two smallest frequency nodes
together first, resulting in the most efficient encoding.
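A compact Python sketch of this merging process using a priority queue (the frequency table is an illustrative example):

import heapq

# Huffman coding: repeatedly merge the two lowest-frequency nodes; symbols in the
# lower-frequency node get a leading '0', those in the other node a leading '1'.
def huffman_codes(freq):
    heap = [[w, i, [sym, ""]] for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], counter] + lo[2:] + hi[2:])
        counter += 1
    return dict(heapq.heappop(heap)[2:])

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
# e.g. {'a': '0', 'c': '100', 'b': '101', 'f': '1100', 'e': '1101', 'd': '111'}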
Not all problems, however, have the Greedy Choice Property. Problems such as the 0/1
Knapsack Problem need dynamic programming instead, as making the locally optimal
choice at each step does not necessarily result in the best solution overall.
Therefore, the Greedy Choice Property is a necessary condition for a problem to be solved optimally by a greedy algorithm.
6) b)
The sorting problem can be understood using a decision tree, which is a binary tree
representation of the comparisons made by a comparison-based sorting algorithm to determine the correct order of elements. The decision tree is used to analyze the complexity of sorting algorithms, particularly in terms of their lower bound.
For example, consider sorting three elements (𝑎, 𝑏, 𝑐). The decision tree starts with a
comparison between a and b:
i) If 𝑎 < 𝑏, we proceed with further comparisons involving 𝑐.
ii) If 𝑎 > 𝑏, a different path is followed. This process continues until all elements are
correctly arranged in one of the leaf nodes, representing a sorted sequence.
Complexity Analysis
Since there are 𝑛! possible permutations of 𝑛 elements, the decision tree must have at
least 𝑛! leaves to account for all possible sorted orders. The minimum height of a binary
tree with 𝑛! leaves is:
ℎ ≥ 𝑙𝑜𝑔2(𝑛!) ≈ 𝑛 𝑙𝑜𝑔2 𝑛 (by Stirling's approximation)
Thus, decision trees show that any comparison-based sorting algorithm needs Ω(𝑛 𝑙𝑜𝑔 𝑛) comparisons in the worst case, demonstrating the fundamental limit of comparison-based sorting algorithms.
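A quick numeric check of this bound in Python (purely illustrative):

import math

# Lower bound h >= log2(n!) on the height of a sorting decision tree.
for n in [3, 5, 10]:
    leaves = math.factorial(n)                  # n! possible orderings
    min_height = math.ceil(math.log2(leaves))   # minimum height of a binary tree with n! leaves
    print(n, leaves, min_height)
# n = 3 -> 6 leaves, height >= 3; n = 10 -> 3,628,800 leaves, height >= 22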