DAA_ans

The document discusses various algorithms and methods in computer science, including recursion techniques, the Master Theorem for solving recurrence relations, and the Union-Find algorithm for disjoint sets. It also covers greedy algorithms, dynamic programming, backtracking algorithms like the N-Queen and 8-Queen problems, and sorting techniques such as Heap Sort and External Sorting. Additionally, it explains the Bellman-Ford algorithm for shortest paths in graphs with negative weights and the characteristics of greedy algorithms.


1.

The recursion tree method is often preferred over the substitution method for
solving a recurrence relation. In the recursion tree method, we draw a recurrence
tree and calculate the time taken by every level of the tree, then sum the work
done at all levels. To draw the tree, we start from the given recurrence and keep
expanding until we find a pattern among the levels; the pattern is typically an
arithmetic or geometric series. In the substitution method, by contrast, we make a
guess for the solution and then use mathematical induction to prove whether the
guess is correct.

2. Master's Theorem:

The master method is a formula for solving recurrence relations of the form:

T(n) = aT(n/b) + f(n),


where,
n = size of input
a = number of subproblems in the recursion
n/b = size of each subproblem. All subproblems are assumed
to have the same size.
f(n) = cost of the work done outside the recursive call,
which includes the cost of dividing the problem and
cost of merging the solutions

Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive
function. Under these conditions, T(n) has the following asymptotic bounds:

1. If f(n) = O(n^(log_b a − ϵ)) for some constant ϵ > 0, then T(n) = Θ(n^(log_b a)).

2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).

3. If f(n) = Ω(n^(log_b a + ϵ)) for some constant ϵ > 0, then T(n) = Θ(f(n)).
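As a worked example of applying the theorem, consider the merge sort recurrence (a standard illustration, not taken from the text above):

```latex
T(n) = 2T(n/2) + n
  \quad\Rightarrow\quad a = 2,\; b = 2,\; f(n) = n .

n^{\log_b a} = n^{\log_2 2} = n^1 = \Theta(f(n))
  \quad\Rightarrow\quad \text{case 2 applies:}

T(n) = \Theta\!\left(n^{\log_2 2} \log n\right) = \Theta(n \log n).
```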

3. If a function calls itself and that recursive call is the last statement in the
function, it is called tail recursion. Nothing is executed after the call, which is
why it is called tail recursion.

#include <stdio.h>

void fun(int n)
{
    if (n > 0)
    {
        printf("%d ", n);
        fun(n - 1);      /* last statement: a tail call */
    }
}

int main(void)
{
    fun(3);              /* prints: 3 2 1 */
    return 0;
}

Tail-recursive functions are considered better than non-tail-recursive functions
because tail recursion can be optimized by the compiler.

4. Turing machines are an important tool for studying the limits of computation and
for understanding the foundations of computer science. They provide a simple yet
powerful model of computation that has been widely used in research and has had a
profound impact on our understanding of algorithms and computation.

5. Union Find Algorithm:

An algorithm that implements the find and union operations on a disjoint-set data
structure. Find returns the root (representative) parent of an element and so
determines whether two elements are in the same set. If two elements are in
different sets, union merges the smaller set into the larger one. At the end, we
obtain the connected components of a graph.

We are given 10 individuals say, a, b, c, d, e, f, g, h, i, j

Following are relationships to be added:


a <-> b
b <-> d
c <-> f
c <-> i
j <-> e
g <-> j

Given queries such as whether a is a friend of d, we basically need to create the
following 4 groups and maintain a quickly accessible connection among group items:
G1 = {a, b, d}
G2 = {c, f, i}
G3 = {e, g, j}
G4 = {h}

6. Characteristics of Greedy Algorithms:

~ *Making locally optimal choices*

A greedy algorithm selects the best option available at that specific moment at
each step without taking the decision’s long-term implications into account.
~ *No backtracking*

A greedy algorithm’s decisions are final and cannot be changed or undone after they
have been made. The algorithm keeps going without going back to its earlier
choices.

~ *Iterative process*

Greedy algorithms operate in a succession of iterative phases, each building on the
one before it.

~ *Efficiency of greedy algorithms*


Greedy algorithms frequently take few steps and are consequently computationally
fast, so they are often efficient in terms of time complexity. Nevertheless,
because greedy algorithms do not always produce the best feasible answer, this
efficiency can come at the expense of optimality.

7. 0/1 Knapsack Problem:

Given n items where each item has some weight and profit associated with it and
also given a bag with capacity W, [i.e., the bag can hold at most W weight in it].
The task is to put the items into the bag such that the sum of profits associated
with them is the maximum possible.

Note: The constraint here is we can either put an item completely into the bag or
cannot put it at all [It is not possible to put a part of an item into the bag].

Input: W = 4, profit[] = [1, 2, 3], weight[] = [4, 5, 1]
Output: 3

Explanation: There are two items which have weight less than or equal to 4. If we
select the item with weight 4, the possible profit is 1. And if we select the item
with weight 1, the possible profit is 3. So the maximum possible profit is 3. Note
that we cannot put both the items with weight 4 and 1 together as the capacity of
the bag is 4.

Input: W = 3, profit[] = [1, 2, 3], weight[] = [4, 5, 6]
Output: 0

8. The primary reason the greedy algorithm fails for the 0-1 Knapsack problem is
that it does not consider the possibility of excluding certain items to achieve a
better overall solution. The greedy approach only focuses on immediate gains,
choosing items with high value-to-weight ratios without considering the potential
impact on future decisions.

Consider a backpack with a weight capacity of 4, and items with the following
weights and values:

Item | Weight | Value | Value/Weight
A    | 3      | 1.8   | 0.6
B    | 2      | 1     | 0.5
C    | 2      | 1     | 0.5

If you apply greedy by value (or by value per weight — item A wins on both
criteria), you will first select item A, so the residual weight capacity will be 1.
Since items B and C each weigh more than that residual capacity, you won't be able
to add any more items, and the total value reached is 1.8. This is a feasible
solution but not an optimal one.

The optimal solution, however, is to choose items B and C, which together exactly
weigh the full capacity and have a combined value of 2.

9. Dynamic Programming is a technique in computer programming that helps to
efficiently solve a class of problems that have overlapping subproblems and the
optimal substructure property.

If a problem can be divided into subproblems, which in turn are divided into
smaller subproblems, and if these subproblems overlap, then the solutions to the
subproblems can be saved for future reference. In this way, repeated work is
avoided and efficiency is enhanced. This method of solving a problem is referred
to as dynamic programming.

Such problems involve repeatedly calculating the value of the same subproblems to
find the optimum solution.

Example: Calculating Fibonacci numbers.

Let n be the number of terms.

1. If n = 0, return 0.
2. If n = 1, return 1.
3. Else, return the sum of two preceding numbers.

Hence, we have the sequence 0, 1, 1, 2, 3. Here, we have reused the results of the
previous steps as shown below. This is called the dynamic programming approach.

F(0) = 0
F(1) = 1
F(2) = F(1) + F(0)
F(3) = F(2) + F(1)
F(4) = F(3) + F(2)
10. Matrix Chain Multiplication Algorithm:

1: **Initialize:**
- Create a 2D array m where m[i][j] will hold the minimum number of
scalar multiplications needed to compute the
matrix product A_i * A_{i+1} * … * A_j.
- Create a 2D array s where s[i][j] will hold the index of the matrix
after which the optimal split occurs.

2: **Matrix Dimensions:**
   - Let p be an array where p[i-1] denotes the number of rows in matrix A_i and
     p[i] denotes the number of columns in matrix A_i.
     Hence, p has a length of n + 1.

3: **Base Case:**
- For i = 1 to n, set m[i][i] = 0 (the cost of multiplying one matrix is zero).

4: **Compute Optimal Costs:**


- For chain length l from 2 to n:
- For i = 1 to n - l + 1:
- Set j = i + l - 1
- Initialize m[i][j] to infinity.
- For k = i to j - 1:
- Compute the cost of splitting the product at position k:
q = m[i][k] + m[k+1][j] + p[i-1] * p[k] * p[j]
- If q < m[i][j]:
- Update m[i][j] = q
- Update s[i][j] = k

5: **Return Results:**
- The minimum number of scalar multiplications needed is found in m[1][n].
- The optimal parenthesization can be retrieved from the s array
using a recursive function or a stack-based approach.

11. 8-Queen Problem (BackTracking Algorithm):

Step by step approach:

1: Initialize an empty 8×8 chessboard with all positions set to 0.

2: Start with the first row (row 0) and try placing a queen in each column.

3: For each placement, check if the position is safe (not attacked by any
previously placed queen). A position is unsafe if another queen is in the same
column or on the same diagonal.
4: If a safe position is found, place the queen (set position to 1) and recursively
try to place queens in subsequent rows. Otherwise, backtrack by removing the queen
and trying the next column.

5: If all rows are successfully filled (8 queens placed), a valid solution is found.

12. Bellman-Ford Algorithm:

1: Set initial distance to zero for the source vertex, and set initial distances to
infinity for all other vertices.

2: For each edge, check if a shorter distance can be calculated, and update the
distance if the calculated distance is shorter.

3: Check all edges (step 2) V−1 times, i.e., one fewer than the number of
vertices V.

4: Optional: Check for negative cycles.

13. The Union by Rank technique is used to optimize the Union operation by ensuring
that the smaller tree is always attached to the root of the larger tree. This
approach prevents the trees from becoming too imbalanced, which would lead to
inefficient find operations.

Example (naive union without rank — note how the tree degenerates into a chain,
which union by rank would avoid):

Let there be 4 elements: 0, 1, 2, 3

Initially, all elements are in their own subsets: 0 1 2 3

Do Union(0, 1)

1 2 3
/
0

Do Union(1, 2)

2 3
/
1
/
0

Do Union(2, 3)
3
/
2
/
1
/
0

14. N-Queen Problem (BackTracking Algorithm):

1: Place a queen in the first column of the first row.


2: Check if placing the queen in the current position conflicts with any previously
placed queens.
3: If there is no conflict, proceed to the next column. If there is a conflict,
backtrack to the previous column and try placing the queen in the next row.
4: Repeat steps 2 and 3 for all columns.
5: If all queens are placed successfully without conflicts, a solution is found.
6: If no solution is found after exploring all possibilities, backtrack to the
previous column and try placing the queen in the next row.

15. Graph Coloring Algorithm:

1: Create a recursive function that takes the graph, current index, number of
vertices, and color array.

2: If the current index is equal to the number of vertices, print the color
configuration in the color array.

3: Assign a color to the vertex from the range (1 to m). For every assigned color,
check if the configuration is safe (i.e., check that no adjacent vertex has the
same color) and recursively call the function with the next index; otherwise
return false.

4: If any recursive call returns true, break the loop and return true.
If no recursive call returns true, return false.

16. Job Scheduling Algorithm:

A set of jobs with deadlines and profits is taken as input by the job scheduling
algorithm, and a scheduled subset of jobs with maximum profit is obtained as the
final output.

Algorithm:
Step 1 − Find the maximum deadline value from the input set of jobs.
Step 2 − Arrange the jobs in descending order of their profits.
Step 3 − Select the jobs with the highest profits whose time slots do not
exceed the maximum deadline.
Step 4 − The selected set of jobs is the output.

17. The heap property:

The heap property says that the value of a parent is either greater than or equal
to (in a max-heap) or less than or equal to (in a min-heap) the value of its
children.

A heap is stored in memory sequentially, using a linear array.

18. Heap Sort algorithm:

1: Build a Max-Heap from the input array.
2: Swap the root (maximum value) with the last element in the array.
3: Reduce the heap size by 1 (excluding the sorted elements at the end).
4: Re-heapify the heap to restore the heap property.

19. Finding All Hamiltonian Cycles:

1: Start with an empty path and choose any vertex as the starting point.
2: Add the starting vertex to the path.
3: Recursively build the path by choosing the next unvisited vertex and adding it
to the path.
4: If the path contains all vertices and the last vertex has an edge to the
starting vertex, we have found a Hamiltonian cycle.
5: If the path does not meet the criteria for a Hamiltonian cycle, backtrack by
removing the last vertex from the path and trying a different unvisited vertex.
6: Continue this process until a Hamiltonian cycle is found or all possibilities
have been exhausted.

18. A negative-weight cycle is a cycle in a graph whose edges sum to a negative
value.

19. External Sorting is a category of sorting algorithms able to sort huge amounts
of data. This type of sorting is applied to data sets that are too large to be
held in main memory (RAM) and are therefore stored in secondary memory (hard
disk).

The idea used in external sorting is quite similar to merge sort. It also has two
phases, like merge sort: in the sort phase, chunks of data small enough to fit in
memory are sorted, and in the merge phase these sorted chunks are combined into a
single data set.

External sorting is used for a huge data set that cannot be processed in a single
pass: the data is divided into small chunks, and these chunks are sorted and then
stored in data files.

Algorithm:

Step 1: Read the input data from the file in chunks that fit in memory.

Step 2: Sort each of these mini data sets using merge sort.

Step 3: Store each sorted chunk in a file.

Step 4: Merge the sorted files.

20. Bellman-Ford is a single-source shortest path algorithm based on the bottom-up
approach of dynamic programming. It starts from a single vertex and calculates the
shortest path from the starting vertex to all the nodes in a weighted graph. There
are other shortest-path algorithms, such as Dijkstra's algorithm, but they work
correctly only when every edge weight in the graph is positive and give no
accurate solution when there is an edge with a negative weight.

So, Bellman-Ford's algorithm deals with graphs that have negative edge weights and
is guaranteed to calculate the correct shortest path between vertices, provided
there is no negative cycle reachable from the source.

Bellman-Ford algorithm is based on the “Principle of Relaxation”

Negative edge weights can give rise to a negative cycle in the graph. With
Dijkstra's algorithm it is very difficult to find the shortest path when there is
a negative edge, so the Bellman-Ford algorithm resolves this problem.

21. Floyd's Algorithm:

1. Start by updating the distance matrix by treating each vertex as a possible
intermediate node between all pairs of vertices.
2. Iterate through each vertex, one at a time. For each selected vertex k, attempt
to improve the shortest paths that pass through it.
3. When we pick vertex number k as an intermediate vertex, we already have
considered vertices {0, 1, 2, .. k-1} as intermediate vertices.
4. For every pair (i, j) of the source and destination vertices respectively, there
are two possible cases.
~k is not an intermediate vertex in shortest path from i to j. We keep the
value of dist[i][j] as it is.
~k is an intermediate vertex in shortest path from i to j. We update the
value of dist[i][j] as dist[i][k] + dist[k][j], if dist[i][j] > dist[i][k] +
dist[k][j]
5. Repeat this process for each vertex k until all intermediate possibilities have
been considered.

22. The max-flow min-cut theorem is a network flow theorem stating that the
maximum flow from the source node to the sink node in a given graph always equals
the minimum total weight of the edges which, if removed, disconnect the source
from the sink — i.e., the size of the minimum cut of the graph.

23. BFS Algorithm:

1: Initialization: Enqueue the given source vertex into a queue and mark it as
visited.
2: Exploration: While the queue is not empty:
    ~ Dequeue a node from the queue and visit it (e.g., print its value).
    ~ For each unvisited neighbor of the dequeued node:
        ~ Mark the neighbor as visited.
        ~ Enqueue the neighbor into the queue.
3: Termination: Repeat step 2 until the queue is empty.

24. Below are the steps for finding MST using Kruskal's algorithm:

1: Sort all the edges in non-decreasing order of their weight.
2: Pick the smallest edge. Check if it forms a cycle with the spanning tree formed
so far. If the cycle is not formed, include this edge. Else, discard it.
3: Repeat step 2 until there are (V-1) edges in the spanning tree.

25. P Class
The P in the P class stands for Polynomial Time. It is the collection of decision
problems(problems with a "yes" or "no" answer) that can be solved by a
deterministic machine (our computers) in polynomial time.

Features:

The solution to P problems is easy to find.
P is a class of computational problems that are solvable and tractable.
Tractable means that the problems can be solved in theory as well as in practice.
But the problems that can be solved in theory but not in practice are known as
intractable.
NP Class
The NP in NP class stands for Non-deterministic Polynomial Time. It is the
collection of decision problems that can be solved by a non-deterministic machine
(note that our computers are deterministic) in polynomial time.

Features:

The solutions of NP-class problems might be hard to find, since they are solved by
a non-deterministic machine, but they are easy to verify.
Problems in NP can be verified by a deterministic machine in polynomial time.

Co-NP Class
Co-NP stands for the complement of the NP class. If the answer to a problem in
Co-NP is "no," then there is a proof of that fact that can be checked in
polynomial time.

Features:

If a problem X is in NP, then its complement X' is in Co-NP.

For a problem to be in NP or Co-NP, there is no need to verify all answers in
polynomial time; it suffices to verify one particular answer ("yes" for NP,
"no" for Co-NP) in polynomial time.

NP-hard class
An NP-hard problem is at least as hard as the hardest problem in NP and it is a
class of problems such that every problem in NP reduces to NP-hard.

Features:

Not all NP-hard problems are in NP.
A claimed solution to an NP-hard problem may take more than polynomial time to
check.
A problem A is NP-hard if, for every problem L in NP, there exists a
polynomial-time reduction from L to A.
Some examples of NP-hard problems are:

Halting problem.
Quantified Boolean formulas.
No Hamiltonian cycle.

NP-complete class
A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems are
the hardest problems in NP.

Features:

NP-complete problems are special, as any problem in the NP class can be
transformed or reduced to an NP-complete problem in polynomial time.
If one could solve an NP-complete problem in polynomial time, then one could also
solve any NP problem in polynomial time.
Complexity Class | Characteristic feature
P                | Easily solvable in polynomial time.
NP               | "Yes" answers can be checked in polynomial time.
Co-NP            | "No" answers can be checked in polynomial time.
NP-hard          | At least as hard as the hardest problems in NP; not all are in NP.
NP-complete      | A problem that is both in NP and NP-hard.

26. Clique Problem Definition:

The Clique Problem is defined as finding a clique of a predetermined size, or the
largest possible clique, in a graph; its decision version is an NP-complete
problem.

Consider a graph with the following connections:

A - B, A - C, A - D
B - C, B - D
C - D
The subset {A, B, C, D} forms a clique, because each vertex connects to every other
vertex.

27. The 'Satisfiability Problem' refers to the task of testing whether a formula
in CNF (Conjunctive Normal Form) is truth-functionally satisfiable. It is an
important problem in theoretical computer science.

The Cook–Levin theorem, also known as Cook's theorem, states that the Boolean
satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP
can be reduced in polynomial time by a deterministic Turing machine to the Boolean
satisfiability problem.

28. Consider two problems: Problem A and Problem B. A polynomial-time reduction
from Problem A to Problem B is a mapping or transformation that takes instances of
Problem A and generates corresponding instances of Problem B. From this, we can
state the following:

If the answer to a particular instance of Problem A is “yes,” then the answer to
the corresponding instance of Problem B must likewise be “yes.”

If the answer to a particular instance of Problem A is “no,” then the answer to
the corresponding instance of Problem B must likewise be “no.”

The transformation can be computed in polynomial time.

A polynomial-time reduction keeps the decision-problem answer intact. If we can
reduce Problem A to Problem B, then Problem B is at least as hard as Problem A.

29. Algorithm (DFT):

1. Include all required libraries.
2. Prompt the user to input the number of points in the DFT.
3. Now you may initialize the arrays and accordingly ask for the input sequence.
This is purely due to the inability to declare an empty array in C. Dynamic memory
allocation is one of the solutions. However, simply reordering the prompt is a fair
solution in itself.
4. Implement two nested loops that calculate the value of X(k) for each value of k
and n. Keep in mind that Euler's formula is used to substitute for e^(−j2πkn/N);
this requires computing the real and imaginary parts of the expression separately.
5. Display the result as you run the calculation.
