DAA_ans
1. The recursion tree method is often preferred over the substitution method for solving a
recurrence relation. In the recursion tree method, we draw a recurrence tree and
calculate the time taken by every level of the tree; finally, we sum the work done at
all levels. To draw the recursion tree, we start from the given recurrence and
keep expanding until we find a pattern among the levels. The pattern is typically an
arithmetic or geometric series. In the substitution method, by contrast, we make a guess
for the solution and then use mathematical induction to prove whether the guess is
correct.
2. Master's Theorem:
The master method is a formula for solving recurrence relations of the form
T(n) = aT(n/b) + f(n),
where a >= 1 and b > 1 are constants and f(n) is asymptotically positive. Comparing
f(n) with n^(log_b a):
Case 1: if f(n) = O(n^(log_b a - ϵ)), then T(n) = Θ(n^(log_b a)).
Case 2: if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
Case 3: if f(n) = Ω(n^(log_b a + ϵ)) and a·f(n/b) <= c·f(n) for some constant c < 1
and all sufficiently large n, then T(n) = Θ(f(n)).
Here ϵ > 0 is a constant.
3. If a function calls itself and that recursive call is the last statement in
the function, it is called tail recursion. After that call the function performs
nothing else, which is why it is called tail recursion.
void fun(int n)
{
    if (n > 0)
    {
        printf("%d", n);
        fun(n - 1);   /* last statement: a tail call */
    }
}

fun(3);  /* prints 321 */
The tail recursive functions are considered better than non-tail recursive
functions as tail-recursion can be optimized by the compiler.
4. Turing machines are an important tool for studying the limits of computation and
for understanding the foundations of computer science. They provide a simple yet
powerful model of computation that has been widely used in research and has had a
profound impact on our understanding of algorithms and computation.
5. Union-Find is an algorithm that implements find and union operations on a disjoint-set
data structure. Find returns the root parent of an element and thereby determines
whether two elements are in the same set or not. If two elements are in different sets,
union merges the smaller set into the larger set. At the end, we obtain the connected
components of a graph.
6. A greedy algorithm selects the best option available at each step, without taking
the decision's long-term implications into account.
~ *No backtracking*
A greedy algorithm’s decisions are final and cannot be changed or undone after they
have been made. The algorithm keeps going without going back to its earlier
choices.
~ *Iterative process*
7. Given n items, where each item has some weight and profit associated with it, and
a bag with capacity W (i.e., the bag can hold at most weight W), the task is to put
items into the bag such that the sum of the profits associated with them is the
maximum possible.
Note: The constraint here is that we either put an item completely into the bag or
do not put it at all (it is not possible to put part of an item into the bag).
Example: W = 4, profit[] = {1, 2, 3}, weight[] = {4, 5, 1}. Output: 3.
Explanation: There are two items whose weight is less than or equal to 4. If we
select the item with weight 4, the possible profit is 1, and if we select the item
with weight 1, the possible profit is 3. So the maximum possible profit is 3. Note
that we cannot put both the items with weights 4 and 1 together, as the capacity of
the bag is 4.
8. The primary reason the greedy algorithm fails for the 0-1 Knapsack problem is
that it does not consider the possibility of excluding certain items to achieve a
better overall solution. The greedy approach only focuses on immediate gains,
choosing items with high value-to-weight ratios without considering the potential
impact on future decisions.
Consider a backpack with a weight capacity of 4, and items with the following
weights and values:
Item A: weight 3, value 1.8
Item B: weight 2, value 1.0
Item C: weight 2, value 1.0
If you apply greedy by value, you will first select item A, so the residual weight
capacity will be 1. Since both items B and C weigh more than that residual capacity,
you won't be able to add any more items. This is a feasible solution but not an
optimal one.
Greedy based on value per weight would also choose item A first (ratio 0.6 versus
0.5) and then quit, as the residual capacity is not enough to accommodate any more
items, so the total value reached is 1.8.
The optimal solution, however, is to choose items B and C, which together exactly
weigh the full capacity and have a combined value of 2.
9. If a problem can be divided into subproblems, which in turn are divided into
smaller subproblems, and if these subproblems overlap, then the solutions to the
subproblems can be saved for future reference. In this way, the efficiency of the
computation is enhanced. This method of solving problems is referred to as dynamic
programming.
Such problems involve repeatedly calculating the value of the same subproblems to
find the optimum solution. Consider computing the nth Fibonacci number:
1. If n = 0, return 0.
2. If n = 1, return 1.
3. Else, return the sum of the two preceding numbers.
Hence, we have the sequence 0, 1, 1, 2, 3. Here, we have reused the results of the
previous steps, as shown below. This is called the dynamic programming approach.
F(0) = 0
F(1) = 1
F(2) = F(1) + F(0)
F(3) = F(2) + F(1)
F(4) = F(3) + F(2)
10. Matrix Chain Multiplication Algorithm:
1: **Initialize:**
   - Read the dimension array p[0..n], where matrix A_i has dimensions p[i-1] x p[i].
   - Create a 2D array m where m[i][j] will hold the minimum number of
     scalar multiplications needed to compute the
     matrix product A_i * A_{i+1} * … * A_j.
   - Create a 2D array s where s[i][j] will hold the index of the matrix
     after which the optimal split occurs.
2: **Base Case:**
   - For i = 1 to n, set m[i][i] = 0 (the cost of multiplying one matrix is zero).
3: **Fill the Table:**
   - For chain lengths l = 2 to n and each i with j = i + l - 1, compute
     m[i][j] = min over i <= k < j of (m[i][k] + m[k+1][j] + p[i-1] * p[k] * p[j]),
     recording the minimizing k in s[i][j].
4: **Return Results:**
   - The minimum number of scalar multiplications needed is found in m[1][n].
   - The optimal parenthesization can be retrieved from the s array
     using a recursive function or a stack-based approach.
11. N-Queens (backtracking):
1: Create an n x n board with every position initialized to 0 (no queen placed).
2: Start with the first row (row 0) and try placing a queen in each column.
3: For each placement, check if the position is safe (not attacked by any
previously placed queen). A position is unsafe if another queen is in the same
column or on the same diagonal.
4: If a safe position is found, place the queen (set the position to 1) and recursively
try to place queens in subsequent rows. Otherwise, backtrack by removing the queen
and trying the next column.
12. Bellman-Ford algorithm:
1: Set the initial distance to zero for the source vertex, and set initial distances
to infinity for all other vertices.
2: For each edge, check if a shorter distance can be calculated, and update the
distance if the calculated distance is shorter.
3: Check all edges (step 2) V − 1 times, i.e., one less than the number of
vertices V.
13. The Union by Rank technique is used to optimize the Union operation by ensuring
that the smaller tree is always attached to the root of the larger tree. This
approach prevents the trees from becoming too imbalanced, which would lead to
inefficient find operations.
Example: without union by rank, repeatedly attaching one root under the other can
produce a skewed tree:
Do Union(0, 1)
1 2 3
/
0
Do Union(1, 2)
2 3
/
1
/
0
Do Union(2, 3)
3
/
2
/
1
/
0
Graph coloring using backtracking:
1: Create a recursive function that takes the graph, current index, number of
vertices, and color array.
2: If the current index is equal to the number of vertices, print the color
configuration in the color array and return true.
3: Otherwise, try assigning each color from 1 to m to the current vertex; for every
assignment that does not conflict with an adjacent vertex, recursively call the
function for the next index.
4: If any recursive call returns true, break the loop and return true.
If no recursive call returns true, return false.
A set of jobs with deadlines and profits is taken as input by the job scheduling
algorithm, and a scheduled subset of jobs with maximum profit is obtained as the
final output.
Algorithm:
Step 1 − Find the maximum deadline value from the input set of jobs.
Step 2 − Arrange the jobs in descending order of their profits.
Step 3 − Select jobs in that order, assigning each selected job to the latest free
time slot at or before its deadline (never exceeding the maximum deadline).
Step 4 − The selected set of jobs is the output.
The heap property says that the value of a parent is either greater than or equal
to (in a max heap) or less than or equal to (in a min heap) the value of its
child.
1: Start with an empty path and choose any vertex as the starting point.
2: Add the starting vertex to the path.
3: Recursively build the path by choosing the next unvisited vertex and adding it
to the path.
4: If the path contains all vertices and the last vertex has an edge to the
starting vertex, we have found a Hamiltonian cycle.
5: If the path does not meet the criteria for a Hamiltonian cycle, backtrack by
removing the last vertex from the path and trying a different unvisited vertex.
6: Continue this process until a Hamiltonian cycle is found or all possibilities
have been exhausted.
19. External sorting is a category of sorting algorithms able to sort huge
amounts of data. This type of sorting is applied to data sets that occupy more
memory than can be held in main memory (RAM) and are therefore stored in secondary
memory (hard disk).
The idea used in external sorting is quite similar to merge sort: it also has two
phases. In the sort phase, data sets small enough to fit in memory are sorted, and
in the merge phase these sorted runs are combined into a single data set.
External sorting: for a huge data set that cannot be processed in a single pass,
the data is divided into small chunks. These chunks are sorted and then stored in
data files.
Algorithm:
Step 1: Read the input data from the file in chunks that fit in memory.
Step 2: Sort each of these chunks (e.g., using merge sort) and write it back to disk.
Step 3: Merge the sorted chunks into a single sorted output.
20. Bellman-Ford is a single-source shortest path algorithm based on the bottom-up
approach of dynamic programming. It starts from a single vertex and calculates the
shortest path from that starting vertex to all the nodes in a weighted graph. There
are other shortest-path algorithms, such as Dijkstra's algorithm, but Dijkstra works
correctly only when all edge weights in the graph are non-negative; it does not
provide an accurate solution when there is an edge with a negative weight.
Bellman-Ford's algorithm, by contrast, handles graphs that have negative edge
weights and is guaranteed to calculate the correct shortest paths between vertices,
as long as no negative cycle is reachable from the source. Negative edge weights
can also create a negative cycle in the graph, and Bellman-Ford can detect such a
cycle, which Dijkstra cannot, so the Bellman-Ford algorithm resolves this problem.
22. The max-flow min-cut theorem is a network flow theorem which says that the
maximum flow from the source node to the sink node in a given graph is always equal
to the minimum total weight of the edges which, if removed, disconnect the graph
into two components, i.e., the size of the minimum cut of the graph.
23. Breadth-First Search (BFS):
1: Initialization: Enqueue the given source vertex into a queue and mark it as
visited.
2: Exploration: While the queue is not empty:
~Dequeue a node from the queue and visit it (e.g., print its value).
~For each unvisited neighbor of the dequeued node:
~Enqueue the neighbor into the queue.
~Mark the neighbor as visited.
3: Termination: Repeat step 2 until the queue is empty.
24. Below are the steps for finding an MST using Kruskal's algorithm:
1: Sort all the edges in non-decreasing order of their weight.
2: Pick the smallest edge. Check whether it forms a cycle with the spanning tree
formed so far (using Union-Find); if no cycle is formed, include this edge,
otherwise discard it.
3: Repeat step 2 until there are V − 1 edges in the spanning tree.
25. P Class
The P in the P class stands for Polynomial Time. It is the collection of decision
problems(problems with a "yes" or "no" answer) that can be solved by a
deterministic machine (our computers) in polynomial time.
Features:
Problems in P can be both solved and verified in polynomial time by a
deterministic machine.
NP Class
The NP in the NP class stands for Non-deterministic Polynomial time. It is the
collection of decision problems that can be solved in polynomial time by a
non-deterministic machine.
Features:
The solutions of NP-class problems might be hard to find, since they are solved by
a non-deterministic machine, but the solutions are easy to verify.
Problems of NP can be verified by a deterministic machine in polynomial time.
Co-NP Class
Co-NP stands for the complement of NP Class. It means if the answer to a problem in
Co-NP is No, then there is proof that can be checked in polynomial time.
Features:
If a problem X is in NP, then its complement problem is in Co-NP.
NP-hard class
An NP-hard problem is at least as hard as the hardest problem in NP and it is a
class of problems such that every problem in NP reduces to NP-hard.
Examples of NP-hard problems:
Halting problem.
Qualified Boolean formulas.
No Hamiltonian cycle.
NP-complete class
A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems are
the hardest problems in NP.
Features:
NP-complete problems are in NP, so their solutions can be verified in polynomial
time, and every problem in NP can be reduced to any NP-complete problem in
polynomial time.
26. Clique example: consider a graph with vertices A, B, C, D and edges
A - B, A - C, A - D
B - C, B - D
C - D
The subset {A, B, C, D} forms a clique, because each vertex connects to every other
vertex.
27. The Cook–Levin theorem, also known as Cook's theorem, states that the Boolean
satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP
can be reduced in polynomial time by a deterministic Turing machine to the Boolean
satisfiability problem.
28. Consider two problems: Problem A and Problem B. A polynomial-time reduction
from Problem A to Problem B is a mapping, computable in polynomial time, that takes
instances of Problem A and generates corresponding instances of Problem B. From
this, we can derive the following:
If the answer to a particular instance of Problem A is "yes," then the answer to
the corresponding instance of Problem B must likewise be "yes."
If the answer to a particular instance of Problem A is "no," then the answer to
the corresponding instance of Problem B must likewise be "no."