Assignment 1 Gecho 1
Instructor: Simon H.
#1. Explain precisely the way we solve a problem using divide and conquer, a greedy algorithm, and
dynamic programming.
1. Divide - break the problem into a number of subproblems that are smaller instances of the
same problem.
2. Conquer - solve each subproblem recursively; every instance has the same structure as the
original problem, so the same procedure applies to it.
3. Combine - combine the solutions of the subproblems into a solution for the problem
at hand.
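The three steps can be sketched with merge sort as an illustrative example (this example is my own addition, not part of the original answer):

```python
def merge_sort(a):
    """Divide and conquer: split, sort each half recursively, merge."""
    if len(a) <= 1:                  # base case: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # divide + conquer
    right = merge_sort(a[mid:])
    return merge(left, right)        # combine

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```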
A greedy algorithm aims to make the optimal choice at each given moment. At every
step it makes the locally optimal choice without looking into the future to decide
what the global optimum is. It is an algorithmic paradigm that builds up a solution
piece by piece, always choosing the next piece that offers the most obvious and
immediate benefit. Because it is only concerned with local optimality, the overall
solution it produces may differ from the globally optimal one. Problems where
a locally optimal choice also leads to a globally optimal solution are therefore
the best fit for a greedy algorithm.
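A small sketch of the greedy idea, using coin change with US-style denominations (my own illustration; the coin values are an assumption, since greedy change-making is only optimal for canonical coin systems like this one):

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """At each step, take the largest coin that still fits (the locally
    optimal choice). Optimal for canonical denominations; can fail for
    arbitrary ones, which shows the greedy choice is only local."""
    used = []
    for c in coins:
        while amount >= c:
            amount -= c
            used.append(c)
    return used
```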
Dynamic programming breaks a problem into subproblems and stores their solutions.
The next time the same subproblem occurs, instead of recomputing its solution,
you simply look up the previously computed one. This saves computation time at
the expense of a (hopefully) modest expenditure in storage space, implemented
either with memoization or with tabulation.
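The tabulation flavor of this can be sketched with Fibonacci (an illustration I am adding, assuming the usual fib(0)=0, fib(1)=1 convention): the table is filled bottom-up, and each entry is a lookup of two previously stored entries rather than a recomputation.

```python
def fib_tab(n):
    """Bottom-up tabulation: fill a table once; each entry reuses
    previously stored entries instead of recomputing them."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # pure lookups
    return table[n]
```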
#2. If a problem can be solved by both a greedy algorithm and DP, which one is better to use? Why?
Both dynamic programming and greedy algorithms can be used on problems that
exhibit optimal substructure, which means that an optimal solution to the problem
contains within it optimal solutions to subproblems.
However, in order for the greedy solution to be optimal, the problem must also
exhibit the "greedy-choice property": a globally optimal solution can
be arrived at by making locally optimal (greedy) choices.
In contrast, dynamic programming is good for problems that exhibit not only optimal
substructure but also overlapping subproblems. This means that a particular
subproblem can be reached in multiple ways. The dynamic programming algorithm
calculates the value of each subproblem once and then reuses it every time the
subproblem is revisited, since previously computed sub-solutions are stored.
Consider, for example, the 0-1 knapsack problem versus the fractional knapsack problem.
Both exhibit the optimal substructure property, so the fractional variant can be solved
to optimality with either a greedy algorithm or a dynamic programming algorithm, although
greedy is faster in both time and storage. The 0-1 variant, however, requires dynamic
programming or some other non-greedy approach.
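The greedy solution to the fractional knapsack can be sketched as follows (my own illustration; items are assumed to be (value, weight) pairs): sorting by value-to-weight ratio and taking the best items first is provably optimal here, precisely because fractions of an item are allowed.

```python
def fractional_knapsack(items, capacity):
    """Greedy by value/weight ratio; optimal for the *fractional* variant.
    items: list of (value, weight) pairs. Returns max total value."""
    total = 0.0
    # Best ratio first: the greedy choice.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        take = min(weight, capacity)         # take all, or a fraction
        total += value * (take / weight)
        capacity -= take
        if capacity == 0:
            break
    return total
```

The same greedy choice fails for 0-1 knapsack because an item must be taken whole or not at all, so a locally best ratio can block a better global packing.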
#3. Explain memoization and tabulation implementation techniques in DP using any example.
How are they different from the plain recursive approach? Write the advantages and disadvantages of each approach.
Memoization (top-down DP) is an optimization technique used to speed up
programs by storing the results of expensive function calls and returning the
cached result when the same inputs occur again.
For example:
If I need to calculate the Fibonacci number fib(a), I would just call it, and
it would call fib(a) = fib(a-1) + fib(a-2), which would call fib(a-1) = fib(a-2) + fib(a-3),
...etc..., down to fib(2) = 1 + 0. Then fib(3) resolves, but it doesn't need to
recalculate fib(2), because fib(2) is cached.
Its table is filled on demand, although the recursive calls add some overhead.
Memoization is therefore preferable when not every subproblem is needed, since the
subproblems are solved lazily, i.e. precisely the computations needed are carried out.
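The memoized fib(a) described above can be sketched like this (my own illustration, using Python's built-in `functools.lru_cache` as the cache):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down memoization: each fib(k) is computed once; later calls
    hit the cache, so fib(3) never recomputes fib(2)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache this recursion takes exponential time; with it, each value from fib(2) up to fib(n) is computed exactly once.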
#5. Does changing the order of the Fibonacci lookup-table update, i.e., lookup[n] = lookup[n-2] + lookup[n-1]
instead of lookup[n] = lookup[n-1] + lookup[n-2], have an effect in the memoization/top-down
approach? If yes, explain.