
MU-MIT

Department of Computer Science and Engineering


DESIGN AND ANALYSIS OF ALGORITHMS
ASSIGNMENT-1
By: Getachew Zemichael
Id: mit/ur/1061/10
Dep: CSE, 3rd year

Instructor: Simon H.

Date: 4/04/2012 E.C.


Answer the following questions:

#1. Explain precisely how we solve a problem using divide and conquer, a greedy algorithm, and
dynamic programming.

 Divide and conquer is an algorithm design paradigm based on multi-branched recursion. It
works by recursively breaking a problem down into two or more sub-problems of the same or a
related type, until these become simple enough to be solved directly. The solutions to the
sub-problems are then combined to give a solution to the original problem. It follows these steps:

1. Divide: split the problem into a number of sub-problems that are smaller instances of the
same problem.
2. Conquer: solve each sub-problem recursively; each instance of a sub-problem is identical
in nature to the original.
3. Combine: combine the solutions of the sub-problems to solve the problem at hand.
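The three steps above can be sketched with merge sort, a standard divide-and-conquer example (an illustration chosen here, not one given in the assignment):

```python
def merge_sort(a):
    """Divide and conquer: split the list, sort each half recursively, merge."""
    if len(a) <= 1:                   # base case: simple enough to solve directly
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # Divide + Conquer on the first half
    right = merge_sort(a[mid:])       # Divide + Conquer on the second half
    # Combine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Each recursive call works on an instance half the size, and the merge step is the Combine phase.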

 Greedy algorithms aim to make the optimal choice at each given moment. At each step the
algorithm chooses the locally optimal option, without looking into the future to decide on the
globally optimal solution; it is only concerned with the solution that is optimal locally. This
means the overall optimal solution may differ from the solution the algorithm chooses. In other
words, a greedy algorithm is a paradigm that builds up a solution piece by piece, always
choosing the next piece that offers the most obvious and immediate benefit. So problems where
choosing the locally optimal option also leads to the global solution are the best fit for greedy
algorithms.

How? At every step the algorithm asks: at this exact moment in time, what is the optimal
choice to make?
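One sketch of this idea is activity selection (an assumed example, not part of the assignment): at every step, greedily take the activity that finishes earliest among those compatible with what has already been chosen.

```python
def select_activities(intervals):
    """Greedy activity selection: repeatedly take the activity that finishes
    earliest and does not overlap the last one chosen."""
    chosen = []
    last_finish = float("-inf")
    # Sort by finish time, then make one greedy pass.
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:      # locally optimal choice at this moment
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

Because this problem has the greedy-choice property, the locally optimal pick at each step here also yields a globally optimal schedule.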

 Dynamic programming amounts to breaking an optimization problem down into simpler
sub-problems, and storing the solution to each sub-problem so that each one is solved only
once. In other words, DP is a method for solving problems by:

1. breaking them down into a collection of simpler sub-problems,

2. solving each of those sub-problems just once, and

3. storing their solutions. The next time the same sub-problem occurs, instead
of recomputing its solution, you simply look up the previously computed
solution. This saves computation time at the expense of a (hopefully)
modest expenditure in storage space, using either the memoization or the
tabulation method.
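A minimal sketch of these three steps is minimum-coin change (a hypothetical example, assuming an unlimited supply of each denomination): each amount is broken into smaller amounts, and every sub-answer is cached so it is computed only once.

```python
from functools import lru_cache

def min_coins(coins, amount):
    """DP: break `amount` into smaller amounts; cache each sub-solution."""
    @lru_cache(maxsize=None)          # step 3: store each sub-solution once
    def solve(rest):
        if rest == 0:                 # trivial sub-problem
            return 0
        best = float("inf")
        for c in coins:               # steps 1-2: recurse on smaller amounts
            if c <= rest:
                best = min(best, 1 + solve(rest - c))
        return best
    result = solve(amount)
    return -1 if result == float("inf") else result
```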

#2. If a problem is solved by both greedy algorithm and DP, which one is better to use? Why?

Both dynamic programming and greedy algorithms can be used on problems that
exhibit optimal substructure, which is defined by saying that an optimal solution to the
problem contains within it optimal solutions to sub-problems.

However, in order for the greedy solution to be optimal, the problem must also
exhibit what they call the "greedy-choice property"; i.e., a globally optimal solution can
be arrived at by making locally optimal (greedy) choices.

In contrast, dynamic programming is good for problems that exhibit not only optimal
substructure but also overlapping sub-problems. This means that a particular sub-problem
can be reached in multiple ways. The dynamic programming algorithm calculates the value
of each sub-problem once and can then reuse it every time the algorithm revisits it, since
the previously computed sub-solutions have been stored.

For example, consider the 0-1 knapsack versus the fractional knapsack problem. Both exhibit
the optimal substructure property, so the second can be solved to optimality with either a
greedy algorithm or a dynamic programming algorithm, although greedy would be faster in
both storage and time complexity. The first, however, requires dynamic programming or
some other non-greedy approach.
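A sketch contrasting the two (with an assumed item list of (value, weight) pairs): greedy by value density solves the fractional version, while a DP table is needed for the 0-1 version.

```python
def fractional_knapsack(items, capacity):
    """Greedy: take items in decreasing value density; split the last one."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(weight, capacity)              # take as much as still fits
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

def knapsack_01(items, capacity):
    """DP: table[c] = best value achievable with capacity c; no splitting."""
    table = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            table[c] = max(table[c], table[c - weight] + value)
    return table[capacity]
```

On the same items, the greedy answer to the fractional problem can exceed the 0-1 optimum, which is exactly why greedy fails for 0-1 knapsack.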

#3. Explain the memoization and tabulation implementation techniques in DP using an example.
How do they differ from the recursive approach? Write the advantages and disadvantages of each approach.
 Memoization method (top-down DP) is an optimization technique used to speed up
programs by storing the results of expensive function calls and returning the
cached result when the same inputs occur again.

Memoization = Recursion + Caching

For example:
If I need to calculate the Fibonacci number fib(a), I would just call it, and
it would call fib(a) = fib(a-1) + fib(a-2), which would call fib(a-1) = fib(a-2) + fib(a-3),
...etc..., down to fib(2) = 1 + 0. fib(3) is then resolved, but it does not need to
recalculate fib(2), because that value was cached.
Its table is filled on demand, though the recursive calls add some overhead.
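The example above can be sketched with an explicit lookup table that the recursion fills on demand:

```python
lookup = {0: 0, 1: 1}    # base cases already in the table

def fib(n):
    """Top-down DP: recurse, but store each result in the lookup table
    the first time it is computed, and return the cached value after that."""
    if n not in lookup:
        lookup[n] = fib(n - 1) + fib(n - 2)   # filled on demand
    return lookup[n]
```

Only the entries the recursion actually reaches are ever computed, which is the "lazy" property discussed in question #4.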

 Tabulation method (bottom-up DP) is an approach where we solve a dynamic
programming problem by first filling up a memo table, and then computing the
solution to the original problem from the results in this table.
For example:
If I am going to compute Fibonacci, I choose to calculate the numbers in this
order: fib(2), fib(3), fib(4), ..., caching every value so that I can compute the
next ones simply. I can think of it as filling up a table (another form of caching).
This method necessarily computes all the sub-problems, even though some of
them may not be used in the overall solution. But it minimizes the space used.
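The bottom-up order described above can be sketched as:

```python
def fib_tab(n):
    """Bottom-up DP: fill the table in order fib(0), fib(1), ..., fib(n)."""
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):          # every entry is computed exactly once
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

No recursion is involved; every sub-problem up to n is solved whether or not a shorter path to the answer existed.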
 Recursive approach: a function that calls itself until some base case is
reached is said to be a recursive function. Since this method computes a
solution by calling itself again and again on the same sub-problems, it is not
efficient in time complexity or in memory usage.
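For contrast, a plain recursive Fibonacci with no caching recomputes the same sub-problems over and over:

```python
def fib_naive(n):
    """Plain recursion: fib_naive(n-2) is recomputed again inside the
    fib_naive(n-1) call, so the number of calls grows exponentially in n."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)
```

Memoization turns this exponential call tree into a linear number of distinct computations.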
#4. How would you choose between memoization and tabulation for a problem?
The process of solving optimization problems using DP involves either a bottom-up or a
top-down approach, i.e. tabulation and memoization respectively.
For a specific problem that can be solved using both methods, I would choose the second
one (memoization) because:
1. In tabulation (the bottom-up approach), while filling a table of sub-solutions, it is
unsatisfying that the solutions to some of the smaller sub-problems are often not necessary
for getting a solution to the given problem.
2. A memoization DP algorithm may be less space-efficient than a space-optimized
version of a bottom-up algorithm, but the two have the same time efficiency.

Therefore memoization is preferable, since the sub-problems are solved lazily, i.e.
precisely the computations that are needed are carried out.
#5. Does changing the order of the Fibonacci lookup table, i.e. lookup[n] = lookup[n-2] + lookup[n-1]
instead of lookup[n] = lookup[n-1] + lookup[n-2], have an effect in the memoization/top-down
approach? If yes, explain.

Yes, changing the order of the Fibonacci lookup table would have an effect in the
memoization/top-down dynamic programming approach. In the standard memoization approach
for Fibonacci, we calculate fib(n) as fib(n) = fib(n-1) + fib(n-2): when calculating fib(n),
we first recurse on fib(n-1), and that call fills the table with all the smaller values, so
the later call to fib(n-2) is a cache hit. However, if we change the lookup table to be
lookup[n] = lookup[n-2] + lookup[n-1] instead, then when calculating fib(n) we first need the
value of fib(n-2), so the fib(n-2) subtree fills the cache first and the subsequent fib(n-1)
call finds most of its values already stored. The final result is the same, since addition is
commutative; what changes is the order in which the recursive calls are made and the table
entries are filled.
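A sketch of the swapped variant (an assumed demonstration; both operand orders return the same values, only the fill order of the table differs):

```python
def fib_swapped(n, lookup={0: 0, 1: 1}):
    """Memoized Fibonacci with the operand order reversed: fib(n-2) is
    recursed on first, so its subtree fills the cache before fib(n-1)
    is evaluated (which then mostly hits cached entries)."""
    if n not in lookup:
        lookup[n] = fib_swapped(n - 2) + fib_swapped(n - 1)
    return lookup[n]
```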
