Exercise 2 Solutions
These exercises are for your own benefit. Feel free to collaborate and share your answers with other students. There are many problems on this set; solve as many as you can, and ask for help if you get stuck for too long. Problems marked * are more difficult but also more fun :).
These problems are taken from various sources at EPFL and on the Internet, too numerous to cite individually.
1 (Exercise 2.2-4 from the book) How can we modify almost any algorithm to have a good best-case running time?
Solution: Modify the algorithm so that it first tests whether the input satisfies some special-case condition and, if it does, outputs a pre-computed answer. The best-case running time is generally not a good measure of an algorithm.
For example, we can modify any sorting algorithm as follows: Given an array of n numbers
A[], we can check to see if the numbers are already sorted. If so, we can output the array. This
takes O(n) time, so the best-case running time is O(n). However, this modification can be made
for any sorting algorithm, so it tells us nothing about the actual algorithm.
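As an illustrative sketch (the wrapper and its name are mine), the check-then-fall-back modification might look like:

```python
def sort_with_fast_best_case(a):
    # Special case: if the input is already sorted, detect it in O(n) time
    # and return immediately, giving an O(n) best case.
    if all(a[i] <= a[i + 1] for i in range(len(a) - 1)):
        return list(a)
    # Otherwise fall back to the real sorting work (Python's built-in sort
    # here, standing in for any sorting algorithm).
    return sorted(a)
```

The point of the exercise is that this wrapper works around any sorting algorithm, which is exactly why it says nothing about the algorithm itself.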
2 (Exercise 2.3-4 in the book) We can express insertion sort as a recursive procedure as follows.
In order to sort A[1..n], we recursively sort A[1..n − 1] and then insert A[n] into the sorted array A[1..n − 1].
2a Write a recurrence for the worst-case running time of this recursive version of insertion
sort.
2b Solve the recurrence to give a tight asymptotic bound (in terms of the Θ-notation).
Solution:
2a. Since the problem size decreases by 1 in each recursive call, there will be n recursive calls. In the worst case, inserting the new element at each recursive step costs Θ(n) (the time needed to check all elements of the sorted prefix before inserting the new one). So we obtain the following recurrence:
T(n) = T(n − 1) + Θ(n).
2b. We make n recursive calls, and each call incurs a cost of at most cn for some constant c > 0, so a good guess for a tight asymptotic bound on the worst-case running time is Θ(n²). The following calculation supports this guess:
T(n) = T(n − 1) + cn
     = T(n − 2) + cn + c(n − 1)
     = ...
     = cn + c(n − 1) + c(n − 2) + ... + 3c + 2c + c
     = c · Σ_{i=1}^{n} i
     = c · n(n + 1)/2 = Θ(n²).
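The recursive procedure described above can be sketched in Python (the function name is mine; the insertion step is the Θ(n) worst-case scan from the analysis):

```python
def insertion_sort_recursive(a, n=None):
    """Sort a[0:n] in place: recursively sort a[0:n-1], then insert a[n-1]."""
    if n is None:
        n = len(a)
    if n <= 1:                      # base case: zero or one element
        return a
    insertion_sort_recursive(a, n - 1)   # the T(n-1) recursive call
    key = a[n - 1]
    i = n - 2
    # Θ(n) worst-case insertion of a[n-1] into the sorted prefix a[0:n-1].
    while i >= 0 and a[i] > key:
        a[i + 1] = a[i]
        i -= 1
    a[i + 1] = key
    return a
```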
3 Use the substitution method (mathematical induction) to solve the following recurrences
3a Show that when n is an exact power of 2, the solution of the recurrence
T(n) = 2             if n = 2,
T(n) = 2T(n/2) + n   if n = 2^k, for k > 1
is T(n) = n log2 n.
3b Consider the recurrence T (n) = 8T (n/2) + Θ(n2 ).
• Draw the recursion tree to make a qualified guess of the value of T (n) in terms of the
Θ-notation.
• (half a ∗) Then verify this guess using the substitution method (i.e., mathematical
induction). Ask the TA’s for a hint if you get stuck for too long.
Solution:
3a. We will argue by induction on n that T(n) = n log2 n for all n = 2^k with k ≥ 1. The base case is n = 2, where T(2) = 2 = 2 log2 2. For larger n, the inductive hypothesis is
T(n/2) = (n/2) log2(n/2)
(here we need n/2 ≥ 2, i.e., n ≥ 4). Using the inductive hypothesis, we get:
T(n) = 2T(n/2) + n
     = 2(n/2) log2(n/2) + n
     = n(log2 n − log2 2) + n
     = n log2 n − n + n
     = n log2 n.
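The closed form can also be checked numerically against the recurrence; a small sketch (for powers of two, log2 is exact in floating point, so the comparison is exact):

```python
import math

def T(n):
    # The recurrence from 3a, defined for n an exact power of 2, n >= 2.
    if n == 2:
        return 2
    return 2 * T(n // 2) + n

# Verify T(n) == n * log2(n) for n = 2, 4, ..., 1024.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * math.log2(n)
```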
3b. Using the recursion tree method, we make an educated guess that T(n) = Θ(n³).
[Recursion tree for T(n) = 8T(n/2) + cn²: the root (level 0) costs cn²; level 1 has 8 nodes of cost c(n/2)² each, for a total of 8 · c(n/2)² = 2cn²; in general, level i has 8^i nodes of cost c(n/2^i)² each, for a total of 8^i · c(n/2^i)² = 2^i · cn²; the tree has log2 n levels, and the 8^(log2 n) = n³ leaves each cost T(1). Summing the internal levels gives Σ_{i=0}^{log2 n − 1} 2^i cn² = cn²(n − 1), which together with the Θ(n³) leaf cost suggests T(n) = Θ(n³).]
To verify our guess, we use the substitution method. In order to show that T(n) = Θ(n³), we have to show that T(n) = Ω(n³) and T(n) = O(n³).
For the O(n³) part: The recurrence implies that T(n) ≤ 8T(n/2) + cn² for a constant c > 0 if n > 1, and T(1) ≤ c. We will argue by induction on n that T(n) = O(n³). Specifically, we will show by strong induction on n that T(n) ≤ dn³ − d′n² for some constants d, d′ > 0. Using the recurrence and the inductive hypothesis, we get:
T(n) ≤ 8T(n/2) + cn²
     ≤ 8(d(n/2)³ − d′(n/2)²) + cn²
     = dn³ − 2d′n² + cn²
     = dn³ − d′n² − (d′ − c)n²
     ≤ dn³ − d′n²,
where the last inequality holds as long as d′ ≥ c. The base case is provided by n = 1, where T(n) = T(1) ≤ c, and is satisfied by choosing d, d′ so that d − d′ ≥ c. Thus, setting d = 2c and d′ = c satisfies both constraints and completes the proof.
For the Ω(n³) part: The recurrence implies that T(n) ≥ 8T(n/2) + cn² for a constant c > 0, and T(1) ≥ c. We will argue by induction on n that T(n) = Ω(n³). Specifically, we will show by strong induction on n that T(n) ≥ dn³ for a constant d > 0. Using the recurrence and the inductive hypothesis, we get:
T(n) ≥ 8T(n/2) + cn²
     ≥ 8d(n/2)³ + cn²
     = dn³ + cn²
     ≥ dn³.
The base case n = 1 requires T(1) ≥ d, which holds for any d ≤ c, so choosing d = c completes the proof.
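One can also sanity-check the guess numerically before proving it; a small sketch, taking the Θ(n²) term to be exactly n² and T(1) = 1 (both are assumptions made only for this experiment):

```python
def T(n):
    # An instance of T(n) = 8T(n/2) + Θ(n²), with the Θ(n²) term taken
    # as exactly n² and T(1) = 1.
    if n == 1:
        return 1
    return 8 * T(n // 2) + n * n

# If T(n) = Θ(n³), the ratio T(n)/n³ should approach a constant.
ratios = [T(2 ** k) / (2 ** k) ** 3 for k in range(1, 12)]
```

For this instance the ratio converges to 2, consistent with the Θ(n³) bound.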
4 (Exercise 4.5-1 in the book) Use the master method to give tight asymptotic bounds for the
following recurrences.
4a T (n) = 2T (n/4) + 1.
4b T(n) = 2T(n/4) + √n.
4c T (n) = 2T (n/4) + n.
4d T (n) = 2T (n/4) + n2 .
Solution:
4a. The recurrence is of the form T(n) = aT(n/b) + f(n) where a = 2, b = 4, and f(n) = 1 for all n (i.e., constant). Letting c = 0, we have f(n) = Θ(1) = Θ(n^c) and c = 0 < 1/2 = log4(2) = logb(a). It follows from the first case of the master theorem that T(n) = Θ(n^(logb(a))) = Θ(n^(log4(2))) = Θ(n^(1/2)) = Θ(√n).
4b. We have a = 2, b = 4, and f(n) = n^(1/2) = Θ(n^c) for c = 1/2 = log4(2) = logb(a). It follows from the second case of the master theorem that T(n) = Θ(n^(logb(a)) log(n)) = Θ(√n log(n)).
4c. We have a = 2, b = 4, and f(n) = n = Θ(n^c) for c = 1 > 1/2 = log4(2) = logb(a), and the regularity condition holds: a·f(n/b) = 2(n/4) = n/2 ≤ (1/2)f(n). It follows from the third case of the master theorem that T(n) = Θ(f(n)) = Θ(n).
4d. We have a = 2, b = 4, and f(n) = n² = Θ(n^c) for c = 2 > 1/2 = log4(2) = logb(a), and the regularity condition holds: a·f(n/b) = 2(n/4)² = n²/8 ≤ (1/8)f(n). It follows from the third case of the master theorem that T(n) = Θ(f(n)) = Θ(n²).
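These bounds can be sanity-checked numerically; a sketch, assuming T(1) = 1 and taking the driving functions exactly (no hidden constants):

```python
def T(f, n):
    # Numerically evaluate T(n) = 2*T(n/4) + f(n) with T(1) = 1
    # (the base value is an assumption; it only affects constants).
    if n <= 1:
        return 1
    return 2 * T(f, n // 4) + f(n)

# 4a: f(n) = 1 gives T(n) = Θ(√n); for n = 4^k, √n = 2^k exactly.
r_a = [T(lambda m: 1, 4 ** k) / 2 ** k for k in (7, 8, 9)]
# 4c: f(n) = n gives T(n) = Θ(n).
r_c = [T(lambda m: m, 4 ** k) / 4 ** k for k in (7, 8, 9)]
# Both ratio sequences should stabilise as n grows.
```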
Unknown(A, ℓ, r)
    if ℓ > r
        return −∞
    else if ℓ = r
        return A[ℓ]
    else
        q ← ℓ + ⌊(r − ℓ)/3⌋
        ansL ← Unknown(A, ℓ, q)
        ansR ← Unknown(A, q + 1, r)
        if ansL > ansR
            return ansL
        else
            return ansR
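For reference, a direct Python translation of the pseudocode (0-indexed here; the procedure computes the maximum of A[ℓ..r] by splitting at roughly one third):

```python
import math

def unknown(A, l, r):
    # Returns the maximum of A[l..r] (inclusive), or -infinity for an
    # empty range, mirroring the pseudocode above.
    if l > r:
        return -math.inf
    if l == r:
        return A[l]
    q = l + (r - l) // 3            # split point: floor(one third)
    ans_left = unknown(A, l, q)     # recurse on the first ~n/3 elements
    ans_right = unknown(A, q + 1, r)  # recurse on the remaining ~2n/3
    return ans_left if ans_left > ans_right else ans_right
```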
Solution: T(n) = T(n/3) + T(2n/3) + Θ(1), T(1) = Θ(1).
5c (12pts) Prove tight asymptotic bounds of T (n) using the substitution method, i.e., show
that T (n) = Θ(f (n)) for some function f .
Solution: The function T(n) satisfies
T(n) = T(n/3) + T(2n/3) + 1.
We guess that T(n) = Θ(n) and prove this using the substitution method. First, we prove an upper bound on T(n). We assume the following inductive hypothesis for all m < n:
T(m) ≤ c1·m − 1
for some constant c1 > 0. We prove that the hypothesis holds for n (assuming, for simplicity, that n is divisible by 3):
T(n) = T(n/3) + T(2n/3) + 1
     ≤ (c1·n/3 − 1) + (c1·2n/3 − 1) + 1
     = c1·n − 1.
For the lower bound, we assume T(m) ≥ c2·m for all m < n and some constant c2 > 0, and get
T(n) = T(n/3) + T(2n/3) + 1 ≥ c2·n/3 + c2·2n/3 + 1 ≥ c2·n.
Hence T(n) = Θ(n).
6 (*, Problem 2-1 in the book) Although merge sort runs in Θ(n log n) worst-case time and insertion
sort runs in Θ(n2 ) worst-case time, the constant factors in insertion sort make it faster for small
n. Thus, it makes sense to use insertion sort within merge sort when subproblems become
sufficiently small. Consider a modification to merge sort in which n/k sublists of length k are
sorted using insertion sort and then merged using the standard merging mechanism, where k is
a value to be determined.
6a Show that the n/k sublists, each of length k, can be sorted by insertion sort in Θ(nk)
worst-case time.
6b Show that the sublists can be merged in Θ(n log(n/k)) worst-case time.
6c Given that the modified algorithm runs in Θ(nk + n log(n/k)) worst-case time, what is the
largest asymptotic value of k, as a function of n, for which the modified algorithm has the
same asymptotic running time as standard merge sort?
Solution:
a. We know that insertion sort has Θ(n²) worst-case time. Since the length of each sublist is k, the worst-case time to sort one sublist is Θ(k²). Since we have n/k sublists to sort, the total worst-case time is Θ((n/k) · k²) = Θ(nk).
b. Just extending the 2-list merge to merge all the lists at once would take Θ(n·(n/k)) = Θ(n2 /k)
time (n from copying each element once into the result list, n/k from examining n/k lists at
each step to select next item for result list). To achieve Θ(n log(n/k)) time merging, we merge
the lists pairwise, then merge the resulting lists pairwise, and so on, until there is just one
list. The pairwise merging requires Θ(n) work at each level, since we are still working on n
elements, even if they are partitioned among sublists. The number of levels, starting with n/k lists (with k elements each) and finishing with 1 list (with n elements), is log(n/k). Therefore, the total running time for the merging is Θ(n log(n/k)).
c. The modified algorithm runs in Θ(nk + n log(n/k)) worst-case time and standard merge sort runs in Θ(n log n) worst-case time. So if we want both to have the same asymptotic running time, we need the following to hold:
nk + n log(n/k) = Θ(n log n).
This holds as long as nk = O(n log n), i.e., k = O(log n); and for k = Θ(log n) we still have log(n/k) = Θ(log n), so the whole expression is Θ(n log n). Since we are looking for the largest such k, we choose k = Θ(log n).
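A sketch of the modified algorithm (all names and the cutoff value k = 16 are illustrative; per the analysis, any k = O(log n) preserves the Θ(n log n) bound):

```python
def insertion_sort(a):
    # Standard insertion sort, Θ(k²) worst case on a sublist of length k.
    a = list(a)
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

def merge(left, right):
    # Standard two-way merge, Θ(len(left) + len(right)).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def hybrid_merge_sort(a, k=16):
    # Merge sort that switches to insertion sort once a sublist has
    # length at most k (k = 16 is an arbitrary illustrative cutoff).
    if len(a) <= k:
        return insertion_sort(a)
    mid = len(a) // 2
    return merge(hybrid_merge_sort(a[:mid], k), hybrid_merge_sort(a[mid:], k))
```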