
Exercise II, CS-250 Algorithms 2022-2023 (Michael Kapralov)

These exercises are for your own benefit. Feel free to collaborate and share your answers
with other students. There are many problems on this set; solve as many as you can and ask
for help if you get stuck for too long. Problems marked * are more difficult but also more
fun :).
These problems are taken from various sources at EPFL and on the Internet, too numerous
to cite individually.

1 (Exercise 2.2-4 from the book) How can we modify almost any algorithm to have a good best-
case running time?

Solution: Modify the algorithm so that it first tests whether the input satisfies some special-case
condition and, if it does, outputs a pre-computed answer. This is why the best-case running time
is generally not a good measure of an algorithm.
For example, we can modify any sorting algorithm as follows: given an array A of n numbers,
we first check whether the numbers are already sorted. If so, we output the array unchanged. This
check takes O(n) time, so the best-case running time becomes O(n). However, this modification can
be made to any sorting algorithm, so it tells us nothing about the algorithm itself.
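
As an illustration of this idea (a minimal sketch, not part of the original solution; the wrapper and its parameter names are hypothetical), one can wrap any sorting routine with a linear pre-check in Python:

def sort_with_best_case(A, sort_fn):
    """Return a sorted copy of A, adding an O(n) best-case pre-check.

    A       : list of comparable items
    sort_fn : any sorting function taking and returning a list
              (hypothetical parameter, not part of the original exercise)
    """
    # O(n) check: is the input already sorted?
    if all(A[i] <= A[i + 1] for i in range(len(A) - 1)):
        return list(A)          # best case: output the array as-is
    return sort_fn(A)           # otherwise fall back to the real algorithm

# Example: the best case finishes right after the linear scan.
print(sort_with_best_case([1, 2, 3, 4], sorted))   # [1, 2, 3, 4]
print(sort_with_best_case([3, 1, 2], sorted))      # [1, 2, 3]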

2 (Exercise 2.3-4 in the book) We can express insertion sort as a recursive procedure as follows.
In order to sort A[1 . . . n], we recursively sort A[1 . . . n − 1] and then insert A[n] into the sorted
array A[1 . . . n − 1].
2a Write a recurrence for the worst-case running time of this recursive version of insertion
sort.
2b Solve the recurrence to give a tight asymptotic bound (in terms of the Θ-notation).
Solution:
2a. Since the problem size decreases by 1 in each recursive call, there will be n recursive calls
in total. Because we are looking at the worst case, the cost of inserting the new element at
each recursive step is Θ(n) (the time needed to check all elements of the sorted prefix before
inserting the new one). This gives the following recurrence:
T(n) = T(n − 1) + Θ(n).
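
For concreteness, here is a sketch of the recursive insertion sort described above in Python (an illustration based on the description, not code from the exercise):

def insertion_sort_recursive(A, n=None):
    """Sort A[0..n-1] in place by recursively sorting A[0..n-2]
    and then inserting A[n-1] into the sorted prefix."""
    if n is None:
        n = len(A)
    if n <= 1:                           # an array of size 0 or 1 is already sorted
        return A
    insertion_sort_recursive(A, n - 1)   # T(n - 1): sort the first n - 1 elements
    key = A[n - 1]                       # insert A[n-1]: Theta(n) in the worst case
    i = n - 2
    while i >= 0 and A[i] > key:
        A[i + 1] = A[i]
        i -= 1
    A[i + 1] = key
    return A

print(insertion_sort_recursive([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]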
2b. We make n recursive calls, and each call incurs an insertion cost of at most cn for some
constant c > 0. So a good guess for a tight asymptotic bound on the worst-case running time
is Θ(n^2). The following calculation supports this guess:

T(n) = T(n − 1) + cn
     = T(n − 2) + cn + c(n − 1)
     = ...
     = cn + c(n − 1) + c(n − 2) + ... + 3c + 2c + c
     = c · Σ_{i=1}^{n} i
     = c · n(n + 1)/2 = Θ(n^2)
Now we will prove that our guess is correct using the substitution method. We first need to
prove that n^2 is indeed an asymptotic upper bound, i.e., T(n) = O(n^2). We prove by weak
induction on n ≥ 1 that there exists a constant a > 0 such that T(n) ≤ an^2 for all n ≥ 1.
The base case holds as long as a ≥ c, as T(1) = c. We now provide the inductive step.
We have T(n) ≤ T(n − 1) + cn, and hence by the inductive hypothesis

T(n) ≤ a(n − 1)^2 + cn = an^2 + a − 2an + cn = an^2 + a + (c − 2a)n.

Choosing a ≥ c, we ensure that a + (c − 2a)n ≤ a − an = a(1 − n) ≤ 0, and thus T(n) ≤ an^2,
as required. This completes the proof of the upper bound.
We now prove the lower bound of Ω(n^2). Similarly to the above, we prove by weak induction
on n that there exists a constant b > 0 such that T(n) ≥ bn^2 for all n ≥ 1. The base case
holds as long as 0 < b ≤ c, as T(1) = c. We now provide the inductive step. We have
T(n) = T(n − 1) + cn, and hence by the inductive hypothesis

T(n) = T(n − 1) + cn
     ≥ b(n − 1)^2 + cn
     = bn^2 + b − 2bn + cn
     = bn^2 + b + n(c − 2b)
     ≥ bn^2

as long as 0 < b ≤ c/2. This completes the proof of the lower bound.
Putting the two bounds together, we conclude that T(n) = Θ(n^2), as required.

3 Use the substitution method (mathematical induction) to solve the following recurrences.
3a Show that when n is an exact power of 2, the solution of the recurrence

    T(n) = 2               if n = 2,
    T(n) = 2T(n/2) + n     if n = 2^k for some k > 1,

is T(n) = n log_2 n.
3b Consider the recurrence T(n) = 8T(n/2) + Θ(n^2).
• Draw the recursion tree to make a qualified guess of the value of T(n) in terms of the
Θ-notation.
• (half a ∗) Then verify this guess using the substitution method (i.e., mathematical
induction). Ask the TAs for a hint if you get stuck for too long.
Solution:
3a. We argue by induction that T(n) = n log_2 n for all n = 2^k with k ≥ 1. In the inductive
hypothesis, we assume that

T(n/2) = (n/2) log_2(n/2).

Here we need n/2 ≥ 2, i.e., n ≥ 4. Using the inductive hypothesis, we get:

T(n) = 2T(n/2) + n                (1)
     = 2(n/2) log_2(n/2) + n      (2)
     = n(log_2 n − log_2 2) + n
     = n log_2 n − n + n
     = n log_2 n,

where (1) follows from the recurrence equation and (2) follows from the inductive hypothesis.
Since the induction step works for n ≥ 4, it is sufficient to show that T(n) = n log_2 n for
n = 2 (the base case): T(2) = 2 = 2 log_2 2.
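
As an optional check (not part of the original solution), the recurrence can be evaluated directly in Python for powers of two, assuming the stated base case T(2) = 2, and compared against n log_2 n:

def T(n):
    """Evaluate the recurrence from 3a for n an exact power of 2."""
    if n == 2:
        return 2
    return 2 * T(n // 2) + n

# The recurrence value matches n * log_2(n) exactly for powers of two.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k          # n * log_2(n) = n * k when n = 2^k
print("T(n) = n log_2 n verified for n = 2, 4, ..., 1024")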

3b. Using the recursion tree method, we make an educated guess that T(n) = Θ(n^3).

[Recursion tree for T(n) = 8T(n/2) + cn^2: level 0 (the root) has cost cn^2; level 1 has 8 nodes of
cost c(n/2)^2 each, for a level total of 8 · c(n/2)^2; in general, level i has 8^i nodes of cost
c(n/2^i)^2 each, for a level total of 8^i · c(n/2^i)^2; after log_2 n levels, the 8^{log_2 n} leaves
each have cost T(1) ≤ c.]
The total running time is as follows:


T(n) = Σ_{i=0}^{log_2(n)−1} c × 8^i × (n/2^i)^2
     = Σ_{i=0}^{log_2(n)−1} c × 8^i × n^2/4^i
     = c × n^2 × Σ_{i=0}^{log_2(n)−1} 2^i
     = c × n^2 × (2^{log_2 n} − 1)
     = c(n^3 − n^2)
     ≤ cn^3,

which is consistent with the guess T(n) = Θ(n^3).

To verify our guess, we use the substitution method. In order to show that T(n) = Θ(n^3), we
have to show that T(n) = Ω(n^3) and T(n) = O(n^3).
For the O(n^3) part: The recurrence implies that T(n) ≤ 8T(n/2) + cn^2 for a constant c > 0 if n > 1,
and T(1) ≤ c. We will argue by induction on n that T(n) = O(n^3). Specifically, we will show
by strong induction on n that T(n) ≤ dn^3 − d′n^2 for some constants d, d′ > 0. Using the
inductive hypothesis, we get:

T(n) ≤ 8T(n/2) + cn^2
     ≤ 8d(n/2)^3 − 8d′(n/2)^2 + cn^2
     = dn^3 + n^2(c − 2d′)
     ≤ dn^3 − d′n^2        if d′ ≥ c

The base case is provided by n = 1, where T(n) = T(1) ≤ c and is satisfied by choosing d, d′ so
that d − d′ ≥ c. Thus, setting d = 2c and d′ = c satisfies both constraints and completes the
proof.
For the Ω(n^3) part: The recurrence implies that T(n) ≥ 8T(n/2) + cn^2 for a constant c > 0,
and T(1) ≥ c. We will argue by induction on n that T(n) = Ω(n^3). Specifically, we will show
by strong induction on n that T(n) ≥ dn^3 for a constant d > 0. Using the recurrence and the
inductive hypothesis, we get:

T(n) ≥ 8T(n/2) + cn^2
     ≥ 8d(n/2)^3 + cn^2
     = dn^3 + cn^2
     ≥ dn^3        for any c, d > 0

The base case is provided by n = 1, where T (1) ≥ c, and is satisfied by choosing d ≤ c.
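
As an optional numerical sanity check (not part of the original solution), the recurrence can be evaluated directly in Python, assuming T(1) = 1 and n a power of 2; the ratio T(n)/n^3 approaches a constant (here 2), consistent with T(n) = Θ(n^3):

def T(n):
    """Evaluate T(n) = 8*T(n/2) + n^2 with T(1) = 1, for n a power of 2."""
    if n == 1:
        return 1
    return 8 * T(n // 2) + n * n

# The ratio T(n)/n^3 settles near a constant, consistent with Theta(n^3).
for k in range(1, 11):
    n = 2 ** k
    print(n, T(n) / n ** 3)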

4 (Exercise 4.5-1 in the book) Use the master method to give tight asymptotic bounds for the
following recurrences.

4a T(n) = 2T(n/4) + 1.

4b T(n) = 2T(n/4) + √n.

4c T(n) = 2T(n/4) + n.

4d T(n) = 2T(n/4) + n^2.

Solution:

4a. The recurrence is of the form T(n) = aT(n/b) + f(n) where a = 2, b = 4 and f(n) = 1 for all n
(i.e., constant). Letting c = 0, we have f(n) = Θ(1) = Θ(n^c) and c = 0 < 1/2 = log_4(2) = log_b(a).
It follows from the first case of the master theorem that T(n) = Θ(n^{log_b(a)}) = Θ(n^{log_4(2)}) =
Θ(n^{1/2}) = Θ(√n).

4b. We have a = 2, b = 4 and f(n) = n^{1/2} = Θ(n^c) for c = 1/2 = log_4(2) = log_b(a). It follows from
the second case of the master theorem that T(n) = Θ(n^{log_b(a)} log(n)) = Θ(√n log(n)).

4c. We have a = 2, b = 4 and f(n) = n = Θ(n^c) for c = 1 > 1/2 = log_4(2) = log_b(a). The regularity
condition of the third case holds, since a·f(n/b) = 2·(n/4) = n/2 ≤ (1/2)·f(n). It follows from the
third case of the master theorem that T(n) = Θ(f(n)) = Θ(n).

4d. We have a = 2, b = 4 and f(n) = n^2 = Θ(n^c) for c = 2 > 1/2 = log_4(2) = log_b(a). The regularity
condition of the third case holds, since a·f(n/b) = 2·(n/4)^2 = n^2/8 ≤ (1/8)·f(n). It follows from
the third case of the master theorem that T(n) = Θ(f(n)) = Θ(n^2).
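
As a quick sanity check of 4b (not part of the original solution), the following Python snippet evaluates the recurrence numerically, assuming the base case T(1) = 1 and n a power of 4; the ratio T(n)/(√n log_2 n) settles near a constant, as the second case of the master theorem predicts:

import math

def T(n):
    """T(n) = 2*T(n/4) + sqrt(n) with T(1) = 1, for n a power of 4 (case 4b)."""
    if n == 1:
        return 1.0
    return 2 * T(n // 4) + math.sqrt(n)

# The ratio T(n) / (sqrt(n) * log_2(n)) approaches a constant (here 1/2),
# consistent with the master-theorem answer Theta(sqrt(n) log n).
for k in range(1, 11):
    n = 4 ** k
    print(n, T(n) / (math.sqrt(n) * math.log2(n)))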

5 (old exam question) Consider the following procedure Unknown that takes as input an array
A[ℓ . . . r] of r − ℓ + 1 numbers with the left index ℓ and the right index r:

Unknown(A, ℓ, r)
1.  if ℓ > r
2.      return −∞
3.  else if ℓ = r
4.      return A[ℓ]
5.  else
6.      q ← ℓ + ⌊(r − ℓ)/3⌋
7.      ansL ← Unknown(A, ℓ, q)
8.      ansR ← Unknown(A, q + 1, r)
9.      if ansL > ansR
10.         return ansL
11.     else
12.         return ansR

5a (5pts) Let A[1 . . . 8] = ⟨6, 4, 2, 9, 2, 8, 7, 5⟩. What does a call to Unknown(A, 1, 8) return?

Solution: 9 (the maximum value in A).
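
For illustration, here is a direct Python transcription of the pseudocode (a sketch that maps the 1-indexed bounds ℓ, r onto a 0-indexed Python list); running it on the given array returns 9:

import math

def unknown(A, l, r):
    """Transcription of Unknown(A, l, r) with 1-indexed bounds l..r.

    Splits the range at q = l + floor((r - l)/3) and returns the larger of
    the two recursive answers, i.e., the maximum of A[l..r].
    """
    if l > r:
        return -math.inf
    if l == r:
        return A[l - 1]                 # A is 0-indexed in Python
    q = l + (r - l) // 3
    ans_left = unknown(A, l, q)
    ans_right = unknown(A, q + 1, r)
    return ans_left if ans_left > ans_right else ans_right

A = [6, 4, 2, 9, 2, 8, 7, 5]
print(unknown(A, 1, 8))                 # 9, the maximum value in A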

5b (8pts) Let T(n) be the time it takes to execute Unknown(A, ℓ, r), where n = r − ℓ + 1 is
the number of elements in the array. Give the recurrence relation of T(n).

Solution: T(n) = T(n/3) + T(2n/3) + Θ(1), with T(1) = Θ(1).

5c (12pts) Prove tight asymptotic bounds of T (n) using the substitution method, i.e., show
that T (n) = Θ(f (n)) for some function f .

Solution:
The function T(n) satisfies

T(n) = T(n/3) + T(2n/3) + 1.

We guess that T(n) = Θ(n) and we prove this using the substitution method. First, we prove an
upper bound on T(n). We assume the following inductive hypothesis for all m < n:

T(m) ≤ c1·m − 1

for some constant c1 > 0. (Choosing c1 large enough also covers the base case, since T(1) = Θ(1).)
We will prove that the hypothesis holds for n; we can assume that n is divisible by 3:

T(n) = T(n/3) + T(2n/3) + 1
     ≤ (c1·n/3 − 1) + (c1·2n/3 − 1) + 1
     = c1·n − 1.

Thus, we have T(n) = O(n).

For the lower bound, we assume that T(m) ≥ c2·m for all m < n and for some constant c2 > 0.
(Choosing c2 small enough covers the base case, since T(1) = Θ(1).) Then we have:

T(n) = T(n/3) + T(2n/3) + 1
     ≥ c2·n/3 + c2·2n/3
     = c2·n.

Thus, we have T(n) = Ω(n). We can now conclude that T(n) = Θ(n).

6 (*, Problem 2-1 in the book) Although merge sort runs in Θ(n log n) worst-case time and insertion
sort runs in Θ(n^2) worst-case time, the constant factors in insertion sort make it faster for small
n. Thus, it makes sense to use insertion sort within merge sort when subproblems become
sufficiently small. Consider a modification to merge sort in which n/k sublists of length k are
sorted using insertion sort and then merged using the standard merging mechanism, where k is
a value to be determined.

6a Show that the n/k sublists, each of length k, can be sorted by insertion sort in Θ(nk)
worst-case time.

6b Show that the sublists can be merged in Θ(n log(n/k)) worst-case time.

6c Given that the modified algorithm runs in Θ(nk + n log(n/k)) worst-case time, what is the
largest asymptotic value of k, as a function of n, for which the modified algorithm has the
same asymptotic running time as standard merge sort?

6d How should k be chosen in practice?

Solution:
a. We know that insertion sort has Θ(n^2) worst-case time on n elements. Since each sublist has
length k, the worst-case time to sort one such sublist is Θ(k^2). Since we have n/k different
sublists to sort, the total worst-case time is Θ((n/k) · k^2) = Θ(nk).
b. Just extending the two-list merge to merge all the lists at once would take Θ(n·(n/k)) = Θ(n^2/k)
time (n from copying each element once into the result list, and n/k from examining n/k lists at
each step to select the next item for the result list). To achieve Θ(n log(n/k)) time for the merging,
we merge the lists pairwise, then merge the resulting lists pairwise, and so on, until there is just
one list. The pairwise merging requires Θ(n) work at each level, since we are still working on n
elements, even if they are partitioned among sublists. The number of levels, starting with n/k
lists (of k elements each) and finishing with 1 list (of n elements), is log(n/k). Therefore,
the total running time for the merging is Θ(n log(n/k)).
c. The modified algorithm runs in Θ(nk + n log(n/k)) worst-case time and standard merge
sort runs in Θ(n log n) worst-case time. So if we want both to have the same asymptotic
running time, we need the following equality to hold:

Θ(nk + n log(n/k)) = Θ(nk + n log n − n log k) = Θ(n log n).

This holds as long as the nk term grows no faster than n log n, i.e., as long as k = O(log n);
for such k we also have n log(n/k) = Θ(n log n), so the whole expression is Θ(n log n). Since
we are looking for the largest such k, we choose k = Θ(log n).

d. In practice, k should be the largest list length on which insertion sort is faster than merge
sort.
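
For illustration, here is one possible Python sketch of the modified algorithm (not code from the exercise): it uses top-down recursion with a cutoff k instead of explicitly cutting the input into n/k blocks first, which gives essentially the same Θ(nk + n log(n/k)) behavior; the default cutoff 32 is an arbitrary placeholder.

def insertion_sort(A):
    """In-place insertion sort: Theta(len(A)^2) worst case."""
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key
    return A

def merge(left, right):
    """Standard two-way merge in Theta(len(left) + len(right)) time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def hybrid_merge_sort(A, k=32):
    """Merge sort that hands sublists of length <= k to insertion sort.

    k = 32 is an arbitrary illustrative cutoff; in practice it would be
    tuned empirically, as discussed in part d.
    """
    if len(A) <= k:
        return insertion_sort(list(A))
    mid = len(A) // 2
    return merge(hybrid_merge_sort(A[:mid], k), hybrid_merge_sort(A[mid:], k))

print(hybrid_merge_sort([5, 2, 9, 1, 7, 3, 8, 4, 6, 0], k=3))   # [0, 1, ..., 9]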
