
CSE103: TD 3 Solutions

Exercise 9: naive matrix multiplication


def matmul(M, N):
    n = len(M)                                # O(1)
    R = [[0] * n for _ in range(n)]           # do n times: O(n)
    for i in range(n):                        # do n times:
        for j in range(n):                    #     do n times:
            for k in range(n):                #         do n times:
                R[i][j] += M[i][k] * N[k][j]  # O(1)
    return R                                  # O(1)
We highlighted each portion of the code with its complexity, according to
our current cost model. Let the total complexity be T(n). We see that, up to
constant factors, T(n) is O(n^3) because of the three nested loops.
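As a quick sanity check, the function can be run on a small example:

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]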

Exercise 10: divide and conquer for matrix multiplication


We’re given in the text of the problem that the complexity T(n) of the new
algorithm obeys the recurrence

    T(n) = 8T(n/2) + Θ(n^2).

We can then apply the Master Theorem with the values a = 8, b = 2 and
c = 2. We see that a > b^c, so the overall complexity is Θ(n^(log_2 8)) = Θ(n^3).
Unfortunately, this algorithm brings no improvement over the naive matrix
multiplication we explored in exercise 9. In practice, it can even be slower,
due to the overhead of the recursive calls.

Exercise 10*: Volker Strassen’s 1969 Karatsuba-style algorithm
If only 7 multiplications are performed at each step, the Θ(n^2) additive term
of the recurrence is unchanged, since the extra matrix additions still take only Θ(n^2) operations.
The new recurrence formula is

    T(n) = 7T(n/2) + Θ(n^2).

We can then apply the Master Theorem with the values a = 7, b = 2 and c = 2.
We see that still a > b^c, so the overall complexity is

    Θ(n^(log_2 7)) ≈ Θ(n^2.807).

This is a significant improvement over the simple algorithms from exercises 9 and 10.
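A compact sketch of Strassen’s scheme itself (again our code, under the same power-of-two assumption; the seven products M1, …, M7 are the standard ones):

import numpy as np

def strassen(A, B):
    # 7 recursive products instead of 8: T(n) = 7T(n/2) + Θ(n^2).
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

The extra additions and subtractions only affect the constant inside Θ(n^2), which is why trading one multiplication for several additions pays off asymptotically.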


Solution.

1. True. Proof: Set c = 1 and consider the function ϕ(n) = n^3 − n^2 − 10n − 6.
Notice that ϕ″(n) = 6n − 2 ≥ 0 for n ≥ 1/3 and that ϕ′(n) = 3n^2 − 2n − 10
is positive for n = 3 (we have ϕ′(3) = 11 > 0). Then ϕ′(n) is positive
for all n ≥ 3, and ϕ(4) = 2 > 0, so ∀n > 4, ϕ(n) > 0. We deduce that
∀n > m = 4, n^2 + 10n + 6 < n^3, so n^2 + 10n + 6 = O(n^3). (A numerical
spot-check of this claim and of claim 6 is sketched after this list.)

2. True. Proof: Asymptotically, 2^n is exponential while n^1000 is polynomial.
Polynomials are sub-exponential, so 42n^1000 = O(2^n). Adding a
Big-Theta and a Big-O of the same order gives a Big-Theta of that order,
so 2^n + 42n^1000 is indeed Θ(2^n).

3. False. Proof: The logarithmic function is sub-polynomial, so n^(1/4) dominates
the logarithm (recall that log n = O(n^ε) for any positive ε).

4. True. Proof: Obviously n^2 = O(n^4), so 37n^4 + n + 15 = Ω(n^2).

5. True. Proof: Apply the logarithm to the definition of f to get log f(n) =
log n + O(n) log 2. Since log n = O(n), we conclude that log f(n) = O(n).
Example: if f(n) = n · 2^(7n+3), then log f(n) = log(n · 2^(7n+3)) = log n + (7n + 3) log 2,
which is indeed O(n).

6. False. Proof: n log n is Ω(n) but is not Θ(n). Suppose there existed c and
m satisfying 2^((n log_2 n)/2) < c · 2^n for all n > m. Then we would have, for all
n > m, the inequality n(log_2 n − 2)/2 < log_2 c, which would mean that n(log_2 n − 2)/2
is bounded from above by a constant; this is trivially false because it
asymptotically tends to +∞, yielding a contradiction.
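As a numerical illustration (ours; it spot-checks rather than proves), claims 1 and 6 can be tested directly, with the exponent of claim 6 taken as reconstructed above:

import math

# Claim 1: n^2 + 10n + 6 < n^3 for all n > 4.
for n in [5, 10, 100, 1000]:
    assert n**2 + 10*n + 6 < n**3

# Claim 6: 2^((n log_2 n)/2 - n) is unbounded, so no constant c
# satisfies 2^((n log_2 n)/2) < c * 2^n for all large n.
for n in [16, 64, 256]:
    print(n, 2.0 ** (n * math.log2(n) / 2 - n))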

Solution.

1. We have T(n) = 2T(n/3) + Θ(n), so we apply the Master Theorem with
a = 2, b = 3 and c = 1. Since a < b^c, the complexity is
T(n) = Θ(n^c) = Θ(n). (A small helper automating this case analysis is
sketched after this list.)

2. The Master Theorem is inapplicable here because there exists no c ∈ R≥0
such that 2^n = Θ(n^c). We have to proceed by the recursion tree method.
The sum of the entries on the k-th row of the tree is 2^k · 2^(n/2^k) = 2^((n + k·2^k)/2^k).
The depth of the tree can be computed by setting ⌊n/2^h⌋ = 1, which gives h = log_2 n.
Then, we obtain

    Σ_{k=1}^{log_2 n} 2^k · 2^(n/2^k).

Let us examine when the general term is maximal: consider its exponent
f(k) = k + n/2^k. Its derivative f′(k) = 1 − (n ln 2)/2^k vanishes at 2^k = n ln 2,
and since f″(k) > 0 this critical point is a minimum, so over the range
1 ≤ k ≤ log_2 n the exponent is maximal at an endpoint; in both cases
f(k) ≤ n/2 + 1. Thus we use a looser worst-case estimation,
2^k · 2^(n/2^k) < 2^(n+1). As such, we get T(n) = O(2^n log n).

3. Here, the Master Theorem is applicable because 1 < 2 + sin n < 3, so
there exists c = 2 ∈ R≥0 such that n^2/(sin n + 2) = Θ(n^c) = Θ(n^2). We
set a = 4, b = 2 and c = 2 and notice that a = b^c. The result is therefore
Θ(n^2 log n).

4. This one is a trivial application of the Master Theorem with

    T(n) = 1 · T(n/(3/2)) + n^0.

We set a = 1, b = 3/2 and c = 0 and notice that a = b^c. The result is
then Θ(n^0 log n) = Θ(log n).
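The case analysis used in parts 1, 3 and 4 can be packaged into a small helper (our sketch; it implements only the simplified three-case form of the Master Theorem used above):

import math

def master(a, b, c):
    # Simplified Master Theorem for T(n) = a*T(n/b) + Θ(n^c).
    if a > b ** c:
        return f"Θ(n^{math.log(a, b):.3f})"   # recursion-dominated: Θ(n^(log_b a))
    elif a == b ** c:
        return f"Θ(n^{c} log n)"              # balanced case
    else:
        return f"Θ(n^{c})"                    # work at the root dominates

print(master(2, 3, 1))    # part 1: Θ(n^1)
print(master(4, 2, 2))    # part 3: Θ(n^2 log n)
print(master(1, 3/2, 0))  # part 4: Θ(n^0 log n) = Θ(log n)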

Solution.

1. With our current cost model, let us analyse the resulting complexity; the
annotations of the code read as follows (a concrete sketch of such a routine is
given after this solution):

def weirdSort(l):
    -> O(1)
    -> if O(1): O(1)
    -> elif O(1): O(1)
    -> r1 = T(2n/3), O(n)
    -> r2 = O(n), T(2n/3)
    -> r = T(2n/3), O(n)
    -> O(1)

Three recursive calls on inputs of size 2n/3 plus O(n) extra work per level give
the recurrence T(n) = 3T(2n/3) + O(n). Applying the Master Theorem with
a = 3, b = 3/2 and c = 1, we see that a > b^c = 3/2, so
T(n) = Θ(n^(log_(3/2) 3)) ≈ Θ(n^2.71).
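These annotations are consistent with a stooge-sort-style routine; a minimal sketch under that assumption (the concrete code below is our reconstruction, not the original problem statement):

def weirdSort(l):
    n = len(l)                             # O(1)
    if n <= 1:                             # O(1) base case
        return l
    elif n == 2:                           # O(1) base case
        return [min(l), max(l)]
    t = n // 3
    l = weirdSort(l[:n - t]) + l[n - t:]   # r1: first two thirds, T(2n/3) + O(n) slicing
    l = l[:t] + weirdSort(l[t:])           # r2: last two thirds, O(n) + T(2n/3)
    l = weirdSort(l[:n - t]) + l[n - t:]   # r: first two thirds again, T(2n/3) + O(n)
    return l                               # O(1)

print(weirdSort([5, 2, 4, 1, 3]))   # [1, 2, 3, 4, 5]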

Solution.
1. With our current cost model, let us analyse the resulting complexity (a
plausible reconstruction of the routine itself is sketched at the end of this
solution):
def mul(x, y, s=0):
-> if O(1):
-----> O(n)
-> if O(1):
-----> O(n)
-> T(n-1)
The Master Theorem is inapplicable. The algorithm follows the recurrence
T(n) = T(n − 1) + Θ(n). By the recursion tree method, we obtain the
upper bound Σ_{k=1}^{n} k = n(n + 1)/2, which is O(n^2).

2. In this case the code only performs the addition of 0 and n, which is simply
O(1), independent of n.

3. Still O(n^2): eventually s and y cannot be added with fewer than O(n)
operations (one can also write out the explicit recurrence to check).
4. If x = 0, the above algorithm results in an infinite recursion, which
crashes once the system recursion limit is reached. If y = 0 and x ≠ 0
(for x = 0 it fails as described above), the algorithm correctly gives 0,
because s and y never change from the value 0.
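For reference, here is a plausible reconstruction of the routine, consistent with the annotations above and with the behaviour described in part 4 (the concrete code is ours; it performs shift-and-add multiplication):

def mul(x, y, s=0):
    # Invariant: x*y + s is preserved by every call.
    if x == 1:                      # O(1) test; never reached when x == 0
        return s + y                # final O(n) addition
    if x % 2 == 1:                  # O(1) parity test
        s = s + y                   # O(n) big-integer addition
    return mul(x // 2, y * 2, s)    # halve x, double y: T(n-1)

print(mul(6, 7))   # 42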
