
University of Toronto

ECE-345: Algorithms and Data Structures


Solutions to Final Examination (Fall 2019)
1. Multiple Choice, 20 points.
Each question is worth 2 points and has only one correct answer. Write clearly! If we cannot understand
your answer, you will receive no credit.

(a) b
(b) a
(c) b
(d) a
(e) b
(f) a
(g) a
(h) b
(i) b
(j) a

2. Short Answers [Amortized Analysis, Max Flow, Minimum spanning trees], 5+7+8 points.

(a) Insert into a heap takes O(log n) actual time, since a heap is a balanced tree, so an amortized charge of O(log n) covers its own work. When inserting, we also deposit an additional O(log n) credits on the new node, to be spent when that node is deleted; every inserted node is deleted at most once, so the credit is always available. Paying for DeleteMin with this stored credit, DeleteMin runs in O(1) amortized time, and a sequence of k DeleteMin operations costs only O(k) amortized.
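In symbols, a short accounting-method sketch of this argument (assuming every actual heap operation on n elements costs at most c log n for some constant c):

% Accounting method: Insert pays for its own work and prepays for the
% eventual DeleteMin of the node it creates.
\begin{align*}
\hat{c}_{\mathrm{Insert}}    &= \underbrace{c\log n}_{\text{actual cost}} + \underbrace{c\log n}_{\text{credit stored on the new node}} = O(\log n),\\
\hat{c}_{\mathrm{DeleteMin}} &= \underbrace{c\log n}_{\text{actual cost}} - \underbrace{c\log n}_{\text{credit spent from the deleted node}} = O(1).
\end{align*}
% Each node is deleted at most once after its insertion, so the stored
% credit never goes negative and k DeleteMins cost O(k) amortized in total.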
(b) i. Construct the residual graph with respect to the original flow, and add one unit of residual capacity to the edge whose capacity was increased. Using BFS, search for an augmenting path from s to t; if such a path exists, push one unit of flow along it to update the flow, otherwise the flow is unchanged. We only need to do this once, since the augmenting path, if it exists, increases the flow by 1, which is the maximum increase possible (see the sketch after part (ii)).
ii. Again, construct the residual graph with respect to the original flow. If the decreased edge was not at capacity (that is, it still has positive residual capacity), then we can decrease its capacity by one without affecting the maximum flow. If it was at capacity, the edge now carries one unit of flow more than its new capacity allows, so we look for an augmenting path in reverse (going from t to s instead of from s to t) that includes the decreased edge and cancel one unit of flow along it.
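A minimal sketch of part (i), assuming the capacities and the current (previously maximum) flow are stored in dictionaries keyed by directed edge; the function name augment_by_one and the simple scan over all nodes are our choices, not part of the exam solution:

from collections import deque

def augment_by_one(nodes, capacity, flow, s, t):
    # After increasing capacity[(a, b)] by 1 for a single edge, try to push
    # one more unit of flow from s to t along an augmenting path found by
    # BFS in the residual graph.  Returns True iff the maximum flow grew by 1.
    def residual(u, v):
        # Unused forward capacity plus flow on (v, u) that could be cancelled.
        return capacity.get((u, v), 0) - flow.get((u, v), 0) + flow.get((v, u), 0)

    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            break
        for v in nodes:
            if v not in parent and residual(u, v) > 0:
                parent[v] = u
                queue.append(v)
    if t not in parent:
        return False                 # no augmenting path: the flow is unchanged

    v = t                            # push exactly one unit along the path found
    while parent[v] is not None:
        u = parent[v]
        if flow.get((v, u), 0) > 0:  # prefer cancelling flow on the reverse edge
            flow[(v, u)] -= 1
        else:
            flow[(u, v)] = flow.get((u, v), 0) + 1
        v = u
    return True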
(c) Consider an MST T. Suppose there exists a spanning tree T′ whose largest edge weight is smaller than the largest edge weight in T. Call the corresponding edges e′ and e, respectively. Remove e from T. This breaks T into two connected components. There must exist an edge e″ in T′ that connects these components. Clearly, w(e″) ≤ w(e′) < w(e). Thus the tree T″ obtained from T by replacing the edge e with e″ has weight w(T″) = w(T) + w(e″) − w(e) < w(T), which is a contradiction.
We can use Prim's algorithm to solve the problem in time O(E log V).
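A minimal sketch of the Prim-based approach with a lazy-deletion binary heap, O(E log V); the adjacency-list format {u: [(weight, v), ...]} and the function name are our assumptions:

import heapq

def min_bottleneck_spanning_tree(adj, root):
    # Lazy Prim: repeatedly take the cheapest edge leaving the current tree.
    # By the argument above, the resulting MST also minimizes the largest
    # edge weight over all spanning trees.
    # adj: {u: [(weight, v), ...]} for an undirected connected graph.
    visited = {root}
    frontier = [(w, root, v) for w, v in adj[root]]
    heapq.heapify(frontier)
    tree_edges, bottleneck = [], 0
    while frontier and len(visited) < len(adj):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                      # stale entry; v joined the tree earlier
        visited.add(v)
        tree_edges.append((u, v, w))
        bottleneck = max(bottleneck, w)
        for w2, x in adj[v]:
            if x not in visited:
                heapq.heappush(frontier, (w2, v, x))
    return tree_edges, bottleneck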

3. Greedy Algorithms, 5+5+10 points.


(a)  i        1    2
     d_i      1    2
     t_i      3    2
     d_i/t_i  1/3  1
Here, choosing the easiest assignment first, the order (1,2), yields 1 · 3 + 2 · 5 = 13, while choosing the order (2,1) yields 2 · 2 + 1 · 5 = 9. In general, any instance where the ratio d_i/t_i of the first assignment is smaller than that of the second works.

(b)  i        1    2
     d_i      1    3
     t_i      2    3
     d_i/t_i  1/2  1
Here, choosing the easiest assignment first, the order (1,2), yields 1 · 2 + 3 · 5 = 17, while choosing the order (2,1) yields 3 · 3 + 1 · 5 = 14. In general, any instance where the ratio d_i/t_i of the first assignment is smaller than that of the second works (verified in the sketch below).
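A quick check of the arithmetic in (a) and (b), using the total effort T(S) = Σ_j d_{s_j}(t_{s_1} + · · · + t_{s_j}) derived in part (c) below; the helper name total_effort is ours:

def total_effort(order, d, t):
    # T(S): each assignment contributes its difficulty d times its completion time.
    elapsed, total = 0, 0
    for i in order:
        elapsed += t[i]
        total += d[i] * elapsed
    return total

# Instance from (a): d = (1, 2), t = (3, 2).
d, t = {1: 1, 2: 2}, {1: 3, 2: 2}
print(total_effort([1, 2], d, t), total_effort([2, 1], d, t))   # 13 9

# Instance from (b): d = (1, 3), t = (2, 3).
d, t = {1: 1, 2: 3}, {1: 2, 2: 3}
print(total_effort([1, 2], d, t), total_effort([2, 1], d, t))   # 17 14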
(c) First, consider what happens when we swap adjacent assignments.

For an ordering S = (s_1, ..., s_n), the total effort is

T(S) = Σ_{j=1}^{n} d_{s_j} Σ_{k=1}^{j} t_{s_k}.

If S′ is S with the assignments in positions i and i+1 swapped, then

T(S′) = T(S) − d_{s_i} Σ_{k=1}^{i} t_{s_k} − d_{s_{i+1}} Σ_{k=1}^{i+1} t_{s_k} + d_{s_i} Σ_{k=1}^{i+1} t_{s_k} + d_{s_{i+1}} (Σ_{k=1}^{i−1} t_{s_k} + t_{s_{i+1}})
      = T(S) + d_{s_i} t_{s_{i+1}} − d_{s_{i+1}} t_{s_i}.

What can we say about the new ordering S′? This depends on d_{s_i}/t_{s_i} relative to d_{s_{i+1}}/t_{s_{i+1}}:

d_{s_i}/t_{s_i} ≤ d_{s_{i+1}}/t_{s_{i+1}}            (1)
⟹  d_{s_i} t_{s_{i+1}} − d_{s_{i+1}} t_{s_i} ≤ 0     (2)
⟹  T(S′) ≤ T(S)                                      (3)
So if the ratio of the earlier assignment is lower than that of the later assignment, swapping them does not increase (and may reduce) the total effort.

Let g_1 be the assignment with the highest ratio d_{g_1}/t_{g_1}. To prove the greedy choice property, we need to show that some optimal solution has g_1 as its first assignment. Given any optimal solution O with o_1 ≠ g_1, we can construct a solution that is just as good by repeatedly swapping g_1 with the assignment immediately before it (by the exchange argument above, no such swap increases the total effort) until g_1 is the first choice.

To prove optimal substructure: given that we greedily pick the first assignment, we are left with n − 1 assignments that we need to order to minimize T. Say we have an optimal solution O = {o_1, o_2, ..., o_n} and its subsolution O_s = {o_2, ..., o_n}. For the sake of contradiction, assume that O_s is not an optimal ordering of the remaining n − 1 assignments. Then there exists an ordering O_b of those assignments such that T(O_s) > T(O_b). Construct O′ = {o_1} ∪ O_b. Since every remaining assignment is delayed by t_{o_1} regardless of how the rest are ordered, T(O′) = d_{o_1} t_{o_1} + t_{o_1} Σ_{j=2}^{n} d_{o_j} + T(O_b) < d_{o_1} t_{o_1} + t_{o_1} Σ_{j=2}^{n} d_{o_j} + T(O_s) = T(O), yielding a contradiction, as we assumed that O was optimal. Correctness follows from optimal substructure and the greedy choice property.
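A minimal sketch of the resulting greedy algorithm (order the assignments by decreasing ratio d_i/t_i); the list-of-(d, t)-pairs input format is an assumption:

def greedy_order(assignments):
    # assignments: list of (d, t) pairs.
    # Returns the indices ordered by decreasing d/t, which by the exchange
    # argument and the greedy choice property minimizes the total effort T.
    return sorted(range(len(assignments)),
                  key=lambda i: assignments[i][0] / assignments[i][1],
                  reverse=True)

# On the instance from part (a), the d/t ratios are 1/3 and 1, so the greedy
# order is assignment 2 then 1 (0-indexed: [1, 0]), matching the optimal order.
print(greedy_order([(1, 3), (2, 2)]))   # [1, 0]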
4. Shortest Paths, 20 points.
Initialize N[u] = 0 for every vertex u other than the source s, and N[s] = 1. Change Relax as follows:
Relax(u, v)
  if d[v] > d[u] + w(u, v) then
    d[v] ← d[u] + w(u, v)
    N[v] ← N[u]
  else if d[v] = d[u] + w(u, v) then
    N[v] ← N[v] + N[u]
The idea is that, when we find a strictly shorter path to a node v, we set the number of shortest paths to v to that of the predecessor u. However, when we find another path to v whose length equals the current shortest-path estimate d[v] (and u is the predecessor on this new path), we add the number of shortest paths to u to the existing value of N[v].
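A minimal sketch of this counting scheme layered on Dijkstra's algorithm, assuming positive edge weights (so N[u] is final by the time u is settled) and an adjacency-list representation in which every vertex appears as a key; the function name is ours:

import heapq

def count_shortest_paths(adj, s):
    # Dijkstra with the modified Relax above: d[v] is the shortest distance
    # from the source s and N[v] the number of distinct shortest s-v paths.
    # adj: {u: [(v, w), ...]} with positive edge weights w.
    d = {u: float('inf') for u in adj}
    N = {u: 0 for u in adj}
    d[s], N[s] = 0, 1
    heap = [(0, s)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > d[u]:
            continue                          # stale heap entry
        for v, w in adj[u]:
            if d[v] > d[u] + w:               # strictly shorter path found
                d[v] = d[u] + w
                N[v] = N[u]
                heapq.heappush(heap, (d[v], v))
            elif d[v] == d[u] + w:            # another shortest path via u
                N[v] += N[u]
    return d, N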

5. NP-Completeness, 3+12+5 points.

(a) Certificate: a Boolean assignment to the n variables of Φ.
Verify: walk the formula and check that each clause has at least one true literal and at least one false literal. The formula is walked once, and since a clause cannot contain more than one copy of the same literal, each of the m clauses has O(n) literals, so the runtime is O(mn).
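A minimal sketch of such a verifier; the encoding of a literal as a (variable, negated) pair is our assumption:

def verify_nae(clauses, assignment):
    # Certificate check for NAE-satisfiability: every clause must contain
    # at least one true literal and at least one false literal.
    # clauses: list of clauses, each a list of (variable, negated) pairs.
    # assignment: dict mapping each variable to True/False.
    for clause in clauses:
        values = [assignment[var] != negated for var, negated in clause]
        if not (any(values) and not all(values)):
            return False
    return True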
(b) Given a 3-CNF formula Φ, we will construct a 4-CNF formula Φ′ where Φ is satisfiable iff Φ′ is NAE-satisfiable. The formula Φ′ has the same n variables as Φ and a new variable z. For each clause (x_i ∨ x_j ∨ x_k) of Φ, create the clause (x_i ∨ x_j ∨ x_k ∨ z) in Φ′ (that is, disjoin z to each clause).
Claim: Φ is satisfiable ⇔ Φ′ is NAE-satisfiable.
Proof:
⇒: Assume Φ is satisfiable with a truth assignment I. Then I assigns 1 to at least one literal in each clause of Φ. The truth assignment I ∪ {z = 0} is therefore an NAE-satisfying assignment of Φ′: every clause of Φ′ contains at least one true literal (from I) and at least one false literal (namely z).
⇐: Assume Φ′ is NAE-satisfiable with truth assignment I. Notice from the definition of an NAE-satisfying assignment that the complement of any NAE-satisfying assignment is also an NAE-satisfying assignment. We must consider two cases. If z = 0, then I assigns 1 to at least one literal of each clause of Φ, so I (restricted to the variables of Φ) is a satisfying assignment for Φ and the claim holds. If z = 1, then the complement of I is also an NAE-satisfying assignment of Φ′ and it has z = 0, so by the first case it yields a satisfying assignment of Φ.
(c) Φ′ = (x_1 ∨ x_2 ∨ x_3 ∨ z) ∧ (x̄_2 ∨ x̄_3 ∨ x_4 ∨ z)
One satisfying assignment of Φ assigns 1 to every variable. A corresponding NAE-satisfying assignment of Φ′ extends that assignment by assigning 0 to z.
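A minimal sketch of the reduction from part (b) applied to this example, using the same (variable, negated) literal encoding assumed in the verifier sketch for part (a):

def threesat_to_nae4sat(clauses, z='z'):
    # Disjoin a fresh variable z (which must not appear in the input) to
    # every 3-CNF clause, producing the 4-CNF formula Phi'.
    return [clause + [(z, False)] for clause in clauses]

# Phi = (x1 v x2 v x3) AND (not x2 v not x3 v x4), as in part (c).
phi = [[('x1', False), ('x2', False), ('x3', False)],
       [('x2', True), ('x3', True), ('x4', False)]]
phi_prime = threesat_to_nae4sat(phi)

# The all-true satisfying assignment of Phi, extended with z = 0,
# NAE-satisfies Phi': each clause has a true literal from Phi's
# assignment and a false literal (z, at least).
assignment = {'x1': True, 'x2': True, 'x3': True, 'x4': True, 'z': False}
for clause in phi_prime:
    values = [assignment[var] != negated for var, negated in clause]
    assert any(values) and not all(values)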
