Data Structures and Algorithm Analysis in C
(second edition)
Solutions Manual
Chapter 1: Introduction
1.3 Because of round-off errors, it is customary to specify the number of decimal places that should be included in the output and round up accordingly. Otherwise, numbers come out looking strange. We assume error checks have already been performed; the routine Separate is left to the reader. Code is shown in Fig. 1.1.
1.4 The general way to do this is to write a procedure with heading

void ProcessFile( const char *FileName );

which opens FileName, does whatever processing is needed, and then closes it. If a line of the form

#include SomeFile

is read, then the call

ProcessFile( SomeFile );

is made recursively. Self-referential includes can be detected by keeping a list of files for which a call to ProcessFile has not yet terminated, and checking this list before making a new call to ProcessFile.
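Not part of the manual, a minimal C sketch of this scheme; the line-scanning detail (a simple sscanf on quoted includes) is an assumption made here only for illustration:

_______________________________________________________________________________
#include <stdio.h>
#include <string.h>

#define MaxDepth 64

static const char *OpenFiles[ MaxDepth ];  /* files whose call is still active */
static int NumOpen = 0;

void
ProcessFile( const char *FileName )
{
    FILE *F;
    char Line[ 512 ], Included[ 256 ];
    int i;

    for( i = 0; i < NumOpen; i++ )
        if( strcmp( OpenFiles[ i ], FileName ) == 0 )
            return;                        /* self-referential include; skip it */

    if( NumOpen >= MaxDepth || ( F = fopen( FileName, "r" ) ) == NULL )
        return;
    OpenFiles[ NumOpen++ ] = FileName;

    while( fgets( Line, sizeof Line, F ) != NULL )
    {
        /* Process the line as needed; then recurse on #include "..." lines. */
        if( sscanf( Line, " #include \"%255[^\"]\"", Included ) == 1 )
            ProcessFile( Included );
    }

    fclose( F );
    NumOpen--;
}
_______________________________________________________________________________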
1.5 (a) The proof is by induction. The theorem is clearly true for 0 < X ≤ 1, since it is true for X = 1, and for X < 1, log X is negative. It is also easy to see that the theorem holds for 1 < X ≤ 2, since it is true for X = 2, and for X < 2, log X is at most 1. Suppose the theorem is true for p < X ≤ 2p (where p is a positive integer), and consider any 2p < Y ≤ 4p (p ≥ 1). Then log Y = 1 + log(Y/2) < 1 + Y/2 < Y/2 + Y/2 ≤ Y, where the first inequality follows by the inductive hypothesis.

(b) Let 2^X = A. Then A^B = (2^X)^B = 2^{XB}. Thus log A^B = XB. Since X = log A, the theorem is proved.
1.6 (a) The sum is 4/3 and follows directly from the formula for the sum of a geometric series.

(b) S = 1/4 + 2/4^2 + 3/4^3 + ... , so 4S = 1 + 2/4 + 3/4^2 + ... . Subtracting the first equation from the second gives 3S = 1 + 1/4 + 1/4^2 + ... . By part (a), 3S = 4/3, so S = 4/9.

(c) S = 1/4 + 4/4^2 + 9/4^3 + ... , so 4S = 1 + 4/4 + 9/4^2 + 16/4^3 + ... . Subtracting the first equation from the second gives 3S = 1 + 3/4 + 5/4^2 + 7/4^3 + ... . Rewriting, we get 3S = 2 Σ_{i=0}^{∞} i/4^i + Σ_{i=0}^{∞} 1/4^i. Thus 3S = 2(4/9) + 4/3 = 20/9. Thus S = 20/27.

(d) Let S_N = Σ_{i=0}^{∞} i^N/4^i. Follow the same method as in parts (a)-(c) to obtain a formula for S_N in terms of S_{N−1}, S_{N−2}, ..., S_0 and solve the recurrence. Solving the recurrence is very difficult.
_______________________________________________________________________________
/* The opening of Fig. 1.1 was lost in extraction; the header below is a
   reconstruction. RoundUp, IntPart, DecPart, PrintFractionPart, and the
   PrintOut routine from the text are assumed helpers. */
void
PrintReal( double N, int DecPlaces )
{
    long IntegerPart;
    double FractionPart;

    if( N < 0 )
    {
        putchar( '-' );
        N = -N;
    }
    N = RoundUp( N, DecPlaces );
    IntegerPart = IntPart( N );
    FractionPart = DecPart( N );
    PrintOut( IntegerPart ); /* Using routine in text */
    if( DecPlaces > 0 )
        putchar( '.' );
    PrintFractionPart( FractionPart, DecPlaces );
}
Fig. 1.1.
_______________________________________________________________________________
1.7 Σ_{i=⌈N/2⌉}^{N} 1/i = Σ_{i=1}^{N} 1/i − Σ_{i=1}^{⌈N/2⌉−1} 1/i ≈ ln N − ln(N/2) ≈ ln 2.
1.8 2^4 = 16 ≡ 1 (mod 5). Thus 2^100 = (2^4)^25 ≡ 1^25 ≡ 1 (mod 5).
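Not part of the manual, a quick mechanical check of this arithmetic by repeated squaring; the routine assumes Mod is small enough that Mod² fits in a long:

_______________________________________________________________________________
#include <stdio.h>

/* Compute Base^Exp (mod Mod) by repeated squaring. */
long
PowMod( long Base, long Exp, long Mod )
{
    long Result = 1;

    Base %= Mod;
    while( Exp > 0 )
    {
        if( Exp & 1 )             /* low bit set: multiply this power in */
            Result = Result * Base % Mod;
        Base = Base * Base % Mod; /* square for the next bit */
        Exp >>= 1;
    }
    return Result;
}

int
main( void )
{
    printf( "2^100 mod 5 = %ld\n", PowMod( 2, 100, 5 ) ); /* prints 1 */
    return 0;
}
_______________________________________________________________________________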
1.9 (a) Proof is by induction. The statement is clearly true for N = 1 and N = 2. Assume true for N = 1, 2, ..., k. Then Σ_{i=1}^{k+1} F_i = Σ_{i=1}^{k} F_i + F_{k+1}. By the induction hypothesis, the value of the sum on the right is F_{k+2} − 2 + F_{k+1} = F_{k+3} − 2, where the latter equality follows from the definition of the Fibonacci numbers. This proves the claim for N = k + 1, and hence for all N.

(b) As in the text, the proof is by induction. Observe that φ + 1 = φ^2. This implies that φ^{−1} + φ^{−2} = 1. For N = 1 and N = 2, the statement is true. Assume the claim is true for N = 1, 2, ..., k. By the definition,

F_{k+1} = F_k + F_{k−1}

and we can use the inductive hypothesis on the right-hand side, obtaining

F_{k+1} < φ^k + φ^{k−1} = φ^{k+1}(φ^{−1} + φ^{−2}) = φ^{k+1}

which proves the claim for N = k + 1.
Chapter 2: Algorithm Analysis
See J. Bentley, "Programming Pearls," Communications of the ACM 30 (1987), 754-757.

Note that if the second line of algorithm 3 is replaced with the statement

Swap( A[i], A[ RandInt( 0, N-1 ) ] );

then not all permutations are equally likely. To see this, notice that for N = 3, there are 27 equally likely ways of performing the three swaps, depending on the three random integers. Since there are only 6 permutations, and 6 does not evenly divide 27, each permutation cannot possibly be equally represented.
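Not from the manual, a sketch contrasting the two loops; RandInt and Swap are the helpers the exercise assumes (a rand()-based RandInt with its slight modulo bias is used here only for brevity):

_______________________________________________________________________________
#include <stdlib.h>

static int  RandInt( int Lower, int Upper )
    { return Lower + rand( ) % ( Upper - Lower + 1 ); }
static void Swap( int *X, int *Y )
    { int T = *X; *X = *Y; *Y = T; }

/* Unbiased (algorithm 3 as given in the text): position i swaps only
   within [0, i], so there are exactly N! equally likely outcomes. */
void
Shuffle( int A[ ], int N )
{
    int i;
    for( i = 1; i < N; i++ )
        Swap( &A[ i ], &A[ RandInt( 0, i ) ] );
}

/* Biased variant discussed above: N^N equally likely outcomes cannot
   map uniformly onto N! permutations when N! does not divide N^N. */
void
BiasedShuffle( int A[ ], int N )
{
    int i;
    for( i = 0; i < N; i++ )
        Swap( &A[ i ], &A[ RandInt( 0, N - 1 ) ] );
}
_______________________________________________________________________________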
(b) For the first algorithm, the time to decide if a random number to be placed in A[i] has not been used earlier is O(i). The expected number of random numbers that need to be tried is N/(N − i). This is obtained as follows: i of the N numbers would be duplicates. Thus the probability of success is (N − i)/N. Thus the expected number of independent trials is N/(N − i). The time bound is thus

Σ_{i=0}^{N−1} Ni/(N − i) < Σ_{i=0}^{N−1} N^2/(N − i) < N^2 Σ_{i=0}^{N−1} 1/(N − i) = N^2 Σ_{j=1}^{N} 1/j = O(N^2 log N)

The second algorithm saves a factor of i for each random number, and thus reduces the time bound to O(N log N) on average. The third algorithm is clearly linear.

(c, d) The running times should agree with the preceding analysis if the machine has enough memory. If not, the third algorithm will not seem linear because of a drastic increase for large N.

(e) The worst-case running time of algorithms I and II cannot be bounded because there is always a finite probability that the program will not terminate by some given time T. The algorithm does, however, terminate with probability 1. The worst-case running time of the third algorithm is linear: its running time does not depend on the sequence of random numbers.
2.8 Algorithm 1 would take about 5 days for N = 10,000, 14.2 years for N = 100,000, and 140 centuries for N = 1,000,000. Algorithm 2 would take about 3 hours for N = 100,000 and about 2 weeks for N = 1,000,000. Algorithm 3 would use 1½ minutes for N = 1,000,000. These calculations assume a machine with enough memory to hold the array. Algorithm 4 solves a problem of size 1,000,000 in 3 seconds.
2.9 (a) O(N^2).
(b) O(N log N).

2.10 (c) The algorithm is linear.

2.11 Use a variation of binary search to get an O(log N) solution (assuming the array is preread).

2.13 (a) Test to see if N is an odd number (or 2) and is not divisible by 3, 5, 7, ..., √N.
(b) O(√N), assuming that all divisions count for one unit of time.
(c) B = O(log N).
(d) O(2^{B/2}).
(e) If a 20-bit number can be tested in time T, then a 40-bit number would require about T^2 time.
(f) B is the better measure because it more accurately represents the size of the input.
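Not from the manual, a sketch of the test in part (a); the loop bound i*i <= N is the √N cutoff:

_______________________________________________________________________________
/* Trial division: O(sqrt(N)) divisions, i.e., O(2^(B/2)) in the number
   of input bits B, matching parts (b)-(d). */
int
IsPrime( long N )
{
    long i;

    if( N == 2 )
        return 1;
    if( N < 2 || N % 2 == 0 )
        return 0;
    for( i = 3; i * i <= N; i += 2 )  /* test 3, 5, 7, ..., sqrt(N) */
        if( N % i == 0 )
            return 0;
    return 1;
}
_______________________________________________________________________________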
2.14 The running time is proportional to N times the sum of the reciprocals of the primes less than N. This is O(N log log N). See Knuth, Volume 2, page 394.

2.15 Compute X^2, X^4, X^8, X^10, X^20, X^40, X^60, and X^62.

2.16 Maintain an array PowersOfX that can be filled in a for loop. The array will contain X, X^2, X^4, up to X^{2^{⌊log N⌋}}. The binary representation of N (which can be obtained by testing even or odd and then dividing by 2, until all bits are examined) can be used to multiply the appropriate entries of the array.

2.17 For N = 0 or N = 1, the number of multiplies is zero. If b(N) is the number of ones in the binary representation of N, then if N > 1, the number of multiplies used is ⌊log N⌋ + b(N) − 1.
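Not from the manual, a sketch of the scheme in 2.16; it performs ⌊log N⌋ squarings to fill the table plus one multiply per one bit of N, matching the count in 2.17 (overflow is ignored for brevity):

_______________________________________________________________________________
/* Compute X^N using a table of X^(2^i), as outlined in 2.16. */
long
Power( long X, unsigned long N )
{
    long PowersOfX[ 64 ];   /* PowersOfX[i] = X^(2^i) */
    long Result = 1;
    int  i, Bits;

    if( N == 0 )
        return 1;
    PowersOfX[ 0 ] = X;
    for( i = 1; ( 1UL << i ) <= N; i++ )   /* fill up to X^(2^floor(log N)) */
        PowersOfX[ i ] = PowersOfX[ i - 1 ] * PowersOfX[ i - 1 ];
    Bits = i;
    for( i = 0; i < Bits; i++ )            /* one multiply per 1 bit of N */
        if( N & ( 1UL << i ) )
            Result *= PowersOfX[ i ];
    return Result;
}
_______________________________________________________________________________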
Chapter 3: Lists, Stacks, and Queues
3.2 The comments for Exercise 3.4 regarding the amount of abstractness used apply here. The running time of the procedure in Fig. 3.1 is O(L + P).
_______________________________________________________________________________
void
PrintLots( List L, List P )
{
int Counter;
Position Lpos, Ppos;
Lpos = First( L );
Ppos = First( P );
Counter = 1;
while( Lpos != NULL && Ppos != NULL )
{
if( Ppos->Element == Counter++ )
{
printf( "%? ", Lpos->Element ); /* %? is a placeholder: use the conversion for ElementType */
Ppos = Next( Ppos, P );
}
Lpos = Next( Lpos, L );
}
}
Fig. 3.1.
_______________________________________________________________________________
3.3 (a) For singly linked lists, the code is shown in Fig. 3.2.
_______________________________________________________________________________
/* BeforeP is the cell before the two adjacent cells that are to be swapped. */
/* Error checks are omitted for clarity. */
void
SwapWithNext( Position BeforeP, List L )
{
Position P, AfterP;
P = BeforeP->Next;
AfterP = P->Next; /* Both P and AfterP assumed not NULL. */
P->Next = AfterP->Next;
BeforeP->Next = AfterP;
AfterP->Next = P;
}
Fig. 3.2.
_______________________________________________________________________________
(b) For doubly linked lists, the code is shown in Fig. 3.3.
_______________________________________________________________________________
void
SwapWithNext( Position P, List L )
{
Position BeforeP, AfterP;
BeforeP = P->Prev;
AfterP = P->Next;
P->Next = AfterP->Next;
BeforeP->Next = AfterP;
AfterP->Next = P;
P->Next->Prev = P;
P->Prev = AfterP;
AfterP->Prev = BeforeP;
}
Fig. 3.3.
_______________________________________________________________________________
_______________________________________________________________________________
List
Intersect( List L1, List L2 )
{
List Result;
Position L1Pos, L2Pos, ResultPos;
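    /* The body of this figure was lost in extraction. A sketch of one
       completion, assuming both input lists are sorted and the chapter's
       list primitives (MakeEmpty, Header, First, Insert) are available. */
    Result = MakeEmpty( NULL );
    ResultPos = Header( Result );
    L1Pos = First( L1 );
    L2Pos = First( L2 );
    while( L1Pos != NULL && L2Pos != NULL )
    {
        if( L1Pos->Element < L2Pos->Element )
            L1Pos = L1Pos->Next;
        else if( L1Pos->Element > L2Pos->Element )
            L2Pos = L2Pos->Next;
        else
        {
            /* In both lists: copy into the result. */
            Insert( L1Pos->Element, Result, ResultPos );
            ResultPos = ResultPos->Next;
            L1Pos = L1Pos->Next;
            L2Pos = L2Pos->Next;
        }
    }
    return Result;
}
_______________________________________________________________________________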
3.7 (a) One algorithm is to keep the result in a sorted (by exponent) linked list. Each of the MN multiplies requires a search of the linked list for duplicates. Since the size of the linked list is O(MN), the total running time is O(M^2 N^2).

(b) The bound can be improved by multiplying one term by the entire other polynomial, and then using the equivalent of the procedure in Exercise 3.2 to insert the entire sequence. Then each sequence takes O(MN), but there are only M of them, giving a time bound of O(M^2 N).

(c) An O(MN log MN) solution is possible by computing all MN pairs and then sorting by exponent using any algorithm in Chapter 7. It is then easy to merge duplicates afterward.

(d) The choice of algorithm depends on the relative values of M and N. If they are close, then the solution in part (c) is better. If one polynomial is very small, then the solution in part (b) is better.
_______________________________________________________________________________
List
Union( List L1, List L2 )
{
List Result;
ElementType InsertElement;
Position L1Pos, L2Pos, ResultPos;
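    /* The body of this figure was lost in extraction. A sketch of one
       completion, assuming both input lists are sorted and the chapter's
       list primitives (MakeEmpty, Header, First, Insert) are available. */
    Result = MakeEmpty( NULL );
    ResultPos = Header( Result );
    L1Pos = First( L1 );
    L2Pos = First( L2 );
    while( L1Pos != NULL && L2Pos != NULL )
    {
        if( L1Pos->Element < L2Pos->Element )
        {
            InsertElement = L1Pos->Element;
            L1Pos = L1Pos->Next;
        }
        else if( L1Pos->Element > L2Pos->Element )
        {
            InsertElement = L2Pos->Element;
            L2Pos = L2Pos->Next;
        }
        else    /* in both lists; take one copy */
        {
            InsertElement = L1Pos->Element;
            L1Pos = L1Pos->Next;
            L2Pos = L2Pos->Next;
        }
        Insert( InsertElement, Result, ResultPos );
        ResultPos = ResultPos->Next;
    }
    for( ; L1Pos != NULL; L1Pos = L1Pos->Next )    /* append leftovers */
    {
        Insert( L1Pos->Element, Result, ResultPos );
        ResultPos = ResultPos->Next;
    }
    for( ; L2Pos != NULL; L2Pos = L2Pos->Next )
    {
        Insert( L2Pos->Element, Result, ResultPos );
        ResultPos = ResultPos->Next;
    }
    return Result;
}
_______________________________________________________________________________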
small, a standard method that uses O(P) multiplies instead of O(log P) might be better because the multiplies would involve a large number with a small number, which is good for the multiplication routine in part (b).

3.10 This is a standard programming project. The algorithm can be sped up by setting M′ = M mod N, so that the hot potato never goes around the circle more than once, and then if M′ > N/2, passing the potato appropriately in the alternative direction. This requires a doubly linked list. The worst-case running time is clearly O(N min(M, N)), although when these heuristics are used, and M and N are comparable, the algorithm might be significantly faster in practice.

compiler's memory management routines do poorly with the particular pattern of frees.
3.12 Reversal of a singly linked list can be done nonrecursively by using a stack, but this requires O(N) extra space. The solution in Fig. 3.5 is similar to strategies employed in garbage collection algorithms. At the top of the while loop, the list from the start to PreviousPos is already reversed, whereas the rest of the list, from CurrentPos to the end, is normal.
_______________________________________________________________________________
List
ReverseList( List L )
{
Position CurrentPos, NextPos, PreviousPos;
PreviousPos = NULL;
CurrentPos = L;
NextPos = L->Next;
while( NextPos != NULL )
{
CurrentPos->Next = PreviousPos;
PreviousPos = CurrentPos;
CurrentPos = NextPos;
NextPos = NextPos->Next;
}
CurrentPos->Next = PreviousPos;
return CurrentPos;
}
Fig. 3.5.
_______________________________________________________________________________
deleted from a list of size N, hence O(N^2) is spent performing deletes. The remainder of the running time is smaller.

(d) O(N^2).
_______________________________________________________________________________
/* Array version; the tail of this figure was lost, so the return and
   closing brace are reconstructed. */
Position
Find( ElementType X, List L )
{
    int i, Where;

    Where = 0;
    for( i = 1; i < L.SizeOfList; i++ )
        if( X == L[i].Element )
        {
            Where = i;
            break;
        }
    return Where;
}
_______________________________________________________________________________
(a) Two stacks are used. One, which we'll call S, is used to keep track of the Push and Pop operations, and the other, M, keeps track of the minimum. To implement Push(X,E), we perform Push(X,S). If X is smaller than or equal to the top element in stack M, then we also perform Push(X,M). To implement Pop(E), we perform Pop(S). If X is equal to the top element in stack M, then we also Pop(M). FindMin(E) is performed by examining the top of M. All these operations are clearly O(1).

(b) This result follows from a theorem in Chapter 7 that shows that sorting must take Ω(N log N) time. O(N) operations in the repertoire, including DeleteMin, would be sufficient to sort.
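Not part of the manual, a sketch of the scheme just described, using the stack primitives of Chapter 3 (Push, Pop, Top, TopAndPop, IsEmpty):

_______________________________________________________________________________
typedef struct
{
    Stack S;    /* all elements */
    Stack M;    /* nonincreasing sequence of minima */
} MinStack;

void
PushE( ElementType X, MinStack *E )
{
    Push( X, E->S );
    if( IsEmpty( E->M ) || X <= Top( E->M ) )
        Push( X, E->M );        /* X is the new (possibly tied) minimum */
}

ElementType
PopE( MinStack *E )
{
    ElementType X = TopAndPop( E->S );
    if( X == Top( E->M ) )
        Pop( E->M );            /* the minimum is leaving */
    return X;
}

ElementType
FindMin( MinStack *E )
{
    return Top( E->M );         /* O(1) */
}
_______________________________________________________________________________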
_______________________________________________________________________________
/* Assuming a header. */
Position
Find( ElementType X, List L )
{
Position PrevPos, XPos;
PrevPos = FindPrevious( X, L );
if( PrevPos->Next != NULL ) /* Found. */
{
XPos = PrevPos->Next;
PrevPos->Next = XPos->Next;
XPos->Next = L->Next;
L->Next = XPos;
return XPos;
}
else
return NULL;
}
Fig. 3.7.
_______________________________________________________________________________
3.23 Three stacks can be implemented by having one grow from the bottom up, another from the
top down, and a third somewhere in the middle growing in some (arbitrary) direction. If the
third stack collides with either of the other two, it needs to be moved. A reasonable strategy
is to move it so that its center (at the time of the move) is halfway between the tops of the
other two stacks.
3.24 Stack space will not run out because only 49 calls will be stacked. However, the running
time is exponential, as shown in Chapter 2, and thus the routine will not terminate in a rea-
sonable amount of time.
3.25 The queue data structure consists of pointers Q->Front and Q->Rear, which point to the beginning and end of a linked list. The programming details are left as an exercise because it is a likely programming assignment.

3.26 (a) This is a straightforward modification of the queue routines. It is also a likely programming assignment, so we do not provide a solution.
Chapter 4: Trees
4.1 (a) A.
(b) G, H, I, L, M, and K.

4.2 (a) A.
(b) D and E.
(c) C.
(d) 1.
(e) 3.

4.3 4.
4.4 There are N nodes. Each node has two pointers, so there are 2N pointers. Each node but the root has one incoming pointer from its parent, which accounts for N − 1 pointers. The rest of the pointers are NULL, so the number of NULL pointers is 2N − (N − 1) = N + 1.

4.5 Proof is by induction. The theorem is trivially true for H = 0. Assume true for H = 1, 2, ..., k. A tree of height k+1 can have two subtrees of height at most k. These can have at most 2^{k+1} − 1 nodes each by the induction hypothesis. These 2^{k+2} − 2 nodes plus the root prove the theorem for height k+1 and hence for all heights.

4.6 This can be shown by induction. Alternatively, let N = number of nodes, F = number of full nodes, L = number of leaves, and H = number of half nodes (nodes with one child). Counting edges gives N − 1 = 2F + H, and counting nodes gives N = F + H + L; subtracting yields L − F = 1.
4.7 This can be shown by induction. In a tree with no nodes, the sum is zero, and in a one-node tree, the root is a leaf at depth zero, so the claim is true. Suppose the theorem is true for all trees with at most k nodes. Consider any tree with k+1 nodes. Such a tree consists of an i node left subtree and a k − i node right subtree. By the inductive hypothesis, the sum for the left subtree leaves is at most one with respect to the left tree root. Because all leaves are one deeper with respect to the original tree than with respect to the subtree, the sum is at most 1/2 with respect to the root. Similar logic implies that the sum for leaves in the right subtree is at most 1/2, proving the theorem. The equality is true if and only if there are no nodes with one child. If there is a node with one child, the equality cannot be true because adding the second child would increase the sum to higher than 1. If no nodes have one child, then we can find and remove two sibling leaves, creating a new tree. It is easy to see that this new tree has the same sum as the old. Applying this step repeatedly, we arrive at a single node, whose sum is 1. Thus the original tree had sum 1.
4.8 (a) - * * a b + c d e.
(b) ( ( a * b ) * ( c + d ) ) - e.
(c) a b * c d + * e -.
4.9 (Figure: the two resulting binary search trees; the diagrams do not survive the text extraction.)
4.11 This problem is not much different from the linked list cursor implementation. We maintain an array of records consisting of an element field and two integers, left and right. The free list can be maintained by linking through the left field. It is easy to write the CursorNew and CursorDispose routines, and substitute them for malloc and free.
4.12 (a) Keep a bit array B. If i is in the tree, then B[i] is true; otherwise, it is false. Repeatedly generate random integers until an unused one is found. If there are N elements already in the tree, then M − N are not, and the probability of finding one of these is (M − N)/M. Thus the expected number of independent trials is M/(M − N).

(b) To find an element that is in the tree, repeatedly generate random integers until an already-used integer is found. The probability of finding one is N/M, so the expected number of trials is M/N = α.

(c) The total cost for one insert and one delete is α/(α − 1) + α = 1 + α + 1/(α − 1). Setting α = 2 minimizes this cost.
4.15 (a) N(0) = 1, N(1) = 2, N(H) = N(H−1) + N(H−2) + 1.
(b) The values N(H) are one less than the Fibonacci numbers.
4.16 (Figure: the resulting tree; the diagram does not survive the text extraction.)
4.17 It is easy to verify by hand that the claim is true for 1 ≤ k ≤ 3. Suppose it is true for k = 1, 2, 3, ..., H. Then after the first 2^H − 1 insertions, 2^{H−1} is at the root, and the right subtree is a perfectly balanced tree containing 2^{H−1} + 1 through 2^H − 1. Each of the next 2^{H−1} insertions, namely, 2^H through 2^H + 2^{H−1} − 1, inserts a new maximum and is placed in the right subtree, eventually forming a perfectly balanced right subtree of height H−1. This follows by the induction hypothesis because the right subtree may be viewed as being formed from the successive insertion of 2^{H−1} + 1 through 2^H + 2^{H−1} − 1. The next insertion forces an imbalance at the root, and thus a single rotation. It is easy to check that this brings 2^H to the root and creates a perfectly balanced left subtree of height H−1. The new key is attached to a perfectly balanced right subtree of height H−2 as the last node in the right path. Thus the right subtree is exactly as if the nodes 2^H + 1 through 2^H + 2^{H−1} were inserted in order into an initially empty tree; by the induction hypothesis, the remaining insertions eventually produce a perfectly balanced right subtree of height H−1. Thus after the last insertion, both the left and the right subtrees are perfectly balanced, and of the same height, so the entire tree of 2^{H+1} − 1 nodes is perfectly balanced (and has height H).
4.18 The two remaining functions are mirror images of the text procedures. Just switch Right and Left.

4.20 After applying the standard binary search tree deletion algorithm, nodes on the deletion path need to have their balance changed, and rotations may need to be performed. Unlike insertion, more than one node may need rotation.

4.21 (a) O(log log N).
4.22
_______________________________________________________________________________
Position
DoubleRotateWithLeft( Position K3 )
{
Position K1, K2;
K1 = K3->Left;
K2 = K1->Right;
K1->Right = K2->Left;
K3->Left = K2->Right;
K2->Left = K1;
K2->Right = K3;
K1->Height = Max( Height(K1->Left), Height(K1->Right) ) + 1;
K3->Height = Max( Height(K3->Left), Height(K3->Right) ) + 1;
K2->Height = Max( K1->Height, K3->Height ) + 1;
return K2;  /* K2 is the new root */
}
_______________________________________________________________________________
4.23 (Figures: the splay trees after accessing 3, then 9, then 1, then 5; the tree diagrams do not survive the text extraction.)
4.24 (Figure: the resulting tree; the diagram does not survive the text extraction.)
int
CountNodes( BinaryTree T )
{
if( T == NULL )
return 0;
return 1 + CountNodes(T->Left) + CountNodes(T->Right);
}
int
CountLeaves( BinaryTree T )
{
if( T == NULL )
return 0;
else if( T->Left == NULL && T->Right == NULL )
return 1;
return CountLeaves(T->Left) + CountLeaves(T->Right);
}
_______________________________________________________________________________
_______________________________________________________________________________
int
CountFull( BinaryTree T )
{
if( T == NULL )
return 0;
return ( T->Left != NULL && T->Right != NULL ) +
CountFull(T->Left) + CountFull(T->Right);
}
_______________________________________________________________________________
4.29 We assume the existence of a function RandInt(Lower,Upper), which generates a uniform random integer in the closed interval [Lower, Upper].
_______________________________________________________________________________
SearchTree
MakeRandomTree1( int Lower, int Upper )
{
SearchTree T;
int RandomValue;
T = NULL;
if( Lower <= Upper )
{
T = malloc( sizeof( struct TreeNode ) );
if( T != NULL )
{
T->Element = RandomValue = RandInt( Lower, Upper );
T->Left = MakeRandomTree1( Lower, RandomValue - 1 );
T->Right = MakeRandomTree1( RandomValue + 1, Upper );
}
else
FatalError( "Out of space!" );
}
return T;
}
SearchTree
MakeRandomTree( int N )
{
return MakeRandomTree1( 1, N );
}
_______________________________________________________________________________
4.30
_______________________________________________________________________________
/* LastNode is the address containing the last value that was assigned to a node. */
SearchTree
GenTree( int Height, int *LastNode )
{
SearchTree T;
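    /* The body of this figure was lost in extraction; what follows is a
       sketch. A minimum AVL tree of height H has a minimum AVL left
       subtree of height H-1 and a right subtree of height H-2, filled
       in inorder so that the result is a search tree. */
    T = NULL;
    if( Height >= 0 )
    {
        T = malloc( sizeof( struct TreeNode ) );
        if( T == NULL )
            FatalError( "Out of space!" );
        T->Left = GenTree( Height - 1, LastNode );
        T->Element = ++*LastNode;
        T->Right = GenTree( Height - 2, LastNode );
    }
    return T;
}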
SearchTree
MinAvlTree( int H )
{
int LastNodeAssigned = 0;
return GenTree( H, &LastNodeAssigned );
}
_______________________________________________________________________________
4.31 There are two obvious ways of solving this problem. One way mimics Exercise 4.29 by replacing RandInt(Lower,Upper) with (Lower+Upper)/2. This requires computing 2^{H+1} − 1, which is not that difficult. The other mimics the previous exercise by noting that the heights of the subtrees are both H−1. The solution follows:
_______________________________________________________________________________
/* LastNode is the address containing last value that was assigned to a node. */
SearchTree
GenTree( int Height, int *LastNode )
{
SearchTree T = NULL;
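    /* The body of this figure was lost in extraction; what follows is a
       sketch. A perfect tree of height H has two perfect subtrees of
       height H-1, filled in inorder. */
    if( Height >= 0 )
    {
        T = malloc( sizeof( struct TreeNode ) );
        if( T == NULL )
            FatalError( "Out of space!" );
        T->Left = GenTree( Height - 1, LastNode );
        T->Element = ++*LastNode;
        T->Right = GenTree( Height - 1, LastNode );
    }
    return T;
}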
SearchTree
PerfectTree( int H )
{
int LastNodeAssigned = 0;
return GenTree( H, &LastNodeAssigned );
}
_______________________________________________________________________________
4.32 This is known as one-dimensional range searching. The time is O(K) to perform the inorder traversal, if a significant number of nodes are found, and also proportional to the depth of the tree, if we get to some leaves (for instance, if no nodes are found). Since the average depth is O(log N), this gives an O(K + log N) average bound.
_______________________________________________________________________________
void
PrintRange( ElementType Lower, ElementType Upper, SearchTree T )
{
if( T != NULL )
{
if( Lower <= T->Element )
PrintRange( Lower, Upper, T->Left );
if( Lower <= T->Element && T->Element <= Upper )
PrintLine( T->Element );
if( T->Element <= Upper )
PrintRange( Lower, Upper, T->Right );
}
}
_______________________________________________________________________________
4.33 This exercise and Exercise 4.34 are likely programming assignments, so we do not provide code here.

4.35 Put the root on an empty queue. Then repeatedly Dequeue a node and Enqueue its left and right children (if any) until the queue is empty. This is O(N) because each queue operation is constant time and each node is enqueued and dequeued exactly once.
4.36 (a)
6:-
2:4 8:-
0,1   2,3   4,5   6,7   8,9
(b)
4:6
1,2,3   4,5   6,7,8
4.39 (Figure: the resulting tree; the surviving levels read B, C, G; D, E, F, N; H, I, J, K, L, M; O, P, Q, R. The links and the root row do not survive the text extraction.)
4.41 The function shown here is clearly a linear time routine because in the worst case it does a traversal on both T1 and T2.
_______________________________________________________________________________
int
Similar( BinaryTree T1, BinaryTree T2 )
{
if( T1 == NULL || T2 == NULL )
return T1 == NULL && T2 == NULL;
return Similar( T1->Left, T2->Left ) && Similar( T1->Right, T2->Right );
}
_______________________________________________________________________________
4.43 The easiest solution is to compute, in linear time, the inorder numbers of the nodes in both trees. If the inorder number of the root of T2 is x, then find x in T1 and rotate it to the root. Recursively apply this strategy to the left and right subtrees of T1 (by looking at the values in the root of T2's left and right subtrees). If d_N is the depth of x, then the running time satisfies T(N) = T(i) + T(N−i−1) + d_N, where i is the size of the left subtree. In the worst case, d_N is always O(N), and the running time is quadratic. Under the plausible assumption that all values of i are equally likely, then even if d_N is always O(N), the running time is O(N log N): the resulting recurrence was already formulated in the chapter and is solved in Chapter 7. Under the more reasonable assumption that d_N is typically logarithmic, then the running time is O(N).
4.44 Add a field to each node indicating the size of the tree it roots. This allows computation of its inorder traversal number.

4.45 (a) You need an extra bit for each thread.
(c) You can do tree traversals somewhat easier and without recursion. The disadvantage is that it reeks of old-style hacking.
Chapter 5: Hashing
5.1 (a) On the assumption that we add collisions to the end of the list (which is the easier way if
a hash table is being built by hand), the separate chaining hash table that results is shown
here.
0
1 4371
2
3 1323 6173
4 4344
5
6
7
8
9 4199 9679 1989
(b)
0 9679
1 4371
2 1989
3 1323
4 6173
5 4344
6
7
8
9 4199
(c)
0 9679
1 4371
2
3 1323
4 6173
5 4344
6
7
8 1989
9 4199
(d) 1989 cannot be inserted into the table because hash2(1989) = 6, and the alternative locations 5, 1, 7, and 3 are already taken (after which the probe sequence repeats). The table at this point is as follows:
0
1 4371
2
3 1323
4 6173
5 9679
6
7 4344
8
9 4199
5.2 When rehashing, we choose a table size that is roughly twice as large and prime. In our case, the appropriate new table size is 19, with hash function h(x) = x mod 19.
(a) Scanning down the separate chaining hash table, the new locations are 4371 in list 1,
1323 in list 12, 6173 in list 17, 4344 in list 12, 4199 in list 0, 9679 in list 8, and 1989 in list
13.
(b) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in
bucket 12, 6173 in bucket 17, 4344 in bucket 14 because both 12 and 13 are already occu-
pied, and 4199 in bucket 0.
(c) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in
bucket 12, 6173 in bucket 17, 4344 in bucket 16 because both 12 and 13 are already occu-
pied, and 4199 in bucket 0.
(d) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in
bucket 12, 6173 in bucket 17, 4344 in bucket 15 because 12 is already occupied, and 4199
in bucket 0.
5.4 We must be careful not to rehash too often. Let p be the threshold (fraction of table size) at which we rehash to a smaller table. Then if the new table has size N, it contains 2pN elements. This table will require rehashing after either 2N − 2pN insertions or pN deletions. Balancing these costs suggests that a good choice is p = 2/3. For instance, suppose we have a table of size 300. If we rehash at 200 elements, then the new table size is N = 150, and we can do either 100 insertions or 100 deletions until a new rehash is required.

If we know that insertions are more frequent than deletions, then we might choose p to be somewhat larger. If p is too close to 1.0, however, then a sequence of a small number of deletions followed by insertions can cause frequent rehashing. In the worst case, if p = 1.0, then alternating a deletion with an insertion forces a rehash on nearly every operation.
using a standard sorting algorithm. If terms are merged by using a hash function, then the merging time is constant per term for a total of O(MN). If the output polynomial is small and has only O(M + N) terms, then it is easy to sort it in O((M + N) log(M + N)) time, which is less than O(MN). Thus the total is O(MN). This bound is better because the model is less restrictive: hashing is performing operations on the keys rather than just comparison between the keys. A similar bound can be obtained by using bucket sort instead of a standard sorting algorithm. Operations such as hashing are much more expensive than comparisons in practice, so this bound might not be an improvement. On the other hand, if the output polynomial is expected to have only O(M + N) terms, then using a hash table saves a huge amount of space, since under these conditions, the hash table needs only O(M + N) space.
Another method of implementing these operations is to use a search tree instead of a hash
table; a balanced tree is required because elements are inserted in the tree with too much
order. A splay tree might be particularly well suited for this type of a problem because it
does well with sequential accesses. Comparing the different ways of solving the problem is
a good programming assignment.
5.8 The table size would be roughly 60,000 entries. Each entry holds 8 bytes, for a total of 480,000 bytes.

5.9 (a) This statement is true.
(b) If a word hashes to a location with value 1, there is no guarantee that the word is in the dictionary. It is possible that it just hashes to the same value as some other word in the dictionary. In our case, the table is approximately 10% full (30,000 words in a table of 300,007), so there is a 10% chance that a word that is not in the dictionary happens to hash out to a location with value 1.
(c) 300,007 bits is 37,501 bytes on most machines.
(d) As discussed in part (b), the algorithm will fail to detect one in ten misspellings on average.
(e) A 20-page document would have about 60 misspellings. This algorithm would be expected to detect 54. A table three times as large would still fit in about 100K bytes and reduce the expected number of errors to two. This is good enough for many applications, especially since spelling detection is a very inexact science. Many misspelled words (especially short ones) are still words. For instance, typing them instead of then is a misspelling that no dictionary-based scheme will detect.
To avoid initializing the hash table, keep an extra stack. When an insertion is first performed into a slot, we push the address (or number) of the slot onto the stack and set the WhereOnStack field to point to the top of the stack. When we access a hash table slot, we check that WhereOnStack points to a valid part of the stack and that the entry in the (middle of the) stack that is pointed to by the WhereOnStack field has that hash table slot as an address.
5.14 (The answer is a figure in the original and does not survive the text extraction.)
Chapter 6: Priority Queues (Heaps)
6.1 Yes. When an element is inserted, we compare it to the current minimum and change the minimum if the new element is smaller. DeleteMin operations are expensive in this scheme.
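Not part of the manual, a sketch of that bookkeeping; the underlying insertion is whatever O(1) operation the structure already supports:

_______________________________________________________________________________
typedef struct
{
    ElementType Min;   /* valid only when Size > 0 */
    int Size;
    /* ... the rest of the structure ... */
} MinTracked;

void
InsertTracked( ElementType X, MinTracked *H )
{
    /* O(1) extra work on top of the ordinary insertion. */
    if( H->Size == 0 || X < H->Min )
        H->Min = X;
    H->Size++;
    /* Insert X into the underlying structure here. DeleteMin must then
       scan all N remaining elements to restore Min, so it is O(N). */
}
_______________________________________________________________________________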
6.2 (Figures: the resulting heaps. In level order, inserting one element at a time gives 1; 3, 2; 6, 7, 5, 4; 15, 14, 12, 9, 10, 11, 13, 8. BuildHeap gives 1; 3, 2; 12, 6, 4, 8; 15, 14, 9, 7, 5, 11, 13, 10.)
6.3 The result of three DeleteMins, starting with both of the heaps in Exercise 6.2, is as follows (in level order): 4; 6, 5; 13, 7, 10, 8; 15, 14, 12, 9, 11 for the first heap, and 4; 6, 5; 12, 7, 10, 8; 15, 14, 9, 13, 11 for the second.
6.4 (The answer is a figure and does not survive the text extraction.)
6.5 These are simple modifications to the code presented in the text and meant as programming
exercises.
6.6 225. To see this, start with i = 1 and position at the root. Follow the path toward the last node, doubling i when taking a left child, and doubling i and adding one when taking a right child.
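Not from the manual, a sketch of that computation; the path string is a hypothetical encoding with 'L' for a left child and 'R' for a right child:

_______________________________________________________________________________
int
HeapIndexOfPath( const char *Path )
{
    int i = 1;                 /* start at the root */

    for( ; *Path != '\0'; Path++ )
        if( *Path == 'L' )
            i = 2 * i;         /* left child */
        else
            i = 2 * i + 1;     /* right child */
    return i;
}
/* For instance, the path "RRLLLLR" gives 1, 3, 7, 14, 28, 56, 112, 225,
   spelling out 225 = 11100001 in binary. */
_______________________________________________________________________________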
6.7 (a) We show that H(N), which is the sum of the heights of nodes in a complete binary tree of N nodes, is N − b(N), where b(N) is the number of ones in the binary representation of N. Observe that for N = 0 and N = 1, the claim is true. Assume that it is true for values of k up to and including N−1. Suppose the left and right subtrees have L and R nodes, respectively. Since the root has height ⌊log N⌋, we have

H(N) = ⌊log N⌋ + H(L) + H(R)
     = ⌊log N⌋ + L − b(L) + R − b(R)
     = N − 1 + (⌊log N⌋ − b(L) − b(R))

The second line follows from the inductive hypothesis, and the third follows because L + R = N − 1. Now the last node in the tree is in either the left subtree or the right subtree. If it is in the left subtree, then the right subtree is a perfect tree, and b(R) = ⌊log N⌋ − 1. Further, the binary representations of N and L are identical, with the exception that the leading 10 in N becomes 1 in L. (For instance, if N = 37 = 100101, L = 10101.) It is clear that the second digit of N must be zero if the last node is in the left subtree. Thus in this case, b(L) = b(N), and

H(N) = N − b(N)

If the last node is in the right subtree, then b(L) = ⌊log N⌋. The binary representation of R is identical to that of N, except that the leading 1 is not present. (For instance, if N = 27 = 11011, R = 1011.) Thus b(R) = b(N) − 1, and again

H(N) = N − b(N)

(b) Run a single-elimination tournament among eight elements. This requires seven comparisons and generates ordering information indicated by the binomial tree shown here.

(Figure: the binomial tree of ordering information; the diagram does not survive the text extraction.)

The eighth comparison is between b and c. If c is less than b, then b is made a child of c. Otherwise, both c and d are made children of b.

(c) A recursive strategy is used. Assume that N = 2^k. A binomial tree is built for the N elements as in part (b). The largest subtree of the root is then recursively converted into a binary heap of 2^{k−1} elements. The last element in the heap (which is the only one on an extra level) is then inserted into the binomial queue consisting of the remaining binomial trees, thus forming another binomial tree of 2^{k−1} elements. At that point, the root has a subtree that is a heap of 2^{k−1} − 1 elements and another subtree that is a binomial tree of 2^{k−1} elements. Recursively convert that subtree into a heap; now the whole structure is a binary heap. The running time for N = 2^k satisfies T(N) = 2T(N/2) + log N. The base case is T(8) = 8.
6.8 Let D_1, D_2, ..., D_k be random variables representing the depth of the smallest, second smallest, and kth smallest elements, respectively. We are interested in calculating E(D_k). In what follows, we assume that the heap size N is one less than a power of two (that is, the bottom level is completely filled) but sufficiently large so that terms bounded by O(1/N) are negligible. Without loss of generality, we may assume that the kth smallest element is in the left subheap of the root. Let p_{j,k} be the probability that this element is the jth smallest element in the subheap.

Lemma: For k > 1, E(D_k) = Σ_{j=1}^{k−1} p_{j,k} (E(D_j) + 1).

Proof: An element that is at depth d in the left subheap is at depth d + 1 in the entire heap. Since E(D_j + 1) = E(D_j) + 1, the theorem follows.

Since by assumption, the bottom level of the heap is full, each of the second, third, ..., (k−1)th smallest elements is in the left subheap with probability 0.5. (Technically, the probability should be 1/2 − 1/(N−1) of being in the right subheap and 1/2 + 1/(N−1) of being in the left, since we have already placed the kth smallest in the left. Recall that we have assumed that terms of size O(1/N) can be ignored.) Thus

p_{j,k} = p_{k−j,k} = (1/2^{k−2}) C(k−2, j−1)

Theorem: E(D_k) ≤ log k.

Proof: The proof is by induction. The theorem clearly holds for k = 1 and k = 2. We then show that it holds for arbitrary k > 2 on the assumption that it holds for all smaller k. Now, by the inductive hypothesis, for any 1 ≤ j ≤ k−1,

E(D_j) + E(D_{k−j}) ≤ log j + log(k−j)

Since f(x) = log x is concave for x > 0,

log j + log(k−j) ≤ 2 log(k/2)

Thus

E(D_j) + E(D_{k−j}) ≤ log(k/2) + log(k/2)

Furthermore, since p_{j,k} = p_{k−j,k},

p_{j,k} E(D_j) + p_{k−j,k} E(D_{k−j}) ≤ p_{j,k} log(k/2) + p_{k−j,k} log(k/2)

From the lemma,

E(D_k) = Σ_{j=1}^{k−1} p_{j,k} (E(D_j) + 1) = 1 + Σ_{j=1}^{k−1} p_{j,k} E(D_j)

Thus

E(D_k) ≤ 1 + Σ_{j=1}^{k−1} p_{j,k} log(k/2) ≤ 1 + log(k/2) Σ_{j=1}^{k−1} p_{j,k} ≤ 1 + log(k/2) = log k

completing the proof.
6.16 (Figure: the resulting leftist heap; the node values by level read 2; 4, 11; 9, 5, 12, 17; 18, 10, 8, 6, 18; 31, 15, 11; 21. The links do not survive the text extraction.)

6.17 (Figure: the resulting heap; the surviving rows read 2, 3; 7, 6, 5, 4; 8, 9, 10, 11, 12, 13, 14, 15. The root row was lost in the extraction.)
6.18 This theorem is true, and the proof is very much along the same lines as Exercise 4.17.

6.19 If elements are inserted in decreasing order, a leftist heap consisting of a chain of left children is formed. This is the best because the right path length is minimized.
6.20 (a) If a DecreaseKey is performed on a node that is very deep (very left), the time to percolate up would be prohibitive. Thus the obvious solution doesn't work. However, we can still do the operation efficiently by a combination of Delete and Insert. To Delete an arbitrary node x in the heap, replace x by the Merge of its left and right subheaps. This might create an imbalance for nodes on the path from x's parent to the root that would need to be fixed by a child swap. However, it is easy to show that at most log N nodes can be affected, preserving the time bound. This is discussed in Chapter 11.

6.21 Lazy deletion in leftist heaps is discussed in the paper by Cheriton and Tarjan [9]. The general idea is that if the root is marked deleted, then a preorder traversal of the heap is formed, and the frontier of marked nodes is removed, leaving a collection of heaps. These can be merged two at a time by placing all the heaps on a queue, removing two, merging them, and placing the result at the end of the queue, terminating when only one heap remains.
6.22 (a) The standard way to do this is to divide the work into passes. A new pass begins when the first element reappears in a heap that is dequeued. The first pass takes roughly 2·1·(N/2) time units because there are N/2 merges of trees with one node each on the right path. The next pass takes 2·2·(N/4) time units because of the roughly N/4 merges of trees with no more than two nodes on the right path. The third pass takes 2·3·(N/8) time units, and so on. The sum converges to 4N.

(b) It generates heaps that are more leftist.
6.23 (Figure: the resulting heap; the surviving rows read 4, 11; 5, 9, 12, 17; 6, 8, 18, 10, 18; 11, 15, 31; 21. The root row and links do not survive the text extraction.)

6.24 (Figure: the resulting heap; the surviving rows read 3, 2; 7, 5, 6, 4; 15, 11, 13, 9, 14, 10, 12, 8. The root row and links do not survive the text extraction.)
6.25 This claim is also true, and the proof is similar in spirit to Exercise 4.17 or 6.18.

6.26 Yes. All the single operation estimates in Exercise 6.22 become amortized instead of worst-case, but by the definition of amortized analysis, the sum of these estimates is a worst-case bound for the sequence.
6.27 Clearly the claim is true for k = 1. Suppose it is true for all values i = 1, 2, ..., k. A B_{k+1} tree is formed by attaching a B_k tree to the root of a B_k tree. Thus by induction, it contains a B_0 through B_{k−1} tree, as well as the newly attached B_k tree, proving the claim.

6.28 Proof is by induction. Clearly the claim is true for k = 1. Assume true for all values i = 1, 2, ..., k. A B_{k+1} tree is formed by attaching a B_k tree to the original B_k tree. The original tree thus had C(k, d) nodes at depth d. The attached tree had C(k, d−1) nodes at depth d−1, which are now at depth d. Adding these two terms and using the well-known formula C(k, d) + C(k, d−1) = C(k+1, d) establishes the theorem.
6.29 (Figure: the resulting binomial queue; the surviving rows read 4; 13, 15, 23, 12; 18, 51, 24, 21, 24, 14; 65, 65, 26, 16; 18; 11, 29; 55. The tree structure does not survive the text extraction.)
Chapter 7: Sorting
7.1
Original:    3 1 4 1 5 9 2 6 5
after P=2:   1 3 4 1 5 9 2 6 5
after P=3:   1 3 4 1 5 9 2 6 5
after P=4:   1 1 3 4 5 9 2 6 5
after P=5:   1 1 3 4 5 9 2 6 5
after P=6:   1 1 3 4 5 9 2 6 5
after P=7:   1 1 2 3 4 5 9 6 5
after P=8:   1 1 2 3 4 5 6 9 5
after P=9:   1 1 2 3 4 5 5 6 9
7.2 O(N) because the while loop terminates immediately. Of course, accidentally changing the test to include equalities raises the running time to quadratic for this type of input.

7.3 The inversion that existed between A[i] and A[i + k] is removed. This shows at least one inversion is removed. For each of the k − 1 elements A[i + 1], A[i + 2], ..., A[i + k − 1], at most two inversions can be removed by the exchange. This gives a maximum of 2(k − 1) + 1 = 2k − 1.
7.4
Original:      9 8 7 6 5 4 3 2 1
after 7-sort:  2 1 7 6 5 4 3 9 8
after 3-sort:  2 1 4 3 5 7 6 9 8
after 1-sort:  1 2 3 4 5 6 7 8 9
7.5 (a) Θ(N^2). The 2-sort removes at most only three inversions at a time; hence the algorithm is Ω(N^2). The 2-sort is two insertion sorts of size N/2, so the cost of that pass is O(N^2). The 1-sort is also O(N^2), so the total is O(N^2).

7.6 Part (a) is an extension of the theorem proved in the text. Part (b) is fairly complicated; see reference [11].

7.7 See reference [11].
7.8 Use the input specified in the hint. If the number of inversions is shown to be Ω(N^2), then the bound follows, since no increments are removed until an h_{t/2} sort. If we consider the pattern formed by h_k through h_{2k−1}, where k = t/2 + 1, we find that it has length N = h_k(h_k + 1) − 1, and the number of inversions is roughly h_k^4/24, which is Ω(N^2).

7.9 (a) O(N log N). No exchanges, but each pass takes O(N).
(b) O(N log N). It is easy to show that after an h_k sort, no element is farther than h_k from its rightful position. Thus if the increments satisfy h_{k+1} ≤ c·h_k for a constant c, which implies O(log N) increments, then the bound is O(N log N).
7.10 (a) No, because it is still possible for consecutive increments to share a common factor. An example is the sequence 1, 3, 9, 21, 45, with h_{t+1} = 2h_t + 3.
(b) Yes, because consecutive increments are relatively prime. The running time becomes O(N^{3/2}).
7.11 The input is read in as
142, 543, 123, 65, 453, 879, 572, 434, 111, 242, 811, 102
The result of the heapify is
879, 811, 572, 434, 543, 123, 142, 65, 111, 242, 453, 102
879 is removed from the heap and placed at the end (the original sets such removed entries in italics to signal that they are no longer part of the heap). 102 is placed in the hole and bubbled down, obtaining
811, 543, 572, 434, 453, 123, 142, 65, 111, 242, 102, 879
Continuing the process, we obtain
572, 543, 142, 434, 453, 123, 102, 65, 111, 242, 811, 879
543, 453, 142, 434, 242, 123, 102, 65, 111, 572, 811, 879
453, 434, 142, 111, 242, 123, 102, 65, 543, 572, 811, 879
434, 242, 142, 111, 65, 123, 102, 453, 543, 572, 811, 879
242, 111, 142, 102, 65, 123, 434, 453, 543, 572, 811, 879
142, 111, 123, 102, 65, 242, 434, 453, 543, 572, 811, 879
123, 111, 65, 102, 142, 242, 434, 453, 543, 572, 811, 879
111, 102, 65, 123, 142, 242, 434, 453, 543, 572, 811, 879
102, 65, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879
65, 102, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879
7.12 Heapsort uses at least (roughly) N log N comparisons on any input, so there are no particularly good inputs. This bound is tight; see the paper by Schaffer and Sedgewick [16]. This result applies for almost all variations of heapsort, which have different rearrangement strategies. See Y. Ding and M. A. Weiss, "Best Case Lower Bounds for Heapsort," Computing 49 (1992).
7.13 First the sequence {3, 1, 4, 1} is sorted. To do this, the sequence {3, 1} is sorted. This
involves sorting {3} and {1}, which are base cases, and merging the result to obtain {1, 3}.
The sequence {4, 1} is likewise sorted into {1, 4}. Then these two sequences are merged to
obtain {1, 1, 3, 4}. The second half is sorted similarly, eventually obtaining {2, 5, 6, 9}.
The merged result is then easily computed as {1, 1, 2, 3, 4, 5, 6, 9}.
7.14 Mergesort can be implemented nonrecursively by first merging pairs of adjacent elements,
then pairs of two elements, then pairs of four elements, and so on. This is implemented in
Fig. 7.1.
7.15 The merging step always takes Θ(N) time, so the sorting process takes Θ(N log N) time on all inputs.

7.16 See reference [11] for the exact derivation of the worst case of mergesort.
7.17 The original input is
3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5
After sorting the first, middle, and last elements, we have
3, 1, 4, 1, 5, 5, 2, 6, 5, 3, 9
Thus the pivot is 5. Hiding it gives
3, 1, 4, 1, 5, 3, 2, 6, 5, 5, 9
The first swap is between two fives. The next swap has i and j crossing, so the pivot is swapped back into place, giving
3, 1, 4, 1, 5, 3, 2, 5, 5, 6, 9
and the subfiles 3, 1, 4, 1, 5, 3, 2 and 5, 6, 9 are sorted recursively.
_______________________________________________________________________________
void
Mergesort( ElementType A[ ], int N )
{
ElementType *TmpArray;
int SubListSize, Part1Start, Part2Start, Part2End;
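    /* The remainder of Fig. 7.1 was lost in extraction. What follows is a
       sketch of a bottom-up completion, assuming the Merge routine from the
       text (which merges A[Lpos..Rpos-1] with A[Rpos..RightEnd] using
       TmpArray and copies the result back). */
    TmpArray = malloc( N * sizeof( ElementType ) );
    if( TmpArray == NULL )
        FatalError( "No space for tmp array!!!" );

    for( SubListSize = 1; SubListSize < N; SubListSize *= 2 )
    {
        Part1Start = 0;
        while( Part1Start + SubListSize < N )
        {
            Part2Start = Part1Start + SubListSize;
            Part2End = Part2Start + SubListSize - 1;
            if( Part2End > N - 1 )
                Part2End = N - 1;    /* last run may be short */
            Merge( A, TmpArray, Part1Start, Part2Start, Part2End );
            Part1Start = Part2End + 1;
        }
    }
    free( TmpArray );
}
Fig. 7.1.
_______________________________________________________________________________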
7.19 (a) If the first element is chosen as the pivot, the running time degenerates to quadratic in the first two cases. It is still O(N log N) for random input.
(b) The same results apply for this pivot choice.
(c) If a random element is chosen, then the running time is O(N log N) expected for all inputs, although there is an O(N^2) worst case if very bad random numbers come up. There is, however, an essentially negligible chance of this occurring. Chapter 10 discusses the randomized philosophy.
(d) This is a dangerous road to go; it depends on the distribution of the keys. For many distributions, such as uniform, the performance is O(N log N) on average. For a skewed distribution, such as with the input {1, 2, 4, 8, 16, 32, 64, ...}, the pivot will be consistently terrible, giving quadratic running time, independent of the ordering of the input.
7.20 (a) O(N log N) because the pivot will partition perfectly.
(b) Sentinels need to be used to guarantee that i and j don't run past the end. The running time will be Θ(N^2) since, because i won't stop until it hits the sentinel, the partitioning step will put all but the pivot in S1.
(c) Again a sentinel needs to be used to stop j. This is also Θ(N^2) because the partitioning is unbalanced.

7.21 Yes, but it doesn't reduce the average running time for random input. Using median-of-three partitioning reduces the average running time because it makes the partition more balanced on average.
7.22 The strategy used here is to force the worst possible pivot at each stage. This doesn't necessarily give the maximum amount of work (since there are few exchanges, just lots of comparisons), but it does give Ω(N^2) comparisons. By working backward, we can arrive at the following permutation:
20, 3, 5, 7, 9, 11, 13, 15, 17, 19, 4, 10, 2, 12, 6, 14, 1, 16, 8, 18
A method to extend this to larger numbers when N is even is as follows: The first element is N, the middle is N − 1, and the last is N − 2. Odd numbers (except 1) are written in decreasing order starting to the left of center. Even numbers are written in decreasing order by starting at the rightmost spot, always skipping one available empty slot, and wrapping around when the center is reached. This method takes O(N log N) time to generate the permutation, but is suitable for a hand calculation. By inverting the actions of quicksort, it is possible to generate the permutation in linear time.
7.24 This recurrence results from the analysis of the quick selection algorithm. T(N) = O(N).

7.25 Insertion sort and mergesort are stable if coded correctly. Any of the sorts can be made stable by the addition of a second key, which indicates the original position.

7.26 (d) f(N) can be O(N/log N). Sort the f(N) elements using mergesort in O(f(N) log f(N)) time. This is O(N) if f(N) is chosen using the criterion given. Then merge this sorted list with the already sorted list of N numbers in O(N + f(N)) = O(N) time.

7.27 A decision tree would have N leaves, so ⌈log N⌉ comparisons are required.

7.28 log N! ≈ N log N − N log e.
(b) The information-theoretic lower bound is log C(2N, N). Applying Stirling's formula, we can estimate the bound as 2N − (1/2) log N. A better lower bound is known for this case: 2N − 1 comparisons are necessary. Merging two lists of different sizes M and N likewise requires at least log C(M + N, N) comparisons.
7.39 We show that in a binary tree with L leaves, the average depth of a leaf is at least log L. We can prove this by induction. Clearly, the claim is true if L = 1. Suppose it is true for trees with up to L − 1 leaves. Consider a tree of L leaves with minimum average leaf depth. Clearly, the root of such a tree must have non-NULL left and right subtrees. Suppose that the left subtree has L_L leaves, and the right subtree has L_R leaves. By the inductive hypothesis, the total depth of the leaves (which is their average times their number) in the left subtree is at least L_L(1 + log L_L), and the total depth of the right subtree's leaves is at least L_R(1 + log L_R) (because the leaves in the subtrees are one deeper with respect to the root of the tree than with respect to the root of their subtree). Thus the total depth of all the leaves is at least L + L_L log L_L + L_R log L_R. Since f(x) = x log x is convex for x ≥ 1, we know that f(x) + f(y) ≥ 2f((x+y)/2). Thus, the total depth of all the leaves is at least L + 2(L/2) log(L/2) = L + L(log L − 1) = L log L. Thus the average leaf depth is at least log L.
Chapter 8: The Disjoint Set ADT
8.1 We assume that unions are performed on the roots of the trees containing the arguments. Also, in case of ties, the second tree is made a child of the first. Arbitrary union and union by height give the same answer (shown as the first tree) for this problem. Union by size gives the second tree.

(Figures: the two resulting forests; the tree diagrams do not survive the text extraction.)
8.2 In both cases, have nodes 16 and 17 point directly to the root.

8.4 Claim: A tree of height H has at least 2^H nodes. The proof is by induction. A tree of height 0 clearly has at least 1 node, and a tree of height 1 clearly has at least 2. Let T be the tree of height H with fewest nodes. Thus at the time of T's last union, it must have been a tree of height H−1, since otherwise T would have been smaller at that time than it is now and still would have been of height H, which is impossible by assumption of T's minimality. Since T's height was updated, it must have been as a result of a union with another tree of height H−1. By the induction hypothesis, we know that at the time of the union, T had at least 2^{H−1} nodes, as did the tree attached to it, for a total of 2^H nodes, proving the claim. Thus an N-node tree has depth at most ⌊log N⌋.
8.5 All answers are O(M) because in all cases α(M, N) = 1.
8.6 Assuming that the graph has only nine vertices, the union/find tree that is formed is shown in the original as a figure (it does not survive the text extraction). The edge (4,6) does not result in a union because at the time it is examined, 4 and 6 are already in the same component. The connected components are {1,2,3,4,6} and {5,7,8,9}.