Algorithms and Complexity
Second Edition
Herbert S. Wilf
A K Peters
Natick, Massachusetts
Editorial, Sales, and Customer Service Office
A K Peters, Ltd.
63 South Avenue
Natick, MA 01760
www.akpeters.com
All rights reserved. No part of the material protected by this copyright notice may be reproduced
or utilized in any form, electronic or mechanical, including photocopying, recording, or by
any information storage and retrieval system, without written permission from the
copyright owner.
Contents

Preface

1  Mathematical Preliminaries
   1.1  Orders of Magnitude
   1.2  Positional Number Systems
   1.3  Manipulations with Series
   1.4  Recurrence Relations
   1.5  Counting
   1.6  Graphs

2  Recursive Algorithms
   2.1  Introduction
   2.2  Quicksort
   2.3  Recursive Graph Algorithms
   2.4  Fast Matrix Multiplication
   2.5  The Discrete Fourier Transform
   2.6  Applications of the FFT
   2.7  A Review
   2.8  Bibliography

5  NP-Completeness
   5.1  Introduction
   5.2  Turing Machines
   5.3  Cook’s Theorem
   5.4  Some Other NP-Complete Problems
   5.5  Half a Loaf ...
   5.6  Backtracking (I): Independent Sets
   5.7  Backtracking (II): Graph Coloring
   5.8  Approximate Algorithms for Hard Problems
Preface

For the past several years, mathematics majors in the computing track at
the University of Pennsylvania have taken a course in continuous algorithms
(numerical analysis) in the junior year, and in discrete algorithms in the
senior year. This book has grown out of the senior course as I have been
teaching it recently. It has also been tried out on a large class of computer
science and mathematics majors, including seniors and graduate students,
with good results.
Selection by the instructor of topics of interest will be very important,
because I’ve found that I normally can’t cover anywhere near all of this
material in a semester. A reasonable choice for a first try might be to
begin with Chapter 2 (recursive algorithms) which contains lots of moti-
vation. Then, as new ideas are needed in Chapter 2, one might delve into
the appropriate sections of Chapter 1 to get the concepts and techniques
well in hand. After Chapter 2, Chapter 4, on number theory, discusses
material that is extremely attractive, and surprisingly pure and applicable
at the same time. Chapter 5 would be next, since the foundations would
then all be in place. Finally, material from Chapter 3, which is rather inde-
pendent of the rest of the book, but is strongly connected to combinatorial
algorithms in general, might be studied as time permits.
Throughout the book, there are opportunities to ask students to write
programs and get them running. These are not mentioned explicitly, with
a few exceptions, but will be obvious when encountered. Students should
all have the experience of writing, debugging, and using a program that is
nontrivially recursive, for example. The concept of recursion is subtle and
powerful, and is helped a lot by hands-on practice. Any of the algorithms
of Chapter 2 would be suitable for this purpose. The recursive graph algo-
rithms are particularly recommended since they are usually quite foreign
to students’ previous experience and therefore have great learning value.
Herbert S. Wilf
Preface to the Second Edition
The first edition of this book has been posted on my web site as a free
download (see http://www.cis.upenn.edu/~wilf), and as these words are
written, it is being downloaded at about 1000 different Internet domains
each month. Because of this remarkable circulation, it seems appropriate
to offer it again in paper form, and I’m very pleased that A K Peters, Ltd.
has taken up this task. A number of corrections have been made, and
solutions or hints to most of the problems are now provided.
Herbert S. Wilf
March 2002
0  What this Book Is About
0.1 Background
An algorithm is a method for solving a class of problems on a computer.
The complexity of an algorithm is the cost, measured in running time, or
storage, or whatever units are relevant, of using the algorithm to solve one
of those problems.
This book is about algorithms and complexity, and so it is about meth-
ods for solving problems on computers and the costs (usually the running
time) of using those methods.
Computing takes time. Some problems take a very long time; others
can be done quickly. Some problems seem to take a long time, and then
someone discovers a faster way to do them (a ‘faster algorithm’). The study
of the amount of computational effort that is needed in order to perform
certain kinds of computations is the study of computational complexity.
Naturally, we would expect that a computing problem for which millions
of bits of input data are required would probably take longer than another
problem that needs only a few items of input. So the time complexity of a
calculation is measured by expressing the running time of the calculation
as a function of some measure of the amount of data that is needed to
describe the problem to the computer.
For instance, think about this statement: “I just bought a matrix
inversion program, and it can invert an n × n matrix in just 1.2n³ minutes.”
We see here a typical description of the complexity of a certain algorithm.
The running time of the program is being given as a function of the size of
the input matrix.
A faster program for the same job might run in 0.8n³ minutes for an
n × n matrix. If someone were to make a really important discovery (see
Section 2.4), then maybe we could actually lower the exponent, instead of
merely improving the multiplicative constant.
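For illustration, these hypothetical timing formulas can be tabulated in a
few lines of Python (a sketch: the exponent 2.5 and the constant 50 below
are invented, purely to show what lowering the exponent does):

    # Hypothetical running times, in minutes, to invert an n x n matrix.
    def t_original(n):
        return 1.2 * n**3        # the program described in the text

    def t_better_constant(n):
        return 0.8 * n**3        # same exponent, smaller constant

    def t_better_exponent(n):
        return 50.0 * n**2.5     # invented: worse constant, smaller exponent

    for n in (10, 100, 1000, 10000):
        print(n, t_original(n), t_better_constant(n), t_better_exponent(n))

The better constant helps by the same factor 1.5 at every n, but the smaller
exponent wins ever more decisively as n grows; in this table it overtakes
both cubic programs between n = 1000 and n = 10000.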
Consider, for example, the problem of deciding whether a given polygon
can be used to tile the whole plane. If we ask for a way to settle this
problem on a computer, it has been proved that there isn’t any way to do
it (R. Berger, The undecidability of the domino problem, Memoirs Amer.
Math. Soc.), so even looking for an algorithm would be fruitless. That doesn’t mean
it, so even looking for an algorithm would be fruitless. That doesn’t mean
that the question is hard for every polygon. Hard problems can have easy
instances. What has been proved is that no single method exists that can
guarantee that it will decide this question for every polygon.
The fact that a computational problem is hard doesn’t mean that every
instance of it has to be hard. The problem is hard because we cannot
devise an algorithm for which we can give a guarantee of fast performance
for all instances.
Notice that the amount of input data to the computer in this example
is quite small. All we need to input is the shape of the basic polygon.
Yet not only is it impossible to devise a fast algorithm for this problem, it
has been proved impossible to devise any algorithm at all that is guaran-
teed to terminate with a Yes/No answer after finitely many steps. That’s
really hard!
Think of an algorithm as being a little box that can solve a certain class
of computational problems. Into the box goes a description of a particular
problem in that class, and then, after a certain amount of time, or of
computational effort, the answer appears.
A ‘fast’ algorithm is one that carries a guarantee of fast performance.
Here are some examples.
Example 0.2. It is guaranteed that every problem that can be input with
B bits of data will be solved in at most 0.7B¹⁵ seconds.
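It is instructive to evaluate such a guarantee (a quick sketch using the
bound of Example 0.2):

    # Worst-case guarantee from Example 0.2: 0.7 * B**15 seconds on B bits.
    for B in (2, 4, 8, 16):
        print("B =", B, " bound =", 0.7 * B**15, "seconds")

Even a bound that is polynomial in B can be numerically enormous; the force
of the example lies in the word "every": the bound covers all instances of
the given size.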
0.3 A Preview
Chapter 1 contains some of the mathematical background that will be
needed for our study of algorithms. It is not intended that reading this
book or using it as a text in a course must necessarily begin with Chapter
1. It’s probably a better idea to plunge into Chapter 2 directly, and then
when particular skills or concepts are needed, to read the relevant portions
of Chapter 1. Otherwise, the definitions and ideas that are in that chapter
may seem to be unmotivated, when in fact, motivation in great quantity
resides in the later chapters of the book.
Chapter 2 deals with recursive algorithms and the analyses of their
complexities.
Chapter 3 is about a problem that seems as though it might be hard,
but turns out to be easy, namely the network flow problem. Thanks to
quite recent research, there are fast algorithms for network flow problems,
and they have many important applications.
In Chapter 4 we study algorithms in one of the oldest branches of math-
ematics, the theory of numbers. Remarkably, the connections between this
ancient subject and the most modern research in computer methods are
very strong.
In Chapter 5 we will see that there is a large family of problems, includ-
ing a number of very important computational questions, that are bound
together by a good deal of structural unity. We don’t know if they’re hard
or easy. We do know that we haven’t found a fast way to do them yet, and
most people suspect that they’re hard. We also know that if any one of
these problems is hard, then they all are, and if any one of them is easy,
then they all are.
We hope that, having found out something about what people know
and what people don’t know, the reader will have enjoyed the trip through
this subject and may be interested in helping to find out a little more.
1  Mathematical Preliminaries

1.1 Orders of Magnitude

Definition 1.1. We say that f(x) = o(g(x)) (x → ∞) if limx→∞ f(x)/g(x) = 0.
We can see already from these few examples that sometimes it might be
easy to prove that an ‘o’ relationship is true and sometimes it might be
rather difficult. Example (e), for instance, requires the use of L’Hospital’s
rule.
If we have two computer programs, and if one of them inverts n × n
matrices in time 635n³, and if the other one does so in time o(n^2.8), then we
know that for all sufficiently large values of n the performance guarantee
of the second program will be superior to that of the first program. Of
course, the first program might run faster on small matrices, say up to size
10,000 × 10,000. If a certain program runs in time n^2.03 and if someone were
to produce another program for the same problem that runs in o(n² log n)
time, then that second program would be an improvement, at least in the
theoretical sense. The reason for the ‘theoretical’ qualification, once more,
is that the second program would be known to be superior only if n were
sufficiently large.
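The proviso ‘sufficiently large’ is easy to watch numerically (a sketch; the
constant 3000 attached to n^2.8 is invented, since a statement like o(n^2.8)
says nothing about constants):

    # Where does a hypothetical 3000*n**2.8 program overtake 635*n**3?
    def t_cubic(n):
        return 635.0 * n**3

    def t_subcubic(n):
        return 3000.0 * n**2.8   # invented constant, for illustration only

    for n in (10, 100, 1000, 10**4, 10**5):
        faster = "second" if t_subcubic(n) < t_cubic(n) else "first"
        print(n, faster, "program is faster")

With these constants the crossover happens near n = 2400; below that, the
‘asymptotically slower’ cubic program is actually the faster one.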
The second symbol of the asymptotics vocabulary is the ‘O.’ When
we say that f (x) = O(g(x)) we mean, informally, that f certainly doesn’t
grow at a faster rate than g. It might grow at the same rate or it might
grow more slowly; both are possibilities that the ‘O’ permits. Formally, we
have the next definition:
Definition 1.2. We say that f(x) = O(g(x)) (x → ∞) if there exist constants
C > 0 and x₀ such that |f(x)| < Cg(x) for all x > x₀.

For example, x + sin x = O(x), and 1/(1 + x²) = O(1). Now we can see how
the ‘o’ gives more precise information than the ‘O,’ for we can sharpen the
last example by saying that 1/(1 + x²) = o(1). This is sharper because not
only does it tell us that the function is bounded when x is large, but we
also learn that the function actually approaches 0 as x → ∞.
This is typical of the relationship between O and o. It often happens
that an ‘O’ result is sufficient for an application. However, that may not be
the case, and we may need the more precise ‘o’ estimate.
The third symbol of the language of asymptotics is the ‘Θ.’
Definition 1.3. We say that f(x) = Θ(g(x)) if there are constants c₁ > 0,
c₂ > 0, and x₀ such that for all x > x₀ it is true that c₁g(x) < f(x) < c₂g(x).
We might then say that f and g are of the same rate of growth, only
the multiplicative constants are uncertain. Some examples of the ‘Θ’ at
work are:
(x + 1)² = Θ(3x²)
(x² + 5x + 7)/(5x³ + 7x + 2) = Θ(1/x)
√(3 + 2√x) = Θ(x^(1/4))
(1 + 3/x)^x = Θ(1).
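A numerical check makes these Θ statements plausible: each ratio f(x)/g(x)
should settle down between two positive constants. A short sketch:

    import math

    # The (f, g) pairs from the Theta examples above.
    pairs = [
        (lambda x: (x + 1)**2,                            lambda x: 3 * x**2),
        (lambda x: (x**2 + 5*x + 7) / (5*x**3 + 7*x + 2), lambda x: 1.0 / x),
        (lambda x: math.sqrt(3 + 2 * math.sqrt(x)),       lambda x: x**0.25),
        (lambda x: (1 + 3.0 / x)**x,                      lambda x: 1.0),
    ]
    for f, g in pairs:
        print([round(f(x) / g(x), 4) for x in (10.0, 100.0, 10000.0)])

The four ratios tend to 1/3, 1/5, √2, and e³ respectively: nonzero constants,
exactly as Θ demands.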
The ‘Θ’ is much more precise than either the ‘O’ or the ‘o.’ If we know
that f(x) = Θ(x²), then we know that f(x)/x² stays between two nonzero
constants for all sufficiently large values of x. The rate of growth of f is
established: It grows quadratically with x.
The most precise of the symbols of asymptotics is the ‘∼.’ It tells us
that not only do f and g grow at the same rate, but that in fact f /g
approaches 1 as x → ∞.
Definition 1.4. We say that f (x) ∼ g(x) if limx→∞ f (x)/g(x) = 1.
x² + x ∼ x²
(3x + 1)⁴ ∼ 81x⁴
sin(1/x) ∼ 1/x
(2x³ + 5x + 7)/(x² + 4) ∼ 2x
2^x + 7 log x + cos x ∼ 2^x.
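Again a numerical check is instructive: for ‘∼’, the ratio f(x)/g(x) should
approach 1. A sketch:

    import math

    # The (f, g) pairs from the '~' examples above.
    pairs = [
        (lambda x: x**2 + x,                              lambda x: x**2),
        (lambda x: (3*x + 1)**4,                          lambda x: 81 * x**4),
        (lambda x: math.sin(1.0 / x),                     lambda x: 1.0 / x),
        (lambda x: (2*x**3 + 5*x + 7) / (x**2 + 4),       lambda x: 2 * x),
        (lambda x: 2.0**x + 7*math.log(x) + math.cos(x),  lambda x: 2.0**x),
    ]
    for f, g in pairs:
        print([round(f(x) / g(x), 6) for x in (10.0, 100.0, 1000.0)])

Each list of ratios marches toward 1.000000, and the last pair shows why
‘∼’ ignores lower-order terms: next to 2^x, even 7 log x is invisible.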
If one method could do n things in time log n and someone found another
method that could do the same job in time O(log log n), then the second
method, other things being equal, would indeed be an improvement, but n
might have to be extremely large before you would notice the improvement.
Next on the scale of rapidity of growth we might mention the powers
of x. For instance, think about x^.01. It grows faster than log x, although
you wouldn’t believe it if you tried to substitute a few values of x and to
compare the answers (see Exercise 1 at the end of this section).
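The comparison in Exercise 1 is worth previewing. In floating point one can
push x to astronomical sizes, and log x stays ahead until roughly x = 10^282
(a sketch, using natural logarithms):

    import math

    # Compare x**0.01 with log x at a few (very large) values of x.
    for k in (2, 10, 100, 281, 282, 300):
        x = 10.0**k
        print("x = 1e%d:  x**.01 = %8.1f   log x = %6.1f"
              % (k, x**0.01, math.log(x)))

So x^.01 does grow faster than log x, but no experiment at believable values
of x would ever reveal it.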
How would we prove that x^.01 grows faster than log x? By using
L’Hospital’s rule.
Example 1.1. Consider the limit of x^.01/log x for x → ∞. As x → ∞ the
ratio assumes the indeterminate form ∞/∞, and it is therefore a candidate
for L’Hospital’s rule, which tells us that if we want to find the limit then
we can differentiate the numerator, differentiate the denominator, and try
again to let x → ∞. If we do this, then instead of the original ratio, we
find the ratio

.01x^(-.99)/(1/x) = .01x^(.01),

which obviously grows without bound as x → ∞. Therefore, the original
ratio x^.01/log x also grows without bound. What we have proved, precisely,
is that log x = o(x^.01), and therefore in that sense, we can say that x^.01
grows faster than log x.
Between the fixed powers of x and the exponential functions c^x, there is
room for functions that grow faster than every fixed power of x, just as
log x grows slower than every fixed power of x.
Consider e^(log² x). Since this is the same as x^(log x), it will obviously grow
faster than x^1000; in fact, it will be larger than x^1000 as soon as log x > 1000,
i.e., as soon as x > e^1000 (don’t hold your breath!).

Hence, e^(log² x) is an example of a function that grows faster than every
fixed power of x. Another such example is e^√x (why?).
Definition 1.6. A function that grows faster than x^a, for every constant
a, but grows slower than c^x for every constant c > 1, is said to be of moder-
ately exponential growth. More precisely, f(x) is of moderately exponential
growth if for every a > 0 we have f(x) = Ω(x^a) and for every ε > 0 we
have f(x) = o((1 + ε)^x).
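Definition 1.6 can be watched in action with the text’s example e^√x.
Comparing logarithms of the competing functions avoids floating-point
overflow (the power x^20 and the base 1.001 are arbitrary choices for this
sketch):

    import math

    # Logarithms of x**20, e**sqrt(x), and 1.001**x, as x grows.
    for k in range(2, 9):
        x = 10.0**k
        log_power    = 20 * math.log(x)        # a fixed power of x
        log_moderate = math.sqrt(x)            # moderately exponential
        log_exponent = x * math.log(1.001)     # a genuine exponential
        print("x = 1e%d: %10.1f %12.1f %16.1f"
              % (k, log_power, log_moderate, log_exponent))

In this range, e^√x passes x^20 near x = 10^5 and is itself passed by 1.001^x
near x = 10^6: faster than every fixed power, slower than every exponential.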
As an example, consider the function f(n) defined by

f(n) = 1² + 2² + 3² + · · · + n². (1.1)
Thus, f (n) is the sum of the squares of the first n positive integers.
How fast does f (n) grow when n is large?
Notice at once that among the n terms in the sum that defines f(n),
the biggest one is the last one, namely n². Since there are n terms in the
sum and the biggest one is only n², it is certainly true that f(n) = O(n³),
and even more, that f(n) ≤ n³ for all n ≥ 1.
Suppose we wanted more precise information about the growth of f (n),
such as a statement like f (n) ∼ ?. How might we make such a better
estimate?
The best way to begin is to visualize the sum in (1.1), as shown in
Figure 1.1. In the figure, we see the graph of the curve y = x² in the x-y
plane. Further, there is a rectangle drawn over every interval of unit length
in the range from x = 1 to x = n. The rectangles all lie under the curve.
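Before carrying out the geometric estimate, a quick experiment suggests
what to expect (a sketch; the limit 1/3 is what the rectangle picture will
confirm):

    # f(n) = 1**2 + 2**2 + ... + n**2, compared with n**3.
    def f(n):
        return sum(k * k for k in range(1, n + 1))

    for n in (10, 100, 1000, 10000):
        print(n, f(n) / float(n**3))

The ratios 0.385, 0.33835, 0.3338335, ... descend toward 1/3, hinting that
f(n) ∼ n³/3.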