Data Structures and Algorithm Analysis in C

Mark Allen Weiss

ISBN 7-115-13984-9/TP 4957

TURING

Data Structures and Algorithm Analysis in C

(Second Edition)

[US] Mark Allen Weiss

POSTS & TELECOM PRESS;1]


Cataloging in Publication (CIP) data

Data Structures and Algorithm Analysis in C, Second Edition (Adapted) / [US] Mark Allen Weiss
POSTS & TELECOM PRESS, Beijing 100061, 2005.8
http://www.ptpress.com.cn
Format: 800 x 1000 1/16    Printed sheets: 32.5
Copyright registration: 01-2005-3578
ISBN 7-115-13984-9/TP 4957
Price: 49.00 yuan
Service hotlines: (010) 88593802, (010) 67129223
Adapter’s Foreword

Purpose
The original of this book is an excellent work of Mark Allen Weiss. All the
fundamental topics are covered. The ADT concepts and the analysis of the
algorithms (especially the average case analysis) are emphasized. The extensive
examples are also quite helpful to the students.

Till now the original book has been introduced to Chinese students for two
years and has received positive feedback from many instructors and students. This
re-composition is made to trim the contents of the book so that it better fits a
second-year undergraduate course in data structures and algorithm analysis for the
Chinese students.

What's New

The recomposition includes two major changes. First, the review section
of mathematics has been canceled, since sophomore students in China have taken
sufficient courses in mathematics in their first-year study, including calculus, linear
algebra, and discrete mathematics. Secondly, the original Chapter 5 is moved to
follow Chapter 7, in order to show hashing as a method to break the lower bound
of searching by comparisons only.

Other minor changes include adding some interesting data structures and
methods, and rearranging part of the contents. Introduction of the sparse matrix
representation is added as an example of application of multilists in Section 3.2.
At the same time, bucket sort and radix sort are discussed in more detail in
Chapter 6 (which was Chapter 7 in the original book) instead of being given as an
example in Section 3.2. In Chapter 4, the two sections about tree traversals, namely
Sections 4.1.2 and 4.6, are merged into one and are inserted into Section 4.2.3.
The threaded binary tree is then formally introduced instead of being mentioned in
exercises only. At the beginning of Chapter 7 (which was Chapter 5 in the original
book), Hashing, a method called interpolation search is briefly discussed to make
the point that it is possible to break the lower bound if we search by methods other
than comparisons. Finally, in Section 6.8, Sorting Large Structures, we introduce
table sort as a method to handle the case in which physically sorting large structures
is required.

Acknowledgments
We feel grateful to Mark Allen Weiss, the author of the original book, and Pearson
Education, the original publisher, for their great support on this recomposition. It is
their understanding and generosity that make it possible for more Chinese students to
enjoy this distinguished book.

Thanks to all the colleagues and students who have communicated with me regarding
their impressions of the original book. Special thanks go to Professor Qinming He,
for his helpful feedback and suggestions.

Finally, I would like to thank everyone at Turing Book Company, the publisher of
this adapted edition, who have put in great effort to make this kind of cooperation
possible.

It is my very first attempt at making a recomposition of a textbook. If you have
any suggestions for improvements, I would very much appreciate your comments.

Yue Chen
chenyue@cs.zju.edu.cn
Zhejiang University
PREFACE

Purpose/Goals
This book describes data structures, methods of organizing large amounts of data,
and algorithm analysis, the estimation of the running time of algorithms. As computers

become faster and faster, the need for programs that can handle large amounts
of input becomes more acute. Paradoxically, this requires more careful attention to

efficiency, since inefficiencies in programs become most obvious when input sizes are
large. By analyzing an algorithm before it is actually coded, students can decide if a
particular solution will be feasible. For example, in this text students look at specific
problems and see how careful implementations can reduce the time constraint for
large amounts of data from 16 years to less than a second. Therefore, no algorithm
or data structure is presented without an explanation of its running time. In some

cases, minute details that affect the running time of the implementation are explored.

Once a solution method is determined, a program must still be written. As


computers have become more powerful, the problems they must solve have become
larger and more complex, requiring development of more intricate programs. The
goal of this text is to teach students good programming and algorithm analysis skills
simultaneously so that they can develop such programs with the maximum amount
of efficiency.
This book is suitable for either an advanced data structures (CS7) course or
a first-year graduate course in algorithm analysis. Students should have some knowledge

of intermediate programming, including such topics as pointers and recursion,


and some background in discrete math.

Approach
I believe it is important for students to learn how to program for themselves, not

how to copy programs from a book. On the other hand, it is virtually impossible to

discuss realistic programming issues without including sample code. For this reason,
the book usually provides about one-half to three-quarters of an implementation,
and the student is encouraged to supply the rest. Chapter 12, which is new to this
edition, discusses additional data structures with an emphasis on implementation
details.

The algorithms in this book are presented in ANSI C, which, despite some
flaws, is arguably the most popular systems programming language. The use of C
instead of Pascal allows the use of dynamically allocated arrays (see, for instance,
rehashing in Chapter 7). It also produces simplified code in several places, usually
because the and (&&) operation is short-circuited.

Most criticisms of C center on the fact that it is easy to write code that is barely
readable. Some of the more standard tricks, such as the simultaneous assignment
and testing against 0 via

if (x=y)

are generally not used in the text, since the loss of clarity is compensated by only a
few keystrokes and no increased speed. I believe that this book demonstrates that
unreadable code can be avoided by exercising reasonable care.

Overview

Chapter 1 contains review material on recursion. I believe the only way to be


comfortable with recursion is to see good uses over and over. Therefore, recursion
is prevalent in this text, with examples in every chapter except Chapter 7.
Chapter 2 deals with algorithm analysis. This chapter explains asymptotic analysis
and its major weaknesses. Many examples are provided, including an in-depth
explanation of logarithmic running time. Simple recursive programs are analyzed
by intuitively converting them into iterative programs. More complicated divide-
and-conquer programs are introduced, but some of the analysis (solving recurrence
relations) is implicitly delayed until Chapter 6, where it is performed in detail.
Chapter 3 covers lists, stacks, and queues. The emphasis here is on coding
these data structures using ADTs, fast implementation of these data structures, and
an exposition of some of their uses. There are almost no programs (just routines),
but the exercises contain plenty of ideas for programming assignments.
Chapter 4 covers trees, with an emphasis on search trees, including external
search trees (B-trees). The UNIX file system and expression trees are used as examples.
AVL trees and splay trees are introduced but not analyzed. Seventy-five percent of the
code is written, leaving similar cases to be completed by the student. More careful
treatment of search tree implementation details is found in Chapter 12. Additional
coverage of trees, such as file compression and game trees, is deferred until Chapter
10. Data structures for an external medium are considered as the final topic in
several chapters.

Chapter 5 is about priority queues. Binary heaps are covered, and there is
additional material on some of the theoretically interesting implementations of

priority queues. The Fibonacci heap is discussed in Chapter 11, and the pairing heap
is discussed in Chapter 12.

Chapter 6 covers sorting. It is very specific with respect to coding details and
analysis. All the important general-purpose sorting algorithms are covered and
compared. Four algorithms are analyzed in detail: insertion sort, Shellsort, heapsort,
and quicksort. The analysis of the average-case running time of heapsort is new to
this edition. External sorting is covered at the end of the chapter.
Chapter 7 is a relatively short chapter concerning hash tables. Some analysis is
performed, and extendible hashing is covered at the end of the chapter.
Chapter 8 discusses the disjoint set algorithm with proof of the running time.
This is a short and specific chapter that can be skipped if Kruskal’s algorithm is not
discussed.
Chapter 9 covers graph algorithms. Algorithms on graphs are interesting, not
only because they frequently occur in practice but also because their running time is
so heavily dependent on the proper use of data structures. Virtually all of the standard

algorithms are presented along with appropriate data structures, pseudocode, and
analysis of running time. To place these problems in a proper context, a short
discussion on complexity theory (including NP-completeness and undecidability) is
provided.
Chapter 10 covers algorithm design by examining common problem-solving
techniques. This chapter is heavily fortified with examples. Pseudocode is used in
these later chapters so that the student’s appreciation of an example algorithm is not
obscured by implementation details.
Chapter 11 deals with amortized analysis. Three data structures from Chapters
4 and 5 and the Fibonacci heap, introduced in this chapter, are analyzed.

Chapter 12 is new to this edition. It covers search tree algorithms, the k-d tree,
and the pairing heap. This chapter departs from the rest of the text by providing
complete and careful implementations for the search trees and pairing heap. The
material is structured so that the instructor can integrate sections into discussions
from other chapters. For example, the top-down red black tree in Chapter 12 can
be discussed under AVL trees (in Chapter 4).

Chapters 1-9 provide enough material for most one-semester data structures
courses. If time permits, then Chapter 10 can be covered. A graduate course
on algorithm analysis could cover Chapters 6-11. The advanced data structures
analyzed in Chapter 11 can easily be referred to in the earlier chapters. The
discussion of NP-completeness in Chapter 9 is far too brief to be used in such a
course. Garey and Johnson’s book on NP-completeness can be used to augment this

text.

Exercises

Exercises, provided at the end of each chapter, match the order in which material
is presented. The last exercises may address the chapter as a whole rather than a
specific section. Difficult exercises are marked with an asterisk, and more challenging
exercises have two asterisks.

References

References are placed at the end of each chapter. Generally the references either
are historical, representing the original source of the material, or they represent
extensions and improvements to the results given in the text. Some references
represent solutions to exercises.

Code Availability
The example program code in this book is available via anonymous ftp
at ftp://ftp.cs.fiu.edu/pub/weiss/WEISS_2E.tar.Z

Acknowledgments
Many, many people have helped me in the preparation of books in this series. Some
are listed in other versions of the book; thanks to all.
For this edition, I would like to thank my editors at Addison-Wesley, Carter
Shanklin and Susan Hartman. Teri Hyde did another wonderful job with the
production, and Matthew Harris and his staff at Publication Services did their usual
fine work putting the final pieces together.

M.A.W.

Miami, Florida
July, 1996
CONTENTS

Adapter’s Foreword
Preface

1 Introduction 1
1.1. What’s the Book About? 1

1.2. A Brief Introduction to Recursion 3

Summary 7
Exercises 7
References 8

2 Algorithm Analysis 9

2.1. Mathematical Background 9


2.2. Model 12
2.3. WhattoAnalyze 12
2.4. Running Time Calculations 14
2.4.1. A Simple Example 15
2.4.2. General Rules 15
2.4.3. Solutions for the Maximum Subsequence Sum Problem 18
2.4.4. Logarithms in the Running Time 22
2.4.5. Checking Your Analysis 27
2.4.6. A Grain of Salt 27
Summary 28
Exercises 29
References 33


3 Lists, Stacks, and Queues 35

3.1. Abstract Data Types (ADTs) 35
3.2. The List ADT 36
3.2.1. Simple Array Implementation of Lists 37
3.2.2. Linked Lists 37
3.2.3. Programming Details 38
3.2.4. Common Errors 43
3.2.5. Doubly Linked Lists 45
3.2.6. Circularly Linked Lists 46
3.2.7. Examples 46
3.2.8. Cursor Implementation of Linked Lists 50
3.3. The Stack ADT 56
3.3.1. Stack Model 56
3.3.2. Implementation of Stacks 57
3.3.3. Applications 65
3.4. The Queue ADT 73
3.4.1. Queue Model 73
3.4.2. Array Implementation of Queues 73
3.4.3. Applications of Queues 78

Summary 79
Exercises 79

4 Trees 83
4.1. Preliminaries 83
4.1.1. Terminology 83
4.1.2. Tree Traversals with an Application 84
4.2. Binary Trees 85
4.2.1. Implementation 86
4.2.2. Expression Trees 87
4.2.3. Tree Traversals 90

4.3. The Search Tree ADT—Binary Search Trees 97
4.3.1. MakeEmpty 97
4.3.2. Find 97
4.3.3. FindMin and FindMax 99
4.3.4. Insert 100
4.3.5. Delete 101
4.3.6. Average-Case Analysis 103
4.4. AVL Trees 106
4.4.1. Single Rotation 108
4.4.2. Double Rotation 111

4.5. Splay Trees 119


4.5.1. A Simple Idea (That Does Not Work) 120
4.5.2. Splaying 122

4.6. B-Trees 128

Summary 133
Exercises 134
References 141

5 Priority Queues (Heaps) 145


5.1. Model 145
5.2. Simple Implementations 146
5.3. Binary Heap 147
5.3.1. Structure Property 147
5.3.2. Heap Order Property 148
5.3.3. Basic Heap Operations 150
5.3.4. Other Heap Operations 154
5.4. Applications of Priority Queues 157
5.4.1. The Selection Problem 157
5.4.2. Event Simulation 159
5.5. d-Heaps 160
5.6. Leftist Heaps 161
5.6.1. Leftist Heap Property 161
5.6.2. Leftist Heap Operations 162
5.7. Skew Heaps 168
5.8. Binomial Queues 170
5.8.1. Binomial Queue Structure 170
5.8.2. Binomial Queue Operations 172
5.8.3. Implementation of Binomial Queues 173
Summary 180
Exercises 180
References 184

6 Sorting 187
6.1. Preliminaries 187
6.2. Insertion Sort 188
6.2.1. The Algorithm 188
6.2.2. Analysis of Insertion Sort 189

6.3. A Lower Bound for Simple Sorting Algorithms 189


6.4. Shellsort 190
6.4.1. Worst-Case Analysis of Shellsort 192
6.5. Heapsort 194
6.5.1. Analysis of Heapsort 196
6.6. Mergesort 198
6.6.1. Analysis of Mergesort 200
6.7. Quicksort 203
6.7.1. Picking the Pivot 204
6.7.2. Partitioning Strategy 205
6.7.3. Small Arrays 208

6.7.4. Actual Quicksort Routines 208

6.7.5. Analysis of Quicksort 209


6.7.6. A Linear-Expected-Time Algorithm for Selection 213
6.8. Sorting Large Structures 215
6.9. A General Lower Bound for Sorting 216
6.9.1. Decision Trees 217

6.10. Bucket Sort and Radix Sort 219


6.11. External Sorting 222
6.11.1. Why We Need New Algorithms 222
6.11.2. Model for External Sorting 222
6.11.3. The Simple Algorithm 222
6.11.4. Multiway Merge 224
6.11.5. Polyphase Merge 225
6.11.6. Replacement Selection 226
Summary 227
Exercises 229
References 232

7 Hashing 235
7.1. General Idea 235
7.2. Hash Function 237
7.3. Separate Chaining 239
7.4. Open Addressing 244
7.4.1. Linear Probing 244
7.4.2. Quadratic Probing 247
7.4.3. Double Hashing 251

7.5. Rehashing 252


7.6. Extendible Hashing 255
Summary 258
Exercises 259
References 262

8 The Disjoint Set ADT 265


8.1. Equivalence Relations 265
8.2. The Dynamic Equivalence Problem 266
8.3. Basic Data Structure 267
8.4. Smart Union Algorithms 271
8.5. Path Compression 273
8.6. Worst Case for Union-by-Rank and Path Compression 275
8.6.1. Analysis of the Union/Find Algorithm 275
8.7. An Application 281
Summary 281
Exercises 282
References 283

9 Graph Algorithms 285

9.1. Definitions 285


9.1.1. Representation of Graphs 286
9.2. Topological Sort 288

9.3. Shortest-Path Algorithms 292


9.3.1. Unweighted Shortest Paths 293
9.3.2. Dijkstra’s Algorithm 297
9.3.3. Graphs with Negative Edge Costs 306
9.3.4. Acyclic Graphs 307
9.3.5. All-Pairs Shortest Path 310
9.4. Network Flow Problems 310
9.4.1. A Simple Maximum-Flow Algorithm 311
9.5. Minimum Spanning Tree 315
9.5.1. Prim’s Algorithm 316
9.5.2. Kruskal’s Algorithm 318

9.6. Applications of Depth-First Search 321


9.6.1. Undirected Graphs 322
9.6.2. Biconnectivity 324
9.6.3. Euler Circuits 328
9.6.4. Directed Graphs 331
9.6.5. Finding Strong Components 333

9.7. Introduction to NP-Completeness 334


9.7.1. Easy vs. Hard 335
9.7.2. The Class NP 336
9.7.3. NP-Complete Problems 337

Summary 339
Exercises 339
References 345

10 Algorithm Design Techniques 349


10.1. Greedy Algorithms 349
10.1.1. A Simple Scheduling Problem 350
10.1.2. Huffman Codes 353
10.1.3. Approximate Bin Packing 359

10.2. Divide and Conquer 367


10.2.1. Running Time of Divide and Conquer Algorithms 368
10.2.2. Closest-Points Problem 370
10.2.3. The Selection Problem 375
10.2.4. Theoretical Improvements for Arithmetic Problems 378

10.3. Dynamic Programming 382


10.3.1. Using a Table Instead of Recursion 382
10.3.2. Ordering Matrix Multiplications 385
10.3.3. Optimal Binary Search Tree 389
10.3.4. All-Pairs Shortest Path 392

10.4. Randomized Algorithms 394


10.4.1. Random Number Generators 396
10.4.2. Skip Lists 399
10.4.3. Primality Testing 401

10.5. Backtracking Algorithms 403


10.5.1. The Turnpike Reconstruction Problem 405
10.5.2. Games 407

Summary 415
Exercises 417
References 424

11 Amortized Analysis 429


11.1. An Unrelated Puzzle 430

11.2. Binomial Queues 430

11.3. Skew Heaps 435


11.4. Fibonacci Heaps 437
11.4.1. Cutting Nodes in Leftist Heaps 439
11.4.2. Lazy Merging for Binomial Queues 441
11.4.3. The Fibonacci Heap Operations 444
11.4.4. Proof of the Time Bound 445
11.5. Splay Trees 447
Summary 451
Exercises 452
References 453

12 Advanced Data Structures and Implementation 455


12.1. Top-Down Splay Trees 455
12.2. Red Black Trees 459
12.2.1. Bottom-Up Insertion 464
12.2.2. Top-Down Red Black Trees 465
12.2.3. Top-Down Deletion 467

12.3. Deterministic Skip Lists 471

12.4. AA-Trees 478

12.5. Treaps 484


12.6. k-d Trees 487
12.7. Pairing Heaps 490
Summary 496
Exercises 497
References 499
CHAPTER 1

Introduction

In this chapter, we discuss the aims and goals of this text and briefly review

programming concepts. We will


See that how a program performs for reasonably large input is just as important
as its performance on moderate amounts of input.

Briefly review recursion.

1.1. What's the Book About?

Suppose you have a group of N numbers and would like to determine the kth largest.
This is known as the selection problem. Most students who have had a programming
course or two would have no difficulty writing a program to solve this problem.
There are quite a few "obvious" solutions.
One way to solve this problem would be to read the N numbers into an array,

sort the array in decreasing order by some simple algorithm such as bubblesort, and
then return the element in position k.
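To make the first algorithm concrete, here is a minimal C sketch of it (a selection sort
stands in for bubblesort, and the function names are illustrative choices, not the book's
code):

void
SimpleSortDecreasing( int A[ ], int N )   /* simple quadratic sort, largest first */
{
    int i, j, MaxPos, Tmp;

    for( i = 0; i < N - 1; i++ )
    {
        MaxPos = i;                       /* find the largest remaining element */
        for( j = i + 1; j < N; j++ )
            if( A[ j ] > A[ MaxPos ] )
                MaxPos = j;
        Tmp = A[ i ]; A[ i ] = A[ MaxPos ]; A[ MaxPos ] = Tmp;
    }
}

int
SelectKth( int A[ ], int N, int k )       /* kth largest; k = 1 is the largest */
{
    SimpleSortDecreasing( A, N );
    return A[ k - 1 ];
}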
A somewhat better algorithm might be to read the first k elements into an array
and sort them (in decreasing order). Next, each remaining element is read one by
one. As a new element arrives, it is ignored if it is smaller than the kth element
in the array. Otherwise, it is placed in its correct spot in the array, bumping one
element out of the array. When the algorithm ends, the element in the kth position
is returned as the answer.


Both algorithms are simple to code, and you are encouraged to do so. The
natural questions, then, are which algorithm is better and, more important, is either
algorithm good enough? A simulation using a random file of 1 million elements
and k = 500,000 will show that neither algorithm finishes in a reasonable amount
of time; each requires several days of computer processing to terminate (albeit

eventually with a correct answer). An alternative method, discussed in Chapter 6,
gives a solution in about a second. Thus, although our proposed algorithms work,
they cannot be considered good algorithms, because they are entirely impractical for
input sizes that a third algorithm can handle in a reasonable amount of time.
A second problem is to solve a popular word puzzle. The input consists of a
two-dimensional array of letters and a list of words. The object is to find the words
in the puzzle. These words may be horizontal, vertical, or diagonal in any direction.
As an example, the puzzle shown in Figure 1.1 contains the words this, two, fat,
and that. The word this begins at row 1, column 1, or (1,1), and extends to (1,4);
two goes from (1,1) to (3,1); fat goes from (4,1) to (2,3); and that goes from (4,4)
to (1,1).
Again, there are at least two straightforward algorithms that solve the problem.
For each word in the word list, we check each ordered triple (row, column,
orientation) for the presence of the word. This amounts to lots of nested for loops
but is basically straightforward.
Alternatively, for each ordered quadruple (row, column, orientation, number
of characters) that doesn’t run off an end of the puzzle, we can test whether the
word indicated is in the word list. Again, this amounts to lots of nested for loops. It

is possible to save some time if the maximum number of characters in any word is
known.
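A minimal C sketch of the inner test used by the first of these algorithms, checking one
(row, column, orientation) choice for one word (the grid size, the 0-based indexing, and
the names are illustrative assumptions, not the book's code):

#define ROWS 4
#define COLS 4

int
WordAt( char Puzzle[ ROWS ][ COLS ], int Row, int Col,
        int DRow, int DCol, const char *Word )
{
    int i, r, c;

    for( i = 0; Word[ i ] != '\0'; i++ )
    {
        r = Row + i * DRow;               /* step one square per character */
        c = Col + i * DCol;
        if( r < 0 || r >= ROWS || c < 0 || c >= COLS )
            return 0;                     /* ran off an end of the puzzle */
        if( Puzzle[ r ][ c ] != Word[ i ] )
            return 0;                     /* mismatch */
    }
    return 1;                             /* every character matched */
}

The first algorithm simply calls such a test for every word in the list and every
(row, column, orientation) triple.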
It is relatively easy to code up either method of solution and solve many of the
real-life puzzles commonly published in magazines. These typically have 16 rows, 16
columns, and 40 or so words. Suppose, however, we consider the variation where
only the puzzle board is given and the word list is essentially an English dictionary.
Both of the solutions proposed require considerable time to solve this problem and
therefore are not acceptable. However, it is possible, even with a large word list, to
solve the problem in a matter of seconds.
An important concept is that, in many problems, writing a working program is
not good enough. If the program is to be run on a large data set, then the running

time becomes an issue. Throughout this book we will see how to estimate the

running time of a program for large inputs and, more important, how to compare
the running times of two programs without actually coding them. We will see
techniques for drastically improving the speed of a program and for determining
program bottlenecks. These techniques will enable us to find the section of the code
on which to concentrate our
optimization efforts.

Figure 1.1 Sample word puzzle

     1  2  3  4

1    t  h  i  s
2    w  a  t  s
3    o  a  h  g
4    f  g  d  t

1.2. A Brief Introduction to Recursion

Most mathematical functions that we are familiar with are described by a simple
formula. For instance, we can convert temperatures from Fahrenheit to Celsius by
applying the formula

C = 5(F - 32)/9

Given this formula, it is trivial to write a C function; with declarations and braces
removed, the one-line formula translates to one line of C.
Mathematical functions are sometimes defined in a less standard form. As an
example, we can define a function F, valid on nonnegative integers, that satisfies
F(0) = 0 and F(X) = 2F(X - 1) + X². From this definition we see that F(1) = 1,
F(2) = 6, F(3) = 21, and F(4) = 58. A function that is defined in terms of itself
is called recursive. C allows functions to be recursive.* It is important to remember
that what C provides is merely an attempt to follow the recursive spirit. Not all
mathematically recursive functions are efficiently (or correctly) implemented by
C's simulation of recursion. The idea is that the recursive function F ought to be
expressible in only a few lines, just like a nonrecursive function. Figure 1.2 shows
the recursive implementation of F.
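A minimal sketch of that implementation, written directly from the definition F(0) = 0,
F(X) = 2F(X - 1) + X² given above (the line markers are included so the discussion of
lines 1 through 3 that follows can be traced; this is a reconstruction, not necessarily
the exact figure):

int
F( int X )
{
/* 1*/    if( X == 0 )
/* 2*/        return 0;
          else
/* 3*/        return 2 * F( X - 1 ) + X * X;   /* X * X computes X squared */
}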
Lines 1 and 2 handle what is known as the base case, that is, the value for
which the function is directly known without resorting to recursion. Just as declaring
F(X) = 2F(X - 1) + X² is meaningless, mathematically, without including the fact
that F(0) = 0, the recursive C function doesn't make sense without a base case.
Line 3 makes the recursive call.

There are several important and possibly confusing points about recursion. A
common question is: Isn't this just circular logic? The answer is that although we are
defining a function in terms of itself, we are not defining a particular instance of the
function in terms of itself. In other words, evaluating F(5) by computing F(5) would
be circular. Evaluating F(5) by computing F(4) is not circular—unless, of course,
F(4) is evaluated by eventually computing F(5). The two most important issues are
probably the how and why questions. In Chapter 3, the how and why issues are
formally resolved. We will give an incomplete description here.


It turns out that recursive calls are handled no differently from any others. If F
is called with the value of 4, then line 3 requires the computation of 2 * F(3) + 4 * 4.
Thus, a call is made to compute F(3). This requires the computation of 2 * F(2) + 3 * 3.
Therefore, another call is made to compute F(2). This means that 2 * F(1) + 2 * 2
must be evaluated. To do so, F(1) is computed as 2 * F(0) + 1 * 1. Now, F(0) must
be evaluated. Since this is a base case, we know a priori that F(0) = 0. This enables
the completion of the calculation for F(1), which is now seen to be 1. Then F(2),

*Using recursion for numerical calculations is usually a bad idea. We have done so to illustrate the basic
points.

F(3), and finally F(4) can be determined. All the bookkeeping needed to keep track
of pending function calls (those started but waiting for a recursive call to complete),
along with their variables, is done by the computer automatically. An important
point, however, is that recursive calls will keep on being made until a base case is
reached. For instance, an attempt to evaluate F(-1) will result in calls to F(-2),
F(-3), and so on. Since this will never get to a base case, the program won't be able
to compute the answer (which is undefined anyway). Occasionally, a much more

subtle error is made, which is exhibited in Figure 1.2. The error in the program in

Figure 1.2 is that Bad(1) is defined, by line 3, to be Bad(1). Obviously, this doesn't
give any clue as to what Bad(1) actually is. The computer will thus repeatedly
make calls to Bad(1) in an attempt to resolve its value. Eventually, its bookkeeping
system will run out of space, and the program will crash. Generally, we would say
that this function doesn't work for one special case but is correct otherwise. This
isn't true here, since Bad(2) calls Bad(1). Thus, Bad(2) cannot be evaluated either.
Furthermore, Bad(3), Bad(4), and Bad(5) all make calls to Bad(2). Since Bad(2)
is unevaluable, none of these values are either. In fact, this program doesn't work
for any value of N, except 0. With recursive programs, there is no such thing as a
"special case."
These considerations lead to the first two fundamental rules of recursion:

1. Base cases. You must always have some base cases, which can be solved
without recursion.
2. Making progress. For the cases that are to be solved recursively, the recursive
call must always be to a case that makes progress toward a base case.

Figure 1.2 A nonterminating recursive program

int
Bad( unsigned int N )
{
/* 1*/    if( N == 0 )
/* 2*/        return 0;
          else
/* 3*/        return Bad( N / 3 + 1 ) + N - 1;
}

Throughout this book, we will use recursion to solve problems. As an example
of a nonmathematical use, consider a large dictionary. Words in dictionaries are
defined in terms of other words. When we look up a word, we might not always
understand the definition, so we might have to look up words in the definition.
Likewise, we might not understand some of those, so we might have to continue
this search for a while. Because the dictionary is finite, eventually either (1) we will
come to a point where we understand all of the words in some definition (and thus

understand that definition and retrace our path through the other definitions) or
(2) we will find that the definitions are circular and we are stuck, or that some word
we need to understand for a definition is not in the dictionary.
Our recursive strategy to understand words is as follows: If we know the
meaning of a word, then we are done; otherwise, we look the word up in the
dictionary. If we understand all the words in the definition, we are done; otherwise,
we figure out what the definition means by recursively looking up the words we
don't know. This procedure will terminate if the dictionary is well defined but can
loop indefinitely if a word is either not defined or circularly defined.

Printing Out Numbers

Suppose we have a positive integer, N, that we wish to print out. Our routine will
have the heading PrintOut(N). Assume that the only I/O routines available will
take a single-digit number and output it to the terminal. We will call this routine

PrintDigit; for example, PrintDigit(4) will output a 4 to the terminal.


Recursion provides a very clean solution to this problem. To print out 76234,
we need to first print out 7623 and then print out 4. The second step is easily
accomplished with the statement PrintDigit(N%10), but the first doesn't seem any
simpler than the original problem. Indeed it is virtually the same problem, so we can
solve it recursively with the statement PrintOut(N/10).
This tells us how to solve the general problem, but we still need to make sure
that the program doesn't loop indefinitely. Since we haven't defined a base case yet,
it is clear that we still have something to do. Our base case will be PrintDigit(N) if
0 <= N < 10. Now PrintOut(N) is defined for every positive number from 0 to 9,
and larger numbers are defined in terms of a smaller positive number. Thus, there is
no cycle. The entire procedure* is shown in Figure 1.3.
We have made no effort to do this efficiently. We could have avoided using the
mod routine (which is very expensive) because N%10 = N - ⌊N/10⌋ * 10.†
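A tiny C sketch of that identity (the function name is illustrative, not from the book):

unsigned int
LastDigit( unsigned int N )
{
    /* Same result as N % 10: integer division truncates, so */
    /* N - ( N / 10 ) * 10 is exactly the remainder.          */
    return N - ( N / 10 ) * 10;
}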

Recursion and Induction

Let us prove (somewhat) rigorously that the recursive number-printing program
works. To do so, we'll use a proof by induction.

*The term procedure refers to a function that returns void.


†⌊X⌋ is the largest integer that is less than or equal to X.

void
PrintOut( unsigned int N )  /* Print nonnegative N */
{
    if( N >= 10 )
        PrintOut( N / 10 );
    PrintDigit( N % 10 );
}

Figure 1.3 Recursive routine to print an integer

THEOREM 1.1

The recursive number-printing algorithm is correct for N ≥ 0.

PROOF (BY INDUCTION ON THE NUMBER OF DIGITS IN N):

First, if N has one digit, then the program is trivially correct, since it merely
makes a call to PrintDigit. Assume then that PrintOut works for all numbers
of k or fewer digits. A number of k + 1 digits is expressed by its first k digits
followed by its least significant digit. But the number formed by the first k digits
is exactly ⌊N/10⌋, which, by the inductive hypothesis, is correctly printed, and
the last digit is N mod 10, so the program prints out any (k + 1)-digit number
correctly. Thus, by induction, all numbers are correctly printed.
This proof probably seems a little strange in that it is virtually identical to the
algorithm description. It illustrates that in designing a recursive program, all smaller
instances of the same problem (which are on the path to a base case) may be assumed
to work correctly. The recursive program needs only to combine solutions to smaller
problems, which are "magically" obtained by recursion, into a solution for the
current problem. The mathematical justification for this is proof by induction. This
gives the third rule of recursion:
3. Design rule. Assume that all the recursive calls work.

This rule is important because it means that when designing recursive programs,
you generally don't need to know the details of the bookkeeping arrangements, and
you don’t have to try to trace through the myriad of recursive calls. Frequently, it is
extremely difficult to track down the actual sequence of recursive calls. Of course,

in many cases this is an indication of a good use of recursion, since the computer is
being allowed to work out the complicated details.
The main problem with recursion is the hidden bookkeeping costs. Although
these costs are almost always justifiable, because recursive programs not only simplify
the algorithm design but also tend to give cleaner code, recursion should never be
used as a substitute for a simple for loop. We’ll discuss the overhead involved in
recursion in
more detail in Section 3.3.

When writing recursive routines, it is crucial to keep in mind the four basic
rules of recursion:

1. Base cases. You must always have some base cases, which can be solved
without recursion.

2. Making progress. For the cases that are to be solved recursively, the recursive
call must always be to a case that makes progress toward a base case.
3. Design rule. Assume that all the recursive calls work.
4. Compound interest rule. Never duplicate work by solving the same instance
of a problem in separate recursive calls.

The fourth rule, which will be justified (along with its nickname) in later sections,
is the reason that it is generally a bad idea to use recursion to evaluate simple
mathematical functions, such as the Fibonacci numbers. As long as you keep these
rules in mind, recursive programming should be straightforward.

Summary

This chapter sets the stage for the rest of the book. The time taken by an algorithm
confronted with large amounts of input will be an important criterion for deciding if
it is a good algorithm. (Of course, correctness is most important.) Speed is relative.
What is fast for one problem on one machine might be slow for another problem or

a different machine. We will begin to address these issues in the next chapter and
will establish a formal mathematical model.

Exercises

1.1 Write a program to solve the selection problem. Let k = N/2. Draw a table
showing the running time of your program for various values of N.


1.2 Write a program to solve the word puzzle problem.
1.3 Write a procedure to output an arbitrary real number (which might be negative)
using only PrintDigit for I/O.
1.4 C allows statements of the form

#include filename
which reads filename and inserts its contents in place of the include statement.

Include statements may be nested; in other words, the file filename may itself
contain an include statement, but, obviously, a file can’t include itself in any
chain. Write a program that reads in a file and outputs the file as modified by
the include statements.

1.5 Let F_i be the Fibonacci numbers. Prove the following:
a. F_1 + F_2 + ... + F_{N-2} = F_N - 2
b. F_N < φ^N, with φ = (1 + √5)/2
**c. Give a precise closed-form expression for F_N.

References

There are many good textbooks covering the basic mathematics that you may need
to better understand the following chapters. A small subset is [1], [2], [3], [9], [10],
and [11]. Reference [9] is specifically geared toward the analysis of algorithms. It is
the first volume of a three-volume series that will be cited throughout this text.
More advanced material is covered in [5].

Throughout this book we will assume a knowledge of C [8]. Occasionally,
we add a feature where necessary for clarity. We also assume familiarity with
pointers and recursion (the recursion summary in this chapter is meant to be a quick
review). We will attempt to provide hints on their use where appropriate throughout
the textbook. Readers not familiar with these should consult [12] or any good
intermediate programming textbook.

General programming style is discussed in several books. Some of the classics


are [4], [6], and [7].
1. M. O. Albertson and J. P. Hutchinson, Discrete Mathematics with Algorithms, John
Wiley & Sons, New York, 1988.
2. Z. Bavel, Math Companion for Computer Science, Reston Publishing Co., Reston, Va.,
1982.
3. R. A. Brualdi, Introductory Combinatorics, North-Holland, New York, 1977.

4. E. W. Dijkstra, A Discipline of Programming, Prentice Hall, Englewood Cliffs, N.J.,


1976.
5. R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics, Addison-Wesley,
Reading, Mass., 1989.
6. D. Gries, The Science of Programming, Springer-Verlag, New York, 1981.
7. B. W. Kernighan and P. J. Plauger, The Elements of Programming Style, 2d ed., McGraw-
Hill, New York, 1978.
8. B. W. Kernighan and D. M. Ritchie, The C Programming Language, 2d ed., Prentice
Hall, Englewood Cliffs, N.J., 1988.
9. D. E. Knuth, The Art of Computer Programming, Vol. 1: Fundamental Algorithms, 2d
ed., Addison-Wesley, Reading, Mass., 1973.
10. F. S. Roberts, Applied Combinatorics, Prentice Hall, Englewood Cliffs, N.J., 1984.
11. A. Tucker, Applied Combinatorics, 2d ed., John Wiley & Sons, New York, 1984.
12. M. A. Weiss, Efficient C Programming: A Practical Approach, Prentice Hall, Englewood
Cliffs, N.J., 1995.
CHAPTER 2

Algorithm Analysis
An algorithm is a clearly specified set of simple instructions to be followed to solve
a problem. Once an algorithm is given for a problem and decided (somehow) to be
correct, an important step is to determine how much in the way of resources, such
as time or space, the algorithm will require. An algorithm that solves a problem but
requires a year is hardly of any use. Likewise, an algorithm that requires a gigabyte
of main memory is not (currently) useful on most machines.
In this chapter, we shall discuss

How to estimate the time required for a program.


How to reduce the running time of a program from days or years to fractions
of a second.

The results of careless use of recursion.

Very efficient algorithms to raise a number to a power and to compute the


greatest common divisor of two numbers.

2.1. Mathematical Background


The analysis required to estimate the resource use of an algorithm is generally a
theoretical issue, and therefore a formal framework is required. We begin with some
mathematical definitions.
Throughout the book we will use the following four definitions:

DEFINITION: T(N) = O(f(N)) if there are positive constants c and n0 such that
T(N) ≤ cf(N) when N ≥ n0.

DEFINITION: T(N) = Ω(g(N)) if there are positive constants c and n0 such that
T(N) ≥ cg(N) when N ≥ n0.

DEFINITION: T(N) = Θ(h(N)) if and only if T(N) = O(h(N)) and T(N) = Ω(h(N)).

DEFINITION: T(N) = o(p(N)) if T(N) = O(p(N)) and T(N) ≠ Θ(p(N)).

The idea of these definitions is to establish a relative order among functions. Given
two functions, there are usually points where one function is smaller than the other
function, so it does not make sense to claim, for instance, f(N) < g(N). Thus,
we compare their relative rates of growth. When we apply this to the analysis of
algorithms, we shall see why this is the important measure.


Although 1,000N is larger than N² for small values of N, N² grows at a
faster rate, and thus N² will eventually be the larger function. The turning point is
N = 1,000 in this case. The first definition says that eventually there is some point
n0 past which cf(N) is always at least as large as T(N), so that if constant factors
are ignored, f(N) is at least as big as T(N). In our case, we have T(N) = 1,000N,
f(N) = N², n0 = 1,000, and c = 1. We could also use n0 = 10 and c = 100.
Thus, we can say that 1,000N = O(N²) (order N-squared). This notation is
known as Big-Oh notation. Frequently, instead of saying "order ...," one says
"Big-Oh ...."
If we use the traditional inequality operators to compare growth rates, then
the first definition says that the growth rate of T(N) is less than or equal to (≤)
that of f(N). The second definition, T(N) = Ω(g(N)) (pronounced "omega"), says
that the growth rate of T(N) is greater than or equal to (≥) that of g(N). The
third definition, T(N) = Θ(h(N)) (pronounced "theta"), says that the growth rate
of T(N) equals (=) the growth rate of h(N). The last definition, T(N) = o(p(N))
(pronounced "little-oh"), says that the growth rate of T(N) is less than (<) the
growth rate of p(N). This is different from Big-Oh, because Big-Oh allows the
possibility that the growth rates are the same.
To prove that some function T(N) = O(f(N)), we usually do not apply these
definitions formally but instead use a repertoire of known results. In general, this
means that a proof (or determination that the assumption is incorrect) is a very simple
calculation and should not involve calculus, except in extraordinary circumstances
(not likely to occur in an algorithm analysis).
When we say that T(N) = O(f(N)), we are guaranteeing that the function
T(N) grows at a rate no faster than f(N); thus f(N) is an upper bound on T(N).
Since this implies that f(N) = Ω(T(N)), we say that T(N) is a lower bound on
f(N).
As an example, N³ grows faster than N², so we can say that N² = O(N³)
or N³ = Ω(N²). f(N) = N² and g(N) = 2N² grow at the same rate, so both
f(N) = O(g(N)) and f(N) = Ω(g(N)) are true. When two functions grow at
the same rate, then the decision of whether or not to signify this with Θ() can
depend on the particular context. Intuitively, if g(N) = 2N², then g(N) = O(N⁴),
g(N) = O(N³), and g(N) = O(N²) are all technically correct, but the last option
is the best answer. Writing g(N) = Θ(N²) says not only that g(N) = O(N²), but
also that the result is as good (tight) as possible.


The important things to know are

RULE 1:

If T1(N) = O(f(N)) and T2(N) = O(g(N)), then
(a) T1(N) + T2(N) = max(O(f(N)), O(g(N))),
(b) T1(N) * T2(N) = O(f(N) * g(N)).



Function        Name

c               Constant
log N           Logarithmic
log² N          Log-squared
N               Linear
N log N
N²              Quadratic
N³              Cubic
2^N             Exponential

Figure 2.1 Typical growth rates

RULE 2:

If T(N) is a polynomial of degree k, then T(N) = Θ(N^k).

RULE 3:

log^k N = O(N) for any constant k. This tells us that logarithms grow very
slowly.
This information is sufficient to arrange most of the common functions by
growth rate (see Figure 2.1).
Several points are in order. First, it is very bad style to include constants or low-
order terms inside a Big-Oh. Do not say T(N) = O(2N²) or T(N) = O(N² + N).
In both cases, the correct form is T(N) = O(N²). This means that in any analysis
that will require a Big-Oh answer, all sorts of shortcuts are possible. Lower-order
terms can generally be ignored, and constants can be thrown away. Considerably

less precision is required in these cases.
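As a small worked illustration of these shortcuts (the polynomial is an arbitrary example,
not one taken from the text):

\[
T(N) = 3N^2 + 10N + 6 \le 3N^2 + 10N^2 + 6N^2 = 19N^2 \qquad (N \ge 1),
\]

so T(N) = O(N²) (take c = 19 and n0 = 1), and Rule 2 gives the tight statement
T(N) = Θ(N²).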


Second, we can always determine the relative growth rates of two functions f(N)
and g(N) by computing lim_{N→∞} f(N)/g(N), using L'Hôpital's rule if necessary.*
The limit can have four possible values:

The limit is 0: This means that f(N) = o(g(N)).
The limit is c ≠ 0: This means that f(N) = Θ(g(N)).
The limit is ∞: This means that g(N) = o(f(N)).
The limit oscillates: There is no relation (this will not happen in our context).

Using this method almost always amounts to overkill. Usually the relation between
f(N) and g(N) can be derived by simple algebra. For instance, if f(N) = N log N
and g(N) = N^1.5, then to decide which of f(N) and g(N) grows faster, one really
needs to determine which of log N and N^0.5 grows faster. This is like determining

*L'Hôpital's rule states that if lim_{N→∞} f(N) = ∞ and lim_{N→∞} g(N) = ∞, then lim_{N→∞} f(N)/g(N) =
lim_{N→∞} f'(N)/g'(N), where f'(N) and g'(N) are the derivatives of f(N) and g(N), respectively.

which of log² N or N grows faster. This is a simple problem, because it is already
known that N grows faster than any power of a log. Thus, g(N) grows faster than
f(N).
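As a check, the limit method described above gives the same conclusion (natural logarithms
are used when differentiating; the constant factor contributed by the base of the logarithm
does not affect the result):

\[
\lim_{N \to \infty} \frac{\ln N}{N^{0.5}}
  = \lim_{N \to \infty} \frac{1/N}{0.5\,N^{-0.5}}
  = \lim_{N \to \infty} \frac{2}{N^{0.5}} = 0,
\]

so log N = o(N^0.5), and therefore f(N) = N log N = o(N^1.5) = o(g(N)).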
One stylistic note: It is bad to say f(N) ≤ O(g(N)), because the inequality is
implied by the definition. It is wrong to write f(N) ≥ O(g(N)), which does not
make sense.

2.2. Model

In order to analyze algorithms in a formal framework, we need a model of


computation. Our model is basically a normal computer, in which instructions are
executed sequentially. Our model has the standard repertoire of simple instructions,
such as addition, multiplication, comparison, and assignment, but, unlike the case
with real computers, it takes exactly one time unit to do anything (simple). To be
reasonable, we will assume that, like a modern computer, our model has fixed-size
(say, 32-bit) integers and that there are no fancy operations, such as matrix inversion
or sorting, that clearly cannot be done in one time unit. We also assume infinite
memory.
This model clearly has some weaknesses. Obviously, in real life, not all operations
take exactly the same time. In particular, in our model one disk read counts
the same as an addition, even though the addition is typically several orders of
magnitude faster. Also, by assuming infinite memory, we never worry about page
faulting, which can be a real problem, especially for efficient algorithms.

2.3. What to Analyze

The most important resource to analyze is generally the running time. Several factors
affect the running time of a program. Some, such as the compiler and computer
used, are obviously beyond the scope of any theoretical model, so, although they are
important, we cannot deal with them here. The other main factors are the algorithm
used and the input to the algorithm.

Typically, the size of the input is the main consideration. We define two
functions, Tavg(N) and Tworst(N), as the average and worst-case running time,
respectively, used by an algorithm on input of size N. Clearly, Tavg(N) ≤ Tworst(N).
If there is more than one input, these functions may have more than one argument.

Generally, the quantity required is the worst-case time, unless otherwise specified.
One reason for this is that it provides a bound for all input, including
particularly bad input, which an average-case analysis does not provide. The other
reason is that average-case bounds are usually much more difficult to compute. In
some instances, the definition of "average" can affect the result. (For instance, what
is average input for the following problem?)

As an example, in the next section, we shall consider the following problem:

MAXIMUM SUBSEQUENCE SUM PROBLEM:

Given (possibly negative) integers A1, A2, ..., AN, find the maximum value of
Ai + Ai+1 + ... + Aj. (For convenience, the maximum subsequence sum is 0 if all the
integers are negative.)

Example:
For input -2, 11, -4, 13, -5, -2, the answer is 20 (A2 through A4).
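To make the problem concrete, here is a minimal C sketch along the lines of one of the
straightforward approaches: it simply tries every starting and every ending position and
keeps the best running sum (the function name and details are illustrative, not the
book's code):

int
MaxSubSum( const int A[ ], int N )
{
    int ThisSum, MaxSum, i, j;

    MaxSum = 0;                          /* 0 if every entry is negative */
    for( i = 0; i < N; i++ )             /* subsequence starts at i */
    {
        ThisSum = 0;
        for( j = i; j < N; j++ )         /* subsequence ends at j */
        {
            ThisSum += A[ j ];
            if( ThisSum > MaxSum )
                MaxSum = ThisSum;
        }
    }
    return MaxSum;
}

For the sample input above it returns 20; the four algorithms discussed next range from
cubic down to linear running time.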
This problem is interesting mainly because there are so many algorithms to
solve it, and the performance of these algorithms varies drastically. We will discuss
four algorithms to solve this problem. The running time on some computer (the
exact computer is unimportant) for these algorithms is given in Figure 2.2.

There are several important things worth noting in this table. For a small
amount of input, the algorithms all run in a blink of the eye, so if only a small
amount of input is expected, it might be silly to expend a great deal of effort to
design a clever algorithm. On the other hand, there is a large market these days
for rewriting programs that were written five years ago based on a no-longer-valid
assumption of small input size. These programs are now too slow, because they used
poor algorithms. For large amounts of input, algorithm 4 is clearly the best choice
(although algorithm 3 is still usable).

Second, the times given do not include the time required to read the input. For
algorithm 4, the time merely to read in the input from a disk is likely to be an order
of magnitude larger than the time required to solve the problem. This is typical of
many efficient algorithms. Reading the data is generally the bottleneck; once the
data are read, the problem can be solved quickly. For inefficient algorithms this
is not true, and significant computer resources must be used. Thus it is important
that, whenever possible, algorithms be efficient enough not to be the bottleneck of a
problem.

Figure 2.3 shows the growth rates of the running times of the four algorithms.
Even though this graph encompasses only values of N ranging from 10 to 100, the
relative growth rates are still evident. Although the graph for algorithm 3 seems
linear, it is easy to verify that it is not by using a straight-edge (or piece of paper).
Figure 2.4 shows the performance for larger values. It dramatically illustrates how
useless inefficient algorithms are for even moderately large amounts of input.

Figure 2.2 Running times of several algorithms for maximum subsequence sum
(in seconds)

Algorithm               1          2           3            4
Time                    O(N³)      O(N²)       O(N log N)   O(N)

Input   N = 10          0.00103    0.00045     0.00066      0.00034
Size    N = 100         0.47015    0.01112     0.00486      0.00063
        N = 1,000       448.77     1.1233      0.05843      0.00333
        N = 10,000      NA         111.13      0.68631      0.03042
        N = 100,000     NA         NA          8.0113       0.29832

Figure 2.3 Plot (N vs. milliseconds) of various maximum subsequence sum
algorithms (Alg 1: O(N³), Alg 2: O(N²), Alg 3: O(N log N), Alg 4: O(N))

Figure 2.4 Plot (N vs. seconds) of various maximum subsequence sum algorithms
(Alg 1: O(N³), Alg 2: O(N²), Alg 3: O(N log N), Alg 4: O(N))

2.4. Running Time Calculations

There are several ways to estimate the running time of a program. The previous
table was obtained empirically. If two programs are expected to take similar times,
probably the best way to decide which is faster is to code them both up and run

them!

Generally, there are several algorithmic ideas, and we would like to eliminate
the bad ones early, so an analysis is usually required. Furthermore, the ability to do
an analysis usually provides insight into designing efficient algorithms. The analysis
also generally pinpoints the bottlenecks, which are worth coding carefully.
To simplify the analysis, we will adopt the convention that there are no particular
units of time. Thus, we throw away leading constants. We will also throw away
low-order terms, so what we are essentially doing is computing a Big-Oh running
time. Since Big-Oh is an upper bound, we must be careful never to underestimate the
running time of the program. In effect, the answer provided is a guarantee that the
program will terminate within a certain time period. The program may stop earlier
than this, but never later.

2.4.1. A Simple Example


Here is a simple program fragment to calculate the sum 1³ + 2³ + ... + N³:

int
Sum( int N )
{
    int i, PartialSum;

/* 1*/    PartialSum = 0;
/* 2*/    for( i = 1; i <= N; i++ )
/* 3*/        PartialSum += i * i * i;
/* 4*/    return PartialSum;
}

The analysis of this program is simple. The declarations count for no time.
Lines 1 and 4 count for one unit each. Line 3 counts for four units per time executed
(two multiplications, one addition, and one assignment) and is executed N times,
for a total of 4N units. Line 2 has the hidden costs of initializing i, testing i <= N,
and incrementing i. The total cost of all these is 1 to initialize, N + 1 for all the
tests, and N for all the increments, which is 2N + 2. We ignore the costs of calling
the function and returning, for a total of 6N + 4. Thus, we say that this function is
O(N).
If we had to perform all this work every time we needed to analyze a program,
the task would quickly become infeasible. Fortunately, since we are giving the
answer in terms of Big-Oh, there are lots of shortcuts that can be taken without
affecting the final answer. For instance, line 3 is obviously an O(1) statement (per
execution), so it is silly to count precisely whether it is two, three, or four units; it
does not matter. Line 1 is obviously insignificant compared with the for loop, so it
is silly to waste time here. This leads to several general rules.

2.4.2. General Rules

RULE 1—FOR LOOPS:

The running time of a for loop is at most the running time of the statements
inside the for loop (including tests) times the number of iterations.
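
For instance (an illustrative fragment of my own, consistent with the rule but not
taken from the text), the following loop is O(N), since an O(1) statement is executed
N times:

for( i = 0; i < N; i++ )
    Sum += A[ i ];    /* O(1) body, N iterations: O(N) total */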

RULE 2—NESTED FOR LOOPS:

The total running time of a statement inside a group of nested loops is the
running time of the statement multiplied by the product of the sizes of all the
for loops. Analyze these inside out.

As an example, the following program fragment is O(N²):

for( i = 0; i < N; i++ )
    for( j = 0; j < N; j++ )
        k++;

RULE 3—CONSECUTIVE STATEMENTS:

These just add (which means that the maximum is the one that counts; see
rule 1(a) on page 16).

As an example, the following program fragment, which has O(N) work followed
by O(N²) work, is also O(N²):

for( i = 0; i < N; i++ )
    A[ i ] = 0;
for( i = 0; i < N; i++ )
    for( j = 0; j < N; j++ )
        A[ i ] += A[ j ] + i + j;

RULE 4—IF/ELSE:

For the fragment

if( Condition )
    S1
else
    S2

the running time of an if/else statement is never more than the running time of
the test plus the larger of the running times of S1 and S2.
Clearly, this can be an overestimate in some cases, but it is never an underestimate.
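
As an illustration (a fragment of my own, not from the text), the fragment below is
charged O(N): O(1) for the test plus the cost of the larger branch, which is the O(N)
loop:

if( N % 2 == 0 )                 /* O(1) test */
    Total = 0;                   /* O(1) branch */
else
    for( i = 0; i < N; i++ )
        Total += A[ i ];         /* O(N) branch; this one dominates */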
Other rules are obvious, but a basic strategy of analyzing from the inside (or
deepest part) out works. If there are function calls, these must be analyzed first. If
there are recursive procedures, there are several options. If the recursion is really
just a thinly veiled for loop, the analysis is usually trivial. For instance, the following
function is really just a simple loop and is O(N):

long int
Factorial( int N )
{
    if( N <= 1 )
        return 1;
    else
        return N * Factorial( N - 1 );
}

This example is really a poor use of recursion. When recursion is properly used,
it is difficult to convert the recursion into a simple loop structure. In this case, the
analysis will involve a recurrence relation that needs to be solved. To see what might
happen, consider the following program, which turns out to be a horrible use of
recursion:
long int
Fib( int N )
{
/* 1*/      if( N <= 1 )
/* 2*/          return 1;
            else
/* 3*/          return Fib( N - 1 ) + Fib( N - 2 );
}

At first glance, this seems like a very clever use of recursion. However, if the
program is coded up and run for values of N around 30, it becomes apparent that
this program is terribly inefficient. The analysis is fairly simple. Let T(N) be the
running time for the function Fib(N). If N = 0 or N = 1, then the running time is
some constant value, which is the time to do the test at line 1 and return. We can
say that T(0) = T(1) = 1 because constants do not matter. The running time for
other values of N is then measured relative to the running time of the base case. For
N >= 2, the time to execute the function is the constant work at line 1 plus the work
at line 3. Line 3 consists of an addition and two function calls. Since the function
calls are not simple operations, they must be analyzed by themselves. The first
function call is Fib(N - 1) and hence, by the definition of T, requires T(N - 1) units
of time. A similar argument shows that the second function call requires T(N - 2)
units of time. The total time required is then T(N - 1) + T(N - 2) + 2, where the
2 accounts for the work at line 1 plus the addition at line 3. Thus, for N >= 2, we
have the following formula for the running time of Fib(N):

T(N) = T(N - 1) + T(N - 2) + 2

Since Fib(N) = Fib(N - 1) + Fib(N - 2), it is easy to show by induction that
T(N) >= Fib(N). In Section 1.2.5, we showed that Fib(N) < (5/3)^N. A similar
calculation shows that (for N > 4) Fib(N) >= (3/2)^N, and so the running time of
this program grows exponentially. This is about as bad as possible. By keeping a
simple array and using a for loop, the running time can be reduced substantially.
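
A minimal sketch of that idea (mine, not from the text; the name FibIter is made
up, and a C99 compiler is assumed for the variable-length array): each Fibonacci
number is computed exactly once, so the running time is O(N).

long int
FibIter( int N )    /* assumes N >= 0 */
{
    long int F[ N + 2 ];                    /* F[ i ] holds Fib( i ) */
    int i;

    F[ 0 ] = F[ 1 ] = 1;                    /* same base cases as Fib */
    for( i = 2; i <= N; i++ )
        F[ i ] = F[ i - 1 ] + F[ i - 2 ];   /* each value computed once */
    return F[ N ];
}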
This program is slow because there is a huge amount of redundant work being
performed, violating the fourth major rule of recursion (the compound interest rule),
which was presented in Section 1.3. Notice that the first call on line 3, Fib(N - 1),
actually computes Fib(N - 2) at some point. This information is thrown away
and recomputed by the second call on line 3. The amount of information thrown
away compounds recursively and results in the huge running time. This is perhaps
the finest example of the maxim "Don't compute anything more than once" and
should not scare you away from using recursion. Throughout this book, we shall
see outstanding uses of recursion.

2.4.3. Solutions for the Maximum Subsequence Sum Problem


We will now present four algorithms to solve the maximum subsequence sum
problem posed earlier. The first algorithm, which merely exhaustively tries all
possibilities, is depicted in Figure 2.5. The indices in the for loop reflect the fact that
in C, arrays begin at 0, instead of 1. Also, the algorithm does not compute the actual
subsequences; additional code is required to do this.
Convince yourself that this algorithm works (this should not take much convincing).
The running time is O(N³) and is entirely due to lines 5 and 6, which
consist of an O(1) statement buried inside three nested for loops. The loop at line 2
is of size N.
The second loop has size N - i, which could be small but could also be of size
N. We must assume the worst, with the knowledge that this could make the final
bound a bit high. The third loop has size j - i + 1, which, again, we must assume
is of size N. The total is O(1 · N · N · N) = O(N³). Statement 1 takes only O(1)
total, and statements 7 and 8 take only O(N²) total, since they are easy expressions
inside only two loops.
It turns out that a more precise analysis, taking into account the actual size
of these loops, shows that the answer is Θ(N³) and that our estimate above was a
factor of 6 too high (which is all right, because constants do not matter). This is
generally true in these kinds of problems. The precise analysis is obtained from the
sum Σ_{i=0}^{N-1} Σ_{j=i}^{N-1} Σ_{k=i}^{j} 1, which tells how many times line 6 is executed. The sum
can be evaluated inside out. In particular, we will use the formulas for the sum of
the first N integers and first N squares. First we have

Σ_{k=i}^{j} 1 = j - i + 1
Figure 2.5 Algorithm 1

int
MaxSubsequenceSum( const int A[ ], int N )
{
    int ThisSum, MaxSum, i, j, k;

/* 1*/      MaxSum = 0;
/* 2*/      for( i = 0; i < N; i++ )
/* 3*/          for( j = i; j < N; j++ )
            {
/* 4*/              ThisSum = 0;
/* 5*/              for( k = i; k <= j; k++ )
/* 6*/                  ThisSum += A[ k ];

/* 7*/              if( ThisSum > MaxSum )
/* 8*/                  MaxSum = ThisSum;
            }
/* 9*/      return MaxSum;
}

Next we evaluate

Σ_{j=i}^{N-1} (j - i + 1) = (N - i + 1)(N - i) / 2

This sum is computed by observing that it is just the sum of the first N - i integers.
To complete the calculation, we evaluate

Σ_{i=0}^{N-1} (N - i + 1)(N - i) / 2 = Σ_{i=1}^{N} (N - i + 2)(N - i + 1) / 2
    = (1/2) Σ_{i=1}^{N} i² - (N + 3/2) Σ_{i=1}^{N} i + (1/2)(N² + 3N + 2) Σ_{i=1}^{N} 1
    = (1/2) · N(N + 1)(2N + 1)/6 - (N + 3/2) · N(N + 1)/2 + (N² + 3N + 2) N / 2
    = (N³ + 3N² + 2N) / 6
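
As a quick sanity check of this count (my own fragment, not part of the text), the
following program tallies how many times the innermost statement of Algorithm 1
executes and compares the tally with the closed form just derived; the variable names
are invented for the example. For N = 100 both numbers printed should be 171,700.

#include <stdio.h>

int
main( void )
{
    int N = 100, i, j, k;
    long Count = 0;

    for( i = 0; i < N; i++ )
        for( j = i; j < N; j++ )
            for( k = i; k <= j; k++ )
                Count++;                    /* mirrors line 6 of Algorithm 1 */

    printf( "counted %ld, formula gives %ld\n",
            Count, ( ( long ) N * N * N + 3L * N * N + 2L * N ) / 6 );
    return 0;
}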
We can avoid the cubic running time by removing a for loop. This is not
always possible, but in this case there are an awful lot of unnecessary computations
present in the algorithm. The inefficiency that the improved algorithm corrects can
be seen by noticing that Σ_{k=i}^{j} A_k = A_j + Σ_{k=i}^{j-1} A_k, so the computation at lines 5
and 6 in algorithm 1 is unduly expensive. Figure 2.6 shows an improved algorithm.
Algorithm 2 is clearly O(N²); the analysis is even simpler than before.
There is a recursive and relatively complicated O(N log N) solution to this
problem, which we now describe. If there didn't happen to be an O(N) (linear)
Figure 2.6 Algorithm 2

int
MaxSubsequenceSum( const int A[ ], int N )
{
    int ThisSum, MaxSum, i, j;

/* 1*/      MaxSum = 0;
/* 2*/      for( i = 0; i < N; i++ )
            {
/* 3*/          ThisSum = 0;
/* 4*/          for( j = i; j < N; j++ )
                {
/* 5*/              ThisSum += A[ j ];

/* 6*/              if( ThisSum > MaxSum )
/* 7*/                  MaxSum = ThisSum;
                }
            }
/* 8*/      return MaxSum;
}

solution, this would be an excellent example of the power of recursion. The
algorithm uses a "divide-and-conquer" strategy. The idea is to split the problem
into two roughly equal subproblems, which are then solved recursively. This is the
"divide" part. The "conquer" stage consists of patching together the two solutions
of the subproblems, and possibly doing a small amount of additional work, to arrive
at a solution for the whole problem.

In our case, the maximum subsequence sum can be in one of three places. Either
it occurs entirely in the left half of the input, or entirely in the right half, or it crosses
the middle and is in both halves. The first two cases can be solved recursively. The
last case can be obtained by finding the largest sum in the first half that includes
the last element in the first half, and the largest sum in the second half that includes
the first element in the second half. These two sums can then be added together. As
an example, consider the following input:

    First Half          Second Half
    4   -3    5   -2    -1    2    6   -2

The maximum subsequence sum for the first half is 6 (elements A1 through A3) and
for the second half is 8 (elements A6 through A7).
The maximum sum in the first half that includes the last element in the first
half is 4 (elements A1 through A4), and the maximum sum in the second half that
includes the first element in the second half is 7 (elements A5 through A7). Thus, the
maximum sum that spans both halves and goes through the middle is 4 + 7 = 11
(elements A1 through A7).


We see, then, that among the three ways to form a large maximum subsequence,
for our example, the best way is to include elements from both halves. Thus, the
answer is 11. Figure 2.7 shows an implementation of this strategy.
The code for algorithm 3 deserves some comment. The general form of the call
for the recursive procedure is to pass the input array along with the left and right
borders, which delimit the portion of the array that is operated upon. A one-line
driver program sets this up by passing the borders 0 and N - 1 along with the array.
Lines 1 to 4 handle the base case. If Left == Right, there is one element, and
it is the maximum subsequence if the element is nonnegative. The case Left > Right
is not possible unless N is negative (although minor perturbations in the code could
mess this up). Lines 6 and 7 perform the two recursive calls. We can see that the
recursive calls are always on a smaller problem than the original, although minor
perturbations in the code could destroy this property. Lines 8 to 12 and 13 to 17
calculate the two maximum sums that touch the center divider. The sum of these
two values is the maximum sum that spans both halves. The pseudoroutine Max3
returns the largest of the three possibilities.

Algorithm 3 clearly requires more effort to code than either of the two previous
algorithms. However, shorter code does not always mean better code. As we have
seen in the earlier table showing the
running times of the algorithms, this algorithm
is considerably faster than the other two for all but the smallest of input sizes.

static int
MaxSubSum( const int A[ ], int Left, int Right )
{
    int MaxLeftSum, MaxRightSum;
    int MaxLeftBorderSum, MaxRightBorderSum;
    int LeftBorderSum, RightBorderSum;
    int Center, i;

/* 1*/      if( Left == Right )  /* Base case */
/* 2*/          if( A[ Left ] > 0 )
/* 3*/              return A[ Left ];
            else
/* 4*/              return 0;

/* 5*/      Center = ( Left + Right ) / 2;
/* 6*/      MaxLeftSum = MaxSubSum( A, Left, Center );
/* 7*/      MaxRightSum = MaxSubSum( A, Center + 1, Right );

/* 8*/      MaxLeftBorderSum = 0; LeftBorderSum = 0;
/* 9*/      for( i = Center; i >= Left; i-- )
            {
/*10*/          LeftBorderSum += A[ i ];
/*11*/          if( LeftBorderSum > MaxLeftBorderSum )
/*12*/              MaxLeftBorderSum = LeftBorderSum;
            }

/*13*/      MaxRightBorderSum = 0; RightBorderSum = 0;
/*14*/      for( i = Center + 1; i <= Right; i++ )
            {
/*15*/          RightBorderSum += A[ i ];
/*16*/          if( RightBorderSum > MaxRightBorderSum )
/*17*/              MaxRightBorderSum = RightBorderSum;
            }

/*18*/      return Max3( MaxLeftSum, MaxRightSum,
/*19*/                   MaxLeftBorderSum + MaxRightBorderSum );
}

int
MaxSubsequenceSum( const int A[ ], int N )
{
    return MaxSubSum( A, 0, N - 1 );
}

Figure 2.7 Algorithm 3



The running time is analyzed in much the same way as for the program that
computes the Fibonacci numbers. Let T(N) be the time it takes to solve a maximum
subsequence sum problem of size N. If N = 1, then the program takes some constant
amount of time to execute lines 1 to 4, which we shall call one unit. Thus, T(1) = 1.
Otherwise, the program must perform two recursive calls, the two for loops between
lines 9 and 17, and some small amount of bookkeeping, such as lines 5 and 18.
The two for loops combine to touch every element from A0 to AN-1, and there is
constant work inside the loops, so the time expended in lines 9 to 17 is O(N). The
code in lines 1 to 5, 8, 13, and 18 is all a constant amount of work and can thus
be ignored compared with O(N). The remainder of the work is performed in lines
6 and 7. These lines solve two subsequence problems of size N/2 (assuming N is
even). Thus, these lines take T(N/2) units of time each, for a total of 2T(N/2). The
total time for the algorithm then is 2T(N/2) + O(N). This gives the equations

T(1) = 1
T(N) = 2T(N/2) + O(N)
To simplify the calculations, we can replace the O(N) term in the equation above
with N; since T(N) will be expressed in Big-Oh notation anyway, this will not affect
the answer. In Chapter 7, we shall see how to solve this equation rigorously. For now,
if T(N) = 2T(N/2) + N, and T(1) = 1, then T(2) = 4 = 2 * 2, T(4) = 12 = 4 * 3,
T(8) = 32 = 8 * 4, and T(16) = 80 = 16 * 5. The pattern that is evident, and can be
derived, is that if N = 2^k, then T(N) = N * (k + 1) = N log N + N = O(N log N).
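
The pattern can also be checked mechanically (a sketch of mine, not from the text;
Recur is an invented name) by evaluating the recurrence directly for powers of 2 and
printing it next to N log N + N:

#include <stdio.h>

static long
Recur( long N )            /* T(N) = 2T(N/2) + N, T(1) = 1 */
{
    if( N == 1 )
        return 1;
    return 2 * Recur( N / 2 ) + N;
}

int
main( void )
{
    long N;
    int k;

    for( N = 2, k = 1; N <= 16; N *= 2, k++ )
        printf( "T(%ld) = %ld, N log N + N = %ld\n",
                N, Recur( N ), N * k + N );    /* log N = k, since N = 2^k */
    return 0;
}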

This analysis assumes N is even, since otherwise N/2 is not defined. By the
recursive nature of the analysis, it is really valid only when N is a power of 2, since
otherwise we eventually get a subproblem that is not an even size, and the equation
is invalid. When N is not a power of 2, a somewhat more complicated analysis is
required, but the Big-Oh result remains unchanged.
In future chapters, we will see several clever applications of recursion. Here,
we present a fourth
algorithm to find the maximum subsequence sum. This algorithm
is simpler to implement than the recursive algorithm and also is more efficient. It is
shown in Figure 2.8.
It should be clear why the time bound is correct, but it takes a little thought to
see why the algorithm actually works; this is left to the reader. An extra advantage
of this algorithm is that it makes only one pass through the data, and once A[i] is
read and processed, it does not need to be remembered. Thus, if the array is on a
disk or tape, it can be read sequentially, and there is no need to store any part of
it in main memory. Furthermore, at any point in time, the algorithm can correctly
give an answer to the subsequence problem for the data it has already read (the
other algorithms do not share this property). Algorithms that can do this are called
on-line algorithms. An on-line algorithm that requires only constant space and runs
in linear time is just about as good as possible.

2.4.4. Logarithms in the Running Time


The most confusing aspect of analyzing algorithms probably centers around
the logarithm. We have already seen that some divide-and-conquer algorithms will

int
MaxSubsequenceSum( const int A[ ], int N )
{
    int ThisSum, MaxSum, j;

/* 1*/      ThisSum = MaxSum = 0;
/* 2*/      for( j = 0; j < N; j++ )
            {
/* 3*/          ThisSum += A[ j ];

/* 4*/          if( ThisSum > MaxSum )
/* 5*/              MaxSum = ThisSum;
/* 6*/          else if( ThisSum < 0 )
/* 7*/              ThisSum = 0;
            }
/* 8*/      return MaxSum;
}

Figure 2.8 Algorithm 4

run in O(N log N) time. Besides divide-and-conquer algorithms, the most frequent
appearance of logarithms centers around the following general rule: An algorithm is
O(log N) if it takes constant (O(1)) time to cut the problem size by a fraction (which
is usually 1/2). On the other hand, if constant time is required to merely reduce the
problem by a constant amount (such as to make the problem smaller by 1), then the
algorithm is O(N).
It should be obvious that only special kinds of problems can be O(log N). For
instance, if the input is a list of N numbers, an algorithm must take Ω(N) merely to
read the input in. Thus, when we talk about O(log N) algorithms for these kinds of
problems, we usually presume that the input is preread. We provide three examples
of logarithmic behavior.

Binary Search

The first example is usually referred to as binary search.

BINARY SEARCH:

Given an integer X and integers A0, A1, ..., AN-1, which are presorted and
already in memory, find i such that Ai = X, or return i = -1 if X is not in the
input.

The obvious solution consists of scanning through the list from left to right
and runs in linear time. However, this algorithm does not take advantage of the
fact that the list is sorted, and is thus not likely to be best. A better strategy is to
check if X is the middle element. If so, the answer is at hand. If X is smaller than the

middle element, we can apply the same strategy to the sorted subarray to the left of the

int
BinarySearch( const ElementType A[ ], ElementType X, int N )
{
    int Low, Mid, High;

/* 1*/      Low = 0; High = N - 1;
/* 2*/      while( Low <= High )
            {
/* 3*/          Mid = ( Low + High ) / 2;
/* 4*/          if( A[ Mid ] < X )
/* 5*/              Low = Mid + 1;
            else
/* 6*/          if( A[ Mid ] > X )
/* 7*/              High = Mid - 1;
            else
/* 8*/              return Mid;      /* Found */
            }
/* 9*/      return NotFound;         /* NotFound is defined as -1 */
}

Figure 2.9 Binary search

middle element; likewise, if X is larger than the middle element, we look to the right
half. (There is also the case of when to stop.) Figure 2.9 shows the code for binary
search (the answer is Mid). As usual, the code reflects C's convention that arrays
begin with index 0.
Clearly, all the work done inside the loop takes O(1) per iteration, so the analysis
requires determining the number of times around the loop. The loop starts with
High - Low = N - 1 and finishes with High - Low >= -1. Every time through
the loop the value High - Low must be at least halved from its previous value; thus,
the number of times around the loop is at most ⌈log(N - 1)⌉ + 2. (As an example, if
High - Low = 128, then the maximum values of High - Low after each iteration
are 64, 32, 16, 8, 4, 2, 1, 0, -1.) Thus, the running time is O(log N). Equivalently,
we could write a recursive formula for the running time, but this kind of brute-force
approach is usually unnecessary when you understand what is really going on
and why.
Binary search can be viewed as our first data structure implementation. It
supports the Find operation in O(log N) time, but all other operations (in particular
Insert) require O(N) time. In applications where the data are static (that is, insertions
and deletions are not allowed), this could be very useful. The input would then need
to be sorted once, but afterward accesses would be fast. An example is a program

that needs to maintain information about the periodic table of elements (which
arises in chemistry and physics). This table is relatively stable, as new elements are
added infrequently. The element names could be kept sorted. Since there are only
about 110 elements, at most eight accesses would be required to find an element.
Performing a sequential search would require many more accesses.
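
As a usage sketch (my own example, not from the text), assuming the routine of
Figure 2.9 is compiled in and ElementType is int:

#include <stdio.h>

typedef int ElementType;                  /* assumption for this example */
int BinarySearch( const ElementType A[ ], ElementType X, int N );

int
main( void )
{
    ElementType A[ ] = { 1, 3, 5, 7, 9, 13, 21 };   /* presorted input */
    int N = sizeof( A ) / sizeof( A[ 0 ] );

    printf( "%d\n", BinarySearch( A, 9, N ) );   /* prints 4 */
    printf( "%d\n", BinarySearch( A, 8, N ) );   /* prints -1 (not found) */
    return 0;
}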

Euclid’s Algorithm

A second example is Euclid's algorithm for computing the greatest common divisor.
The greatest common divisor (Gcd) of two integers is the largest integer that divides
both. Thus, Gcd(50, 15) = 5. The algorithm in Figure 2.10 computes Gcd(M, N),
assuming M >= N. (If N > M, the first iteration of the loop swaps them.)
The algorithm works by continually computing remainders until 0 is reached.
The last nonzero remainder is the answer. Thus, if M = 1,989 and N = 1,590, then
the sequence of remainders is 399, 393, 6, 3, 0. Therefore, Gcd(1989, 1590) = 3.
As the example shows, this is a fast algorithm.
As before, estimating the entire running time of the algorithm depends on

determining how long the sequence of remainders is. Although log N seems like a

good answer, it is not at all obvious that the value of the remainder has to decrease
by a constant factor, since we see that the remainder went from 399 to only 393
in the example. Indeed, the remainder does not decrease by a constant factor in
one iteration. However, we can prove that after two iterations, the remainder is at

most half of its original value. This would show that the number of iterations is at
most 2 log N =
O(log N)
and establish the running time. This proof is easy, so we

include it here. It follows directly from the following theorem.

THEOREM 2.1.

If M > N, then M mod N < M/2.

PROOF:

There are two cases. If N <= M/2, then since the remainder is smaller than N,
the theorem is true for this case. The other case is N > M/2. But then N goes
into M once with a remainder M - N < M/2, proving the theorem.

One might wonder if this is the best bound possible, since 2 log N is about 20
for our example, and only seven operations were performed. It turns out that the

Figure 2.10 Euclid’s algorithm

unsigned int
Gcd( unsigned int M, unsigned int N )
{
    unsigned int Rem;

/* 1*/      while( N > 0 )
            {
/* 2*/          Rem = M % N;
/* 3*/          M = N;
/* 4*/          N = Rem;
            }
/* 5*/      return M;
}

constant can be improved slightly, to roughly 1.44 log N, in the worst case (which
is achievable if M and N are consecutive Fibonacci numbers). The average-case

performance of Euclid’s algorithm requires pages and pages of highly sophisticated


mathematical analysis, and it turns out that the average number of iterations is
about (12 ln 2 ln N)/π² + 1.47.
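
A brief usage sketch (mine, not from the text) of the routine in Figure 2.10,
reproducing the example above:

#include <stdio.h>

unsigned int Gcd( unsigned int M, unsigned int N );   /* from Figure 2.10 */

int
main( void )
{
    printf( "%u\n", Gcd( 1989, 1590 ) );   /* prints 3, as computed in the text */
    printf( "%u\n", Gcd( 50, 15 ) );       /* prints 5 */
    return 0;
}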

Exponentiation

Our last example in this section deals with raising an integer to a power (which is
also an integer). Numbers that result from exponentiation are generally quite large,
so an analysis works only if we can assume that we have a machine that can store
such large integers (or a compiler that can simulate this). We will count the number
of multiplications as the measurement of running time.

The obvious algorithm to compute X^N uses N - 1 multiplications. The recursive
algorithm in Figure 2.11 does better. Lines 1 to 4 handle the base case of the
recursion. Otherwise, if N is even, we have X^N = X^(N/2) * X^(N/2), and if N is odd,
X^N = X^((N-1)/2) * X^((N-1)/2) * X.
For instance, to compute X^62, the algorithm does the following calculations,
which involve only nine multiplications:

X^3 = (X^2)X,  X^7 = (X^3)^2 X,  X^15 = (X^7)^2 X,  X^31 = (X^15)^2 X,  X^62 = (X^31)^2
The number of multiplications required is clearly at most 2 log N, because at most

two multiplications (if N is odd) are required to halve the problem. Again, a

recurrence formula can be written and solved. Simple


intuition obviates the need for
a brute-force approach.
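
A small usage sketch (mine, not from the text), assuming the Pow routine of
Figure 2.11 and an IsEven routine or macro are compiled in; small arguments are
chosen so the results fit in a long int:

#include <stdio.h>

long int Pow( long int X, unsigned int N );   /* from Figure 2.11 */

int
main( void )
{
    printf( "%ld\n", Pow( 2, 16 ) );   /* prints 65536 */
    printf( "%ld\n", Pow( 3, 5 ) );    /* prints 243 */
    return 0;
}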
It is sometimes interesting to see how much the code can be tweaked without
affecting correctness. In Figure 2.11, lines 3 to 4 are actually unnecessary, because
if N is 1, then line 7 does the right thing. Line 7 can also be rewritten as
/* 7*/ return Pow( X, N - 1 ) * X;
without affecting the correctness of the program. Indeed, the program will still run

in O(log N), because the sequence of multiplications is the same as before. However,

Figure 2.11 Efficient exponentiation

long int
Pow( long int X, unsigned int N )
{
/* 1*/      if( N == 0 )
/* 2*/          return 1;
/* 3*/      if( N == 1 )
/* 4*/          return X;
/* 5*/      if( IsEven( N ) )
/* 6*/          return Pow( X * X, N / 2 );
            else
/* 7*/          return Pow( X * X, N / 2 ) * X;
}

all of the following alternatives for line 6 are bad, even though they look correct:

/* 6a*/ return Pow( Pow( X, 2 ), N / 2 );
/* 6b*/ return Pow( Pow( X, N / 2 ), 2 );
/* 6c*/ return Pow( X, N / 2 ) * Pow( X, N / 2 );

Both lines 6a and 6b are incorrect because when N is 2, one of the recursive calls to
Pow has 2 as the second argument. Thus no progress is made, and an infinite loop

results (in an eventual crash).


Using line 6c affects the efficiency, because there are now two recursive calls
of size N/2 instead of only one. An analysis will show that the running time is
no longer O(logN). We leave it as an exercise to the reader to determine the new

running time.

2.4.5. Checking Your Analysis


Once an analysis has been performed, it is desirable to see if the answer is correct
and as good as possible. One way to do this is to code up the program and see if
the empirically observed running time matches the running time predicted by the
analysis. When N doubles, the running time goes up by a factor of 2 for linear
programs, 4 for quadratic programs, and 8 for cubic programs. Programs that run
in logarithmic time take only an additive constant longer when N doubles, and
programs that run in O(N log N) take slightly more than twice as long to run under
the same circumstances. These increases can be hard to spot if the lower-order terms
have relatively large coefficients and N is not large enough. An example is the jump
from N = 10 to N = 100 in the running time for the various implementations of
the maximum subsequence sum problem. It also can be very difficult to differentiate
linear programs from O(N log N) programs purely on empirical evidence.
Another commonly used trick to verify that some program is O(f(N)) is to
compute the values T(N)/f(N) for a range of N (usually spaced out by factors of
2), where T(N) is the empirically observed running time. If f(N) is a tight answer
for the running time, then the computed values converge to a positive constant. If
f(N) is an overestimate, the values converge to zero. If f(N) is an underestimate
and hence wrong, the values diverge.
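
A minimal sketch of this trick (my own illustration, not from the text): time a
deliberately quadratic routine for doubling values of N and print T(N)/N, T(N)/N²,
and T(N)/N³. The first column should diverge, the second should settle near a
constant, and the third should approach zero; the resolution of clock() is assumed
to be adequate for these sizes.

#include <stdio.h>
#include <time.h>

static unsigned long
Quadratic( int N )                 /* an intentionally O(N^2) routine */
{
    unsigned long Sum = 0;
    int i, j;

    for( i = 0; i < N; i++ )
        for( j = 0; j < N; j++ )
            Sum += i + j;
    return Sum;
}

int
main( void )
{
    int N;
    unsigned long Sink = 0;        /* keep the calls from being optimized away */

    for( N = 1000; N <= 16000; N *= 2 )
    {
        clock_t Start = clock( );
        double T;

        Sink += Quadratic( N );
        T = ( double )( clock( ) - Start ) / CLOCKS_PER_SEC;
        printf( "N = %5d  T/N = %e  T/N^2 = %e  T/N^3 = %e\n",
                N, T / N, T / ( ( double ) N * N ),
                T / ( ( double ) N * N * N ) );
    }
    printf( "%lu\n", Sink );
    return 0;
}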
As an example, the program fragment in Figure 2.12 computes the probability
that two distinct positive integers, less than or equal to N and chosen randomly, are
relatively prime. (As N gets large, the answer approaches 6/π².)
You should be able to do the analysis for this program instantaneously. Figure
2.13 shows the actual observed running time for this routine on a real computer. The
table shows that the last column is most likely, and thus the analysis that you should
have gotten is probably correct. Notice that there is not a great deal of difference
between O(N²) and O(N² log N), since logarithms grow so slowly.

2.4.6. A Grain of Salt
Sometimes the analysis is shown empirically to be an overestimate. If this is the
case, then either the analysis needs to be tightened (usually by a clever observation),
or it may be that the
average running time is significantly less than the worst-case

Rel = 0; Tot = 0;
for( i = 1; i <= N; i++ )
    for( j = i + 1; j <= N; j++ )
    {
        Tot++;
        if( Gcd( i, j ) == 1 )
            Rel++;
    }

printf( "Percentage of relatively prime pairs is %f\n",
        ( double ) Rel / Tot );

Figure 2.12 Estimate the probability that two random numbers are relatively prime

N        CPU time (T)    T/N²       T/N³           T/(N² log N)

  100        022         .002200    .000022000     .0004777
  200        056         .001400    .000007000     .0002642
  300        118         .001311    .000004370     .0002299
  400        207         .001294    .000003234     .0002159
  500        318         .001272    .000002544     .0002047
  600        466         .001294    .000002157     .0002024
  700        644         .001314    .000001877     .0002006
  800        846         .001322    .000001652     .0001977
  900      1,086         .001341    .000001490     .0001971
1,000      1,362         .001362    .000001362     .0001972
1,500      3,240         .001440    .000000960     .0001969
2,000      5,949         .001482    .000000740     .0001947
4,000     25,720         .001608    .000000402     .0001938

Figure 2.13 Empirical running times for the previous routine

running time and no improvement in the bound is possible. For many complicated
algorithms the worst-case bound is achievable by some bad input but is usually an
overestimate in practice. Unfortunately, for most of these problems, an average-case
analysis is extremely complex (in many cases still unsolved), and a worst-case bound,
even
though overly pessimistic, is the best analytical result known.

Summary

This chapter gives some hints on how to analyze the complexity of programs.
Unfortunately, it is not a complete guide. Simple programs usually have simple
Another Random Scribd Document
with Unrelated Content
[921] Historia, dec. ii. lib. x, cap. 18.

[922] [Cf. the bibliography of these letters in chap. vi. The notes in
Brinton’s Floridian Peninsula are a good guide to the study of the
various Indian tribes of the peninsula at this time.—Ed.]

[923] [Cf. chap. vi. of the present volume.—Ed.]

[924] Vol. xxvi. pp. 77-135.

[925] Epis. June 20, 1524, in Opus epistolarum, pp. 471-476.

[926] Historia, lib. xxxiii. cap. 2, p. 263.

[927] Historia, dec. iii. lib. v. cap. 5. Cf. also Barcia, Ensayo
cronológico, p. 8, and Galvano (Hakluyt Society’s ed.), pp. 133,
153.

[928] Coleccion de documentos inéditos, x. 40-47; and the


“testimonio de la capitulacion” in vol. xiv. pp. 503-516.

[929] Vol. xxxiv. pp. 563-567; xxxv. 547-562.

[930] Vol. iii. p. 69. His conjectures and those of modern writers
(Stevens, Notes, p. 48), accordingly require no examination. As
the documents of the first voyage name both 33° 30´ and 35° as
the landfall, conjecture is idle.

[931] Dec. ii. lib. xi. cap. 6. This statement is adopted by many
writers since.

[932] Pedro M. Marquez to the King, Dec. 12, 1586.

[933] Gomara, Historia, cap. xlii.; Herrera, Historia, dec. iii. lib. v.
cap. 5.

[934] Vol. ii. lib. xxi. cap. 8 and 9.

[935] Ecija, Relacion del viage (June-September, 1609).

[936] Vol. iii. pp. 72-73. Recent American writers have taken
another view. Cf. Brevoort, Verrazano, p. 70; Murphy, Verrazzano,
p. 123.
[937] Historia, lib. xxxvii. cap. 1-4, in vol. iii. pp. 624-633.

[938] Documentos inéditos, iii. 347.

[939] Galvano (Hakluyt Society’s ed., p. 144) gives the current


account of his day.

[940] Cf. Vol. IV. p. 28. The capitulacion is given in the Documentos
inéditos, xxii. 74.

[941] [Harrisse, Bibl. Amer. Vet., no. 239; Sabin, vol. iii. no. 9,767.
There is a copy in the Lenox Library. Cf. the Relacion as given in
the Documentos inéditos, vol. xiv. pp. 265-279, and the
“Capitulacion que se tomó con Panfilo de Narvaez” in vol. xxii. p.
224. There is some diversity of opinion as to the trustworthiness
of this narrative; cf. Helps, Spanish Conquest, iv. 397, and
Brinton’s Floridian Peninsula, p. 17. “Cabeça has left an artless
account of his recollections of the journey; but his memory
sometimes called up incidents out of their place, so that his
narrative is confused.”—Bancroft: History of the United States,
revised edition, vol. i. p. 31.—Ed.]

[942] The Comentarios added to this edition were by Pero


Hernandez, and relate to Cabeza de Vaca’s career in South
America.

[943] [There are copies of this edition in the Carter-Brown


(Catalogue, vol. i. no. 197) and Harvard College libraries; cf.
Sabin, vol. iii. no. 9,768. Copies were sold in the Murphy (no.
441), Brinley (no. 4,360 at $34), and Beckford (Catalogue, vol. iii.
no. 183) sales. Rich (no. 28) priced a copy in 1832 at £4 4s.
Leclerc (no. 2,487) in 1878 prices a copy at 1,500 francs; and
sales have been reported at £21, £25, £39 10s., and £42.—Ed.]

[944] [Vol. i. no. 6. Cf. Carter-Brown, iii. 893; Field, Indian


Bibliography, no. 79.—Ed.]

[945] [Nova typis transacta navigatio Novi Orbis, 1621. Ardoino’s


Exámen apologético was first published separately in 1736
(Carter-Brown, iii. 545).—Ed.]

[946] Vol. iii. pp. 310-330.


[947] Following the 1555 edition, and published in his Voyages, at
Paris.

[948] Vol. iv. pp. 1499-1556.

[949] [Menzies Catalogue, no. 315; Field, Indian Bibliography, nos.


227-229.—Ed.]

[950] [Cf. Field, Indian Bibliog., no. 364.,—Ed.]

[951] Printed by Munsell at Albany, at the charge of the late Henry


C. Murphy. [Dr. Shea added to it a memoir of Mr. Smith, and Mr.
T. W. Field a memoir of Cabeza de Vaca.—Ed.]

[952] [The writing of his narrative, not during but after the
completion of his journey, does not conduce to making the
statements of the wanderer very explicit, and different
interpretations of his itinerary can easily be made. In 1851 Mr.
Smith made him cross the Mississippi within the southern
boundary of Tennessee, and so to pass along the Arkansas and
Canadian rivers to New Mexico, crossing the Rio Grande in the
neighborhood of thirty-two degrees. In his second edition he
tracks the traveller nearer the Gulf of Mexico, and makes him
cross the Rio Grande near the mouth of the Conchos River in
Texas, which he follows to the great mountain chain, and then
crosses it. Mr. Bartlett, the editor of the Carter-Brown Catalogue
(see vol. i. p. 188), who has himself tracked both routes, is not
able to decide between them. Davis, in his Conquest of New
Mexico, also follows Cabeza de Vaca’s route. H. H. Bancroft
(North Mexican States, i. 63) finds no ground for the northern
route, and gives (p. 67) a map of what he supposes to be the
route. There is also a map in Paul Chaix’ Bassin du Mississipi au
seizième siècle. Cf. also L. Bradford Prince’s New Mexico (1883),
p. 89.—Ed.] The buffalo and mesquite afford a tangible means of
fixing the limits of his route.

[953] Including the petition of Narvaez to the King and the royal
memoranda from the originals at Seville (p. 207), the instructions
to the factor (p. 211), the instructions to Cabeza de Vaca (p.
218), and the summons to be made by Narvaez (p. 215). Cf.
French’s Historical Collections of Louisiana, second series, ii. 153;
Historical Magazine, April, 1862, and January and August, 1867.
[954] Smith’s Cabeça de Vaca, p. 100; Torquemada (Monarquia
Indiana, 1723, iii. 437-447) gives Lives of these friars. Barcia says
Xuarez was made a bishop; but Cabeza de Vaca never calls him
bishop, but simply commissary, and the portrait at Vera Cruz has
no episcopal emblems. Torquemada in his sketch of Xuarez
makes no allusion to his being made a bishop. and the name is
not found in any list of bishops. We owe to Mr. Smith another
contribution to the history of this region and this time, in a
Coleccion de varios documentos para la historia de la Florida y
tierras adyacentes,—only vol. i. of the contemplated work
appearing at Madrid in 1857. It contained thirty-three important
papers from 1516 to 1569, and five from 1618 to 1794; they are
for the most part from the Simancas Archives. This volume has a
portrait of Ferdinand V., which is reproduced ante, p. 85. Various
manuscripts of Mr. Smith are now in the cabinet of the New York
Historical Society.

[955] Oviedo’s account is translated in the Historical Magazine, xii.


141, 204, 267, 347. [H. H. Bancroft (No. Mexican States, i. 62)
says that the collation of this account in Oviedo (vol. iii. pp. 582-
618) with the other is very imperfectly done by Smith. He refers
also to careful notes on it given by Davis in his Spanish Conquest
of New Mexico, pp. 20-108. Bancroft (pp. 62, 63) gives various
other references to accounts, at second hand, of this expedition.
Cf. also L. P. Fisher’s paper in the Overland Monthly, x. 514.
Galvano’s summarized account will be found in the Hakluyt
Society’s edition, p. 170.—Ed.]

[956] Bancroft, United States, i. 27.

[957] Cabeça de Vaca, p. 58; cf. Fairbanks’s Florida, chap. ii.

[958] Cabeça de Vaca, pp. 20, 204.

[959] [Tampa is the point selected by H. H. Bancroft (No. Mexican


States, i. 60); cf. Brinton’s note on the varying names of Tampa
(Floridian Peninsula, p. 113).—Ed.]

[960] B. Smith’s De Soto, pp. 47, 234.

[961] Nouvelle France, iii. 473.

[962] Barcia, p. 308. The Magdalena may be the Apalachicola, on


which in the last century Spanish maps laid down Echete; cf.
Leroz, Geographia de la America (1758).

[963] The manuscript is in the Hydrographic Bureau at Madrid. The


Lisbon Academy printed it in their (1844) edition of the Elvas
narrative. Cf. Smith’s Soto, pp. 266-272; Historical Magazine, v.
42; Documentos inéditos, xxii. 534. [It is dated April 20, 1537. In
the following August Cabeza de Vaca reached Spain, to find that
Soto had already secured the government of Florida; and was
thence turned to seek the government of La Plata. It was
probably before the tidings of Narvaez’ expedition reached Spain
that Soto wrote the letter regarding a grant he wished in Peru,
which country he had left on the outbreak of the civil broils. This
letter was communicated to the Historical Magazine (July, 1858,
vol. ii. pp. 193-223) by Buckingham Smith, with a fac-simile of
the signature, given on an earlier page (ante, p. 253).—Ed.]

[964] [Rich in 1832 (no. 34) cited a copy at £31 10s., which at that
time he believed to be unique, and the identical one referred to
by Pinelo as being in the library of the Duque de Sessa. There is
a copy in the Grenville Collection, British Museum, and another is
in the Lenox Library (B. Smith’s Letter of De Soto, p. 66). It was
reprinted at Lisbon in 1844 by the Royal Academy at Lisbon
(Murphy, no. 1,004; Carter-Brown, vol. i. no. 596). Sparks says of
it: “There is much show of exactness in regard to dates; but the
account was evidently drawn up for the most part from memory,
being vague in its descriptions and indefinite as to localities,
distances, and other points.” Field says it ranks second only to
the Relation of Cabeza de Vaca as an early authority on the
Indians of this region. There was a French edition by Citri de la
Guette in 1685, which is supposed to have afforded a text for the
English translation of 1686 entitled A Relation of the Conquest of
Florida by the Spaniards (see Field’s Indian Bibliography, nos.
325, 340). These editions are in Harvard College Library. Cf.
Sabin, Dictionary, vi. 488, 491, 492; Stevens, Historical
Collections, i. 844; Field, Indian Bibliography, no. 1,274; Carter-
Brown, vol. iii. nos. 1,324, 1,329; Arana, Bibliografía de obras
anónimas (Santiago de Chile, 1882), no. 200. The Gentleman of
Elvas is supposed by some to be Alvaro Fernandez; but it is a
matter of much doubt (cf. Brinton’s Floridian Peninsula, p. 20).
There is a Dutch version in Gottfried and Vander Aa’s Zee-und
Landreizen (1727), vol. vii. (Carter-Brown, iii. 117).—Ed.]
[965] [Carter-Brown, vol. ii. no. 86; Murphy, no. 1,118. Rich (no.
110) priced it in 1832 at £2 2s.—Ed.]

[966] Field, Indian Bibliography, no. 1,338.

[967] [It is also in Vander Aa’s Versameling (Leyden, 1706). The


Relaçam of the Gentleman of Elvas has, with the text of
Garcilasso de la Vega and other of the accredited narratives of
that day, contributed to the fiction which, being published under
the sober title of Histoire naturelle et morale des Iles Antilles
(Rotterdam, 1658), passed for a long time as unimpeached
history. The names of César de Rochefort and Louis de Poincy are
connected with it as successive signers of the introductory matter.
There were other editions of it in 1665, 1667, and 1681, with a
title-edition in 1716. An English version, entitled History of the
Caribby Islands, was printed in London in 1666. Cf. Duyckinck,
Cyclopædia of American Literature, supplement, p. 12; Leclerc,
nos. 1,332-1,335, 2,134-2,137.—Ed.]

[968] [A copy of the original Spanish manuscript is in the Lenox


Library.—Ed.]

[969] Recueil des pièces sur la Floride.

[970] In the volume already cited, including Hakluyt’s version of the


Elvas narrative. It is abridged in French’s Historical Collections of
Louisiana, apparently from the same source.

[971] Pages 47-64. Irving describes it as “the confused statement


of an illiterate soldier.” Cf. Documentos inéditos, iii. 414.

[972] [Carter-Brown, vol. ii. no. 42; Sunderland, vol. v. no. 12,815;
Leclerc, no. 881, at 350 francs; Field, Indian Bibliography no.
587; Brinley, no. 4,353. Rich (no. 102) priced it in 1832 at £2 2s.
—Ed.]

[973] [Brinton (Floridian Peninsula, p. 23) thinks Garcilasso had


never seen the Elvas narrative; but Sparks (Marquette, in
American Biography, vol. x.) intimates that it was Garcilasso’s
only written source.—Ed.]

[974] [Theodore Irving, The Conquest of Florida by Hernando de


Soto, New York, 1851. The first edition appeared in 1835, and
there were editions printed in London in 1835 and 1850. The
book is a clever popularizing of the original sources, with main
dependence on Garcilasso (cf. Field, Indian Bibliography, no.
765), whom its author believes he can better trust, especially as
regards the purposes of De Soto, wherein he differs most from
the Gentleman of Elvas. Irving’s championship of the Inca has not
been unchallenged; cf. Rye’s Introduction to the Hakluyt Society’s
volume. The Inca’s account is more than twice as long as that of
the Gentleman of Elvas, while Biedma’s is very brief,—a dozen
pages or so. Davis (Conquest of New Mexico, p. 25) is in error in
saying that Garcilasso accompanied De Soto.—Ed.]

[975] [There was an amended edition published by Barcia at Madrid


in 1723 (Carter-Brown, iii. 328; Leclerc, no. 882, at 25 francs);
again in 1803; and a French version by Pierre Richelet, Histoire
de la conquête de la Floride, was published in 1670, 1709, 1711,
1731, 1735, and 1737 (Carter-Brown, vol. ii. no. 1,050; vol. iii.
nos. 132, 470; O’Callaghan Catalogue, no. 965). A German
translation by H. L. Meier, Geschichte der Eroberung von Florida,
was printed at Zelle in 1753 (Carter-Brown, vol. iii. no. 997) with
many notes, and again at Nordhausen in 1785. The only English
version is that embodied in Bernard Shipp’s History of Hernando
de Soto and Florida (p. 229, etc.),—a stout octavo, published in
Philadelphia in 1881. Shipp uses, not the original, but Richelet’s
version, the Lisle edition of 1711, and prints it with very few
notes. His book covers the expeditions to North America between
1512 and 1568, taking Florida in its continental sense; but as De
Soto is his main hero, he follows him through his Peruvian career.
Shipp’s method is to give large extracts from the most accessible
early writers, with linking abstracts, making his book one mainly
of compilation.—Ed.]

[976] Letter of Hernando de Soto, and Memoir of Hernando de


Escalante Fontaneda. [The transcript of the Fontaneda Memoir is
marked by Muñoz “as a very good account, although it is by a
man who did not understand the art of writing, and therefore
many sentences are incomplete. On the margin of the original [at
Simancas] are points made by the hand of Herrera, who
doubtless drew on this for that part [of his Historia general]
about the River Jordan which he says was sought by Ponce de
Leon.” This memoir on Florida and its natives was written in Spain
about 1575. It is also given in English in French’s Historical
Collection of Louisiana (1875), p. 235, from the French of
Ternaux; cf. Brinton’s Floridian Peninsula, p. 26. The Editor
appends various notes and a comparative statement of the
authorities relative to the landing of De Soto and his subsequent
movements, and adds a list of the original authorities on De
Soto’s expedition and a map of a part of the Floridian peninsula.
The authorities are also reviewed by Rye in the Introduction to
the Hakluyt Society’s volume. Smith also printed the will of De
Soto in the Hist. Mag. (May, 1861), v. 134.—Ed.]

[977] [A memorial of Alonzo Vasquez (1560), asking for privileges


in Florida, and giving evidences of his services under De Soto, is
translated in the Historical Magazine (September, 1860), iv. 257.—
Ed.]

[978] [Buckingham Smith has considered the question of De Soto’s


landing in a paper, “Espiritu Santo,” appended to his Letter of De
Soto (Washington, 1854), p. 51.—Ed.]

[979] [Colonel Jones epitomizes the march through Georgia in


chap. ii. of his History of Georgia (Boston, 1883). In the Annual
Report of the Smithsonian Institution, 1881, p. 619, he figures
and describes two silver crosses which were taken in 1832 from
an Indian mound in Murray County, Georgia, at a spot where he
believed De Soto to have encamped (June, 1540), and which he
inclines to associate with that explorer. Stevens (History of
Georgia, i. 26) thinks but little positive knowledge can be made
out regarding De Soto’s route.—Ed.]

[980] [Pages 25-41. Pickett in 1849 printed the first chapter of his
proposed work in a tract called, Invasion of the Territory of
Alabama by One Thousand Spaniards under Ferdinand de Soto in
1540 (Montgomery, 1849). Pickett says he got confirmatory
information respecting the route from Indian traditions among
the Creeks.—Ed.]

[981] “We are satisfied that the Mauvila, the scene of Soto’s bloody
fight, was upon the north bank of the Alabama, at a place now
called Choctaw Bluff, in the County of Clarke, about twenty-five
miles above the confluence of the Alabama and Tombigbee”
(Pickett, i. 27). The name of this town is written “Mauilla” by the
Gentleman of Elvas, “Mavilla” by Biedma, but “Mabile” by Ranjel.
The u and v were interchangeable letters in Spanish printing, and
readily changed to b. (Irving, second edition, p. 261).
[982] Bancroft, United States, i. 51; Pickett, Alabama, vol. i.;
Martin’s Louisiana, i. 12; Nuttall’s Travels into Arkansas (1819), p.
248; Fairbanks’s History of Florida, chap. v.; Ellicott’s Journal, p.
125; Belknap, American Biography, i. 192. [Whether this passage
of the Mississippi makes De Soto its discoverer, or whether
Cabeza de Vaca’s account of his wandering is to be interpreted as
bringing him, first of Europeans, to its banks, when on the 30th
of October, 1528, he crossed one of its mouths, is a question in
dispute, even if we do not accept the view that Alonzo de Pineda
found its mouth in 1519 and called it Rio del Espiritu Santo
(Navarrete, iii. 64). The arguments pro and con are examined by
Rye in the Hakluyt Society’s volume. Cf., besides the authorities
above named, French’s Historical Collections of Louisiana;
Sparks’s Marquette; Gayarré’s Louisiana; Theodore Irving’s
Conquest of Florida; Gravier’s La Salle, chap. i., and his “Route du
Mississipi” in Congrès des Américanistes (1877), vol. i.; De Bow’s
Commercial Review, 1849 and 1850; Southern Literary
Messenger, December, 1848; North American Review, July, 1847.
—Ed.]

[983] Jaramillo, in Smith’s Coleccion, p. 160.

[984] [See chap. vii. on “Early Explorations of New Mexico.”—Ed.]

[985] Pioneers of France in the New World; cf. Gaffarel, La Floride


Française, p. 341.

[986] There is a French version in Ternaux’ Recueil de la Floride,


and an English one in French’s Historical Collections of Louisiana
and Florida (1875), ii. 190. The original is somewhat diffuse, but
is minute upon interesting points.

[987] Cf. Sparks, Ribault, p. 155; Field, Indian Bibliography, p. 20.


Fairbanks in his History of St. Augustine tells the story, mainly
from the Spanish side.

[988] Edited by Charles Deane for the Maine Historical Society, pp.
20, 195, 213.

[989] Life of Ribault, p. 147.

[990] [This original English edition (a tract of 42 pages) is


extremely scarce. There is a copy in the British Museum, from
which Rich had transcripts made, one of which is now in Harvard
College Library, and another is in the Carter-Brown Collection (cf.
Rich, 1832, no. 40; Carter-Brown, i. 244). The text, as in the
Divers Voyages, is reprinted in French’s Historical Collections of
Louisiana and Florida (1875), p. 159. Ribault supposed that in
determining to cross the ocean in a direct westerly course, he
was the first to make such an attempt, not knowing that
Verrazano had already done so. Cf. Brevoort, Verrazano, p. 110;
Hakluyt, Divers Voyages, edition by J. W. Jones, p. 95. See also
Vol. III. p. 172.—Ed.]

[991] [This is the rarest of Hakluyt’s publications, the only copy


known in America being in the Lenox Library (Sabin, vol. x. no.
39,236)—Ed.]

[992] [Brinton, Floridian Peninsula, p. 39. The original French text


was reprinted in Paris in 1853 in the Bibliothèque Elzévirienne;
and this edition is worth about 30 francs (Field, Indian
Bibliography, no. 97; Sabin, vol. x. no. 39,235). The edition of
1586 was priced by Rich in 1832 at £5 5s., and has been sold of
late years for $250, £63, and 1,500 francs. Cf. Leclerc, no. 2,662;
Sabin, vol. x. no. 39,234; Carter-Brown, i. 366; Court, nos. 27,
28; Murphy, no. 1,442; Brinley, vol. iii. no. 4,357; Field, Indian
Bibliography, p. 24. Gaffarel in his La Florida Française (p. 347)
gives the first letter entire, and parts of the second and third,
following the 1586 edition.—Ed.]

[993] Cf. Stevens Bibliotheca historica (1870,) p. 224; Brinton,


Floridian Peninsula, p. 32.

[994] Brevis narratio eorum quæ in Florida Americæ provīcia Gallis


acciderunt, secunda in illam Navigatione, duce Renato de
Laudoñiere classis Præfecto: anno MDLXIIII. Quæ est secunda
pars Americæ. Additæ figuræ et Incolarum eicones ibidem ad
vivū expressæ, brevis etiam declaratio religionis, rituum,
vivendique ratione ipsorum. Auctore Iacobo Le Moyne, cui
cognomen de Morgues, Laudoñierum in ea Navigatione Sequnto.
[There was a second edition of the Latin (1609) and two editions
in German (1591 and 1603), with the same plates. Cf. Carter-
Brown, vol. i. nos. 399, 414; Court, no. 243; Brinley, vol. iii, no.
4,359. The original Latin of 1591 is also found separately, with its
own pagination, and is usually in this condition priced at about
100 francs. It is supposed to have preceded the issue as a part of
De Bry (Dufossé, 1878, nos. 3,691, 3,692).
The engravings were reproduced in heliotypes; and with the
text translated by Frederick B. Perkins, it was published in Boston
in 1875 as the Narrative of Le Moyne, an Artist who accompanied
the French Expedition to Florida under Laudonnière, 1564. These
engravings have been in part reproduced several times since their
issue, as in the Magazin pittoresque, in L’univers pittoresque, in
Pickett’s Alabama, etc.—Ed.]

[995] Sabin, vol. x. no. 39,631-32; Carter-Brown, i. 262.

[996] [Sabin, vol. x. no. 39,634; Carter-Brown, vol. i. no. 263. An


English translation, following the Lyons text, was issued in
London in 1566 as A True and Perfect Description of the Last
Voyage of Ribaut, of which only two copies are reported by
Sabin,—one in the Carter-Brown Library (vol. i. no. 264), and the
other in the British Museum. This same Lyons text was included
in Ternaux’ Reçueil de pièces sur la Floride and in Gaffarel’s La
Floride Française, p. 457 (cf. also pp. 337-339), and it is in part
given in Cimber and Danjon’s Archives curieuses de l’histoire de
France (Paris, 1835), vi. 200. The original Dieppe text was
reprinted at Rouen in 1872 for the Société Rouennaise de
Bibliophiles, and edited by Gravier under the title: Deuxième
voyage du Dieppois Jean Ribaut à la Floride en 1565, précédé
d’une notice historique et bibliographique. Cf. Brinton, Floridian
Peninsula, p. 30.—Ed.]

[997] [O’Callaghan, no. 463; Rich (1832), no. 60. There was an
edition at Cologne in 1612 (Stevens, Nuggets, no. 2,300; Carter-
Brown, ii. 123). Sparks (Life of Ribault, p. 152) reports a De
navigatione Gallorum in terram Floridam in connection with an
Antwerp (1568) edition of Levinus Apollonius. It also appears in
the same connection in the joint German edition of Benzoni,
Peter Martyr, and Levinus printed at Basle in 1582 (Carter-Brown,
vol. i. no. 344). It may have been merely a translation of Challeux
or Ribault (Brinton, Floridian Peninsula, p. 36)—Ed.].

[998] Murphy, nos. 564, 2,853.

[999] Sabin, vol. x. no. 39,630; Carter-Brown, vol. i. no. 330;


Dufossé, no. 4,211.

[1000] This petition is known as the Epistola supplicatoria, and is


embodied in the original text in Chauveton’s French edition of
Benzoni. It is also given in Cimber and Danjon’s Archives
curieuses, vi. 232, and in Gaffarel’s Floride Française, p. 477; and
in Latin in De Bry, parts ii. and vi. (cf. Sparks’s Ribault, appendix).
[There are other contemporary accounts or illustrations in the
“Lettres et papiers d’état du Sieur de Forquevaulx,” for the most
part unprinted, and preserved in the Bibliothèque Nationale in
Paris, which were used by Du Prat in his Histoire d’Élisabeth de
Valois (1859), and some of which are printed in Gaffarel, p. 409.
The nearly contemporary accounts of Popellinière in his Trois
mondes (1582) and in the Histoire universelle of De Thou,
represent the French current belief. The volume of Ternaux’
Voyages known as Recueil de pièces sur la Floride inédites,
contains, among eleven documents, one called Coppie d’une
lettre venant de la Floride, ... ensemble le plan et portraict du fort
que les François y ont faict (1564), which is reprinted in Gaffarel
and in French’s Historical Collections of Louisiana and Florida, vol.
iii. This tract, with a plan of the fort on the sixth leaf, recto, was
originally printed at Paris in 1565 (Carter-Brown, i. 256). None of
the reprints give the engravings. It was seemingly written in the
summer of 1564, and is the earliest account which was printed.—
Ed.]

[1001] Ensayo cronológico.

[1002] [Parkman, however, inclines to believe that Barcia’s


acceptance is a kind of admission of its “broad basis of truth.”—
Ed.]

[1003] Page 340. Cf. Manuscrits de la Bibliothèque du Roi, iv. 72.

[1004] [They are: a. Preserved in the Château de Vayres, belonging


to M. de Bony, which is presumably that given as belonging to
the Gourgues family, of which a copy, owned by Bancroft, was
used by Parkman. It was printed at Mont-de-Marsan, 1851, 63
pages.
b. In the Bibliothèque Nationale, no. 1,886. Printed by
Ternaux-Compans in his Recueil, etc., p. 301, and by Gaffarel, p.
483, collated with the other manuscripts and translated into
English in French’s Historical Collections of Louisiana and Florida,
ii. 267. This copy bears the name of Robert Prévost; but whether
as author or copyist is not clear, says Parkman (p. 142).
c. In the Bibliothèque Nationale, no. 2,145. Printed at
Bordeaux in 1867 by Ph. Tamizey de Larroque, with preface and
notes, and giving also the text marked e below.
d. In the Bibliothèque Nationale, no. 3,384. Printed by
Taschereau in the Revue rétrospective (1835), ii. 321.
e. In the Bibliothèque Nationale, no. 6,124. See c above.
The account in the Histoire notable is called an abridgment by
Sparks, and of this abridgment there is a Latin version in De Bry,
part ii.,—De quarta Gallorum in Floridam navigatione sub
Gourguesio. See other abridgments in Popellinière, Histoire des
trois mondes (1582), Lescarbot, and Charlevoix.]

[1005] Floridian Peninsula, p. 35.

[1006] Such as Wytfliet’s Histoire des Indes; D’Aubigné’s Histoire


universelle (1626); De Laet’s Novus orbis, book iv.; Lescarbot’s
Nouvelle France; Champlain’s Voyages; Brantôme’s Grands
capitaines François (also in his Œuvres). Faillon (Colonie
Française, i. 543) bases his account on Lescarbot.

[1007] Cf. Shea’s edition with notes, where (vol. i. p. 71) Charlevoix
characterizes the contemporary sources; and he points out how
the Abbé du Fresnoy, in his Méthode pour étudier la géographie,
falls into some errors.

[1008] American Biography, vol. vii. (new series).

[1009] Boston, 1865. Mr. Parkman had already printed parts of this
in the Atlantic Monthly, xii. 225, 536, and xiv. 530.

[1010] Paris, 1875. He gives (p. 517) a succinct chronology of


events.

[1011] Cf., for instance, Bancroft’s United States, chap. ii.; Gay’s
Popular History of the United States, chap. viii.; Warburton’s
Conquest of Canada, app. xvi.; Conway Robinson’s Discoveries in
the West, ii. chap. xvii. et seq.; Kohl’s Discovery of Maine;
Fairbanks’s Florida; Brinton’s Floridian Peninsula,—among
American writers; and among the French,—Guérin, Les
navigateurs Français (1846); Ferland, Canada; Martin, Histoire de
France; Haag, La France protestante; Poussielgue, “Quatre mois
en Floride,” in Le tour du monde, 1869-1870; and the Lives of
Coligny by Tessier, Besant, and Laborde. There are other
references in Gaffarel, p. 344.
There is a curious article, “Dominique de Gourgues, the
Avenger of the Huguenots in Florida, a Catholic,” in the Catholic
World, xxi. 701.

[1012] The Acts of the Apostles, xxviii. 2-6.

[1013] [See Chapter I.—Ed.]

[1014] Llorente adds that he had a personal acquaintance with a


branch of the family at Calahorra, his own birthplace, and that
the first of the family went to Spain, under Ferdinand III., to fight
against the Moors of Andalusia. He also traces a connection
between this soldier and Las Cases, the chamberlain of Napoleon,
one of his councillors and companions at St. Helena, through a
Charles Las Casas, one of the Spanish seigneurs who
accompanied Blanche of Castile when she went to France, in
1200, to espouse Louis VIII.

[1015] There is a variance in the dates assigned by historians for
the visits of both Las Casas and his father to the Indians. Irving,
following Navarrete, says that Antoine returned to Seville in 1498,
having become rich (Columbus iii. 415). He also says that
Llorente is incorrect in asserting that Bartholomew in his twenty-
fourth year accompanied Columbus in his third voyage, in 1498,
returning with him in 1500, as the young man was then at his
studies at Salamanca. Irving says Bartholomew first went to
Hispaniola with Ovando in 1502, at the age of about twenty-
eight. I have allowed the dates to stand in the text as given by
Llorente, assigning the earlier year for the first voyage of Las
Casas to the New World as best according with the references in
writings by his own pen to the period of his acquaintance with
the scenes which he describes.

[1016] The administration of affairs in the Western colonies of
Spain was committed by Ferdinand, in 1511, to a body composed
chiefly of clergy and jurists, called “The Council for the Indies.”
Its powers originally conferred by Ferdinand were afterward
greatly enlarged by Charles V. These powers were full and
supreme, and any information, petition, appeal, or matter of
business concerning the Indies, though it had been first brought
before the monarch, was referred by him for adjudication to the
Council. This body had an almost absolute sway alike in matters
civil and ecclesiastical, with supreme authority over all
appointments and all concerns of government and trade. It was
therefore in the power of the Council to overrule or qualify in
many ways the will or purpose or measures of the sovereigns,
which were really in favor of right or justice or humane
proceedings in the affairs of the colonies. For it naturally came
about that some of its members were personally and selfishly
interested in the abuses and iniquities which it was their rightful
function and their duty to withstand. At the head of the Council
was a dignitary whose well-known character and qualities were
utterly unfavorable for the rightful discharge of his high trust.
This was Juan Rodriguez de Fonseca, successively Bishop of
Badajoz, Valencia, and Burgos, and constituted “Patriarch of the
Indies.” He had full control of colonial affairs for thirty years, till
near his death in 1547. He bore the repute among his associates
of extreme worldliness and ambition, with none of the graces and
virtues becoming the priestly office, the duties of which engaged
but little of his time or regard. It is evident also that he was of an
unscrupulous and malignant disposition. He was inimical to
Columbus and Cortés from the start. He tried to hinder, and
succeeded in delaying and embarrassing, the second westward
voyage of the great admiral. (Irving’s Columbus, iii.; Appendix
XXXIV.) He was a bitter opponent of Las Casas, even resorting to
taunting insults of the apostle, and either openly or crookedly
thwarting him in every stage and effort of his patient
importunities to secure the intervention of the sovereigns in the
protection of the natives. The explanation of this enmity is found
in the fact that Fonseca himself was the owner of a repartimiento
in Hispaniola, with a large number of native slaves.

[1017] There is an extended Note on Las Casas in Appendix XXVIII.
of Irving’s Columbus. That author most effectively vindicates Las
Casas from having first advised and been instrumental in the
introduction of African slavery in the New World, giving the dates
and the advisers and agents connected with that wrong previous
to any word on the subject from Las Casas. The devoted
missionary had been brought to acquiesce in the measure on the
plausible plea stated in the text, acting from the purest spirit of
benevolence, though under an erroneous judgment. Cardinal
Ximenes had from the first opposed the project.

[1018] As will appear farther on in these pages, Las Casas stands
justly chargeable with enormous exaggerations of the number or
estimate of the victims of Spanish cruelty. But I have not met
with a single case in any contemporary writer, nor in the
challengers and opponents of his pleadings at the Court of Spain,
in which his hideous portrayal of the forms and methods of that
cruelty, its dreadful and revolting tortures and mutilations, have
been brought under question. Mr. Prescott’s fascinating volumes
have been often and sometimes very sharply censured, because
in the glow of romance, chivalric daring, and heroic adventure in
which he sets the achievements of the Spanish “Conquerors” of
the New World he would seem to be somewhat lenient to their
barbarities. In the second of his admirable works he refers as
follows to this stricture upon him: “To American and English
readers, acknowledging so different a moral standard from that
of the sixteenth century, I may possibly be thought too indulgent
to the errors of the Conquerors;” and he urges that while he has
“not hesitated to expose in their strongest colors the excesses of
the Conquerors, I have given them the benefit of such mitigating
reflections as might be suggested by the circumstances and the
period in which they lived” (Preface to the Conquest of Mexico).
It is true that scattered over all the ably-wrought pages of Mr.
Prescott’s volumes are expressions of the sternest judgment and
the most indignant condemnation passed upon the most signal
enormities of these incarnate spoilers, who made a sport of their
barbarity. But those who have most severely censured the author
upon the matter now in view have done so under the conviction
that cruelty unprovoked and unrelieved was so awfully dark and
prevailing a feature in every stage and incident of the Spanish
advance in America, that no glamour of adventure or chivalric
deeds can in the least lighten or redeem it. The underlying
ground of variance is in the objection to the use of the terms
“Conquest” and “Conquerors,” as burdened with the relation of
such a pitiful struggle between the overmastering power of the
invaders and the abject helplessness of their victims.
As I am writing this note, my eye falls upon the following
extract from a private letter written in 1847 by that eminent and
highly revered divine, Dr. Orville Dewey, and just now put into
print: “I have been reading Prescott’s Peru. What a fine
accomplishment there is about it! And yet there is something
wanting to me in the moral nerve. History should teach men how
to estimate characters; it should be a teacher of morals; and I
think it should make us shudder at the names of Cortez and
Pizarro. But Prescott does not; he seems to have a kind of
sympathy with these inhuman and perfidious adventurers, as if
they were his heroes. It is too bad to talk of them as the soldiers
of Christ; if it were said of the Devil, they would have better fitted
the character” (Autobiography and Letters of Orville Dewey, D.D.
p. 190).

[1019] Juan Ginez de Sepulveda, distinguished both as a theologian
and an historian, was born near Cordova in 1490, and died in
1573. He was of a noble but impoverished family. He availed
himself of his opportunities for obtaining the best education of his
time in the universities of Spain and Italy, and acquired an
eminent reputation as a scholar and a disputant,—not, however,
for any elevation of principles or nobleness of thought. In 1536
he was appointed by Charles V. his historiographer, and put in
charge of his son Philip. Living at Court, he had the repute of
being crooked and unscrupulous, his influence not being given on
the side of rectitude and progressive views. His writings
concerning men and public affairs give evidence of the faults
imputed to him. He was vehement, intolerant, and dogmatic. He
justified the most extreme absolutism in the exercise of the royal
prerogative, and the lawfulness and even the expediency of
aggressive wars simply for the glory of the State. Melchior Cano
and Antonio Ramirez, as well as Las Casas, entered into
antagonism and controversy with his avowed principles. One of
his works, entitled Democrates Secundus, seu de justis belli
causis, may be pronounced almost brutal in the license which it
allowed in the stratagems and vengefulness of warfare. It was
condemned by the universities of Alcala and Salamanca. He was
a voluminous author of works of history, philosophy, and
theology, and was admitted to be a fine and able writer. Erasmus
pronounced him the Spanish Livy. The disputation between him
and Las Casas took place before Charles in 1550. The monarch
was very much under his influence, and seems to some extent to
have sided with him in some of his views and principles.
Sepulveda was one of the very few persons whom the monarch
admitted to interviews and intimacy in his retirement to the
Monastery at Yuste.
It was this formidable opponent—a personal enemy also in
jealousy and malignity—whom Las Casas confronted with such
boldness and earnestness of protest before the Court and
Council. It was evidently the aim of Sepulveda to involve the
advocate of the Indians in some disloyal or heretical questioning
of the prerogatives of monarch or pope. It seemed at one time as
if the noble pleader for equity and humanity would come under
the clutch of the Holy Office, then exercising its new-born vigor
upon all who could be brought under inquisition for constructive
or latent heretical proclivities. For Las Casas, though true to his
priestly vows, made frequent and bold utterances of what
certainly, in his time, were advanced views and principles.

[1020] Juan Antonio Llorente, eminent as a writer and historian,
both in Spanish and French, was born near Calahorra, Aragon, in
1756, and died at Madrid in 1823. He received the tonsure when
fourteen years of age, and was ordained priest at Saragossa in
1779. He was of a vigorous, inquisitive, and liberal spirit, giving
free range to his mind, and turning his wide study and deep
investigations to the account of his enlargement and
emancipation from the limitations of his age and associates. He
tells us that in 1784 he had abandoned all ultramontane
doctrines, and all the ingenuities and perplexities of
scholasticism. His liberalism ran into rationalism. His secret or
more or less avowed alienation from the prejudices and
obligations of the priestly order, while it by no means made his
position a singular or even an embarrassing one under the
influences and surroundings of his time, does at least leave us
perplexed to account for the confidence with which functions and
high ecclesiastical trusts were committed to and exercised by
him. He was even made Secretary-General of the Inquisition, and
was thus put in charge of the enormous mass of records, with all
their dark secrets, belonging to its whole history and processes.
This charge he retained for a time after the Inquisition was
abolished in 1809. It was thus by a singular felicity of opportunity
that those terrible archives should have been in the care, and
subject to the free and intelligent use, of a man best qualified of
all others to tell the world their contents, and afterward prompted
and at liberty to do so from subsequent changes in his own
opinions and relations. To this the world is indebted for a History
of the Inquisition, the fidelity and sufficiency of which satisfy all
candid judgments. He was restive in spirit, provoked strong
opposition, and was thus finally deprived of his office. After
performing a variety of services not clerical, and moving from
place to place, he went to Paris, where, in 1817-1818, he
courageously published the above-mentioned History. He was
interdicted the exercise of clerical functions. In 1822, the same
year in which he published his Biography and French translation
of the principal works of Las Casas, he published also his Political
Portraits of the Popes. For this he was ordered to quit Paris,—a
deep disappointment to him, causing chagrin and heavy
depression. He found refuge in Madrid, where he died in the
following year.

[1021] Mr. Ticknor, however, says that these two treatises “are not
absolutely proved” to be by Las Casas.—History of Spanish
Literature, i. 566.

[1022] Conquest of Mexico, i. 80, n. Of his Short Account of the
Destruction of the Indies, this historian says: “However good the
motives of its author, we may regret that the book was ever
written.... The author lent a willing ear to every tale of violence
and rapine, and magnified the amount to a degree which borders
on the ridiculous. The wild extravagance of his numerical
estimates is of itself sufficient to shake confidence in the accuracy
of his statements generally. Yet the naked truth was too startling
in itself to demand the aid of exaggeration.” The historian truly
says of himself, in his Preface to the work quoted: “I have not
hesitated to expose in their strongest colors the excesses of the
conquerors.”

[1023] Llorente, i. 365, 386.

[1024] [Helps (Spanish Conquest) says: “Las Casas may be
thoroughly trusted whenever he is speaking of things of which he
had competent knowledge.” Ticknor (Spanish Literature, ii. 31)
calls him “a prejudiced witness, but on a point of fact within his
own knowledge one to be believed.” H. H. Bancroft (Early
American Chroniclers, p. 20; also Central America, i. 274, 309; ii.
337) speaks of the exaggeration which the zeal of Las Casas
leads him into; but with due abatement therefor, he considers
him “a keen and valuable observer, guided by practical sagacity,
and endowed with a certain genius.”—Ed.]

[1025] Sabin’s Works of Las Casas, and his Dictionary, iii. 388-402,
and x. 88-91; Field’s Indian Bibliography; Carter-Brown
Catalogue; Harrisse’s Notes on Columbus, pp. 18-24; the Huth
Catalogue; Brunet’s Manuel, etc.

[1026] [Field says it was written in 1540, and submitted to the
Emperor in MS.; but in the shape in which it was printed it seems
to have been written in 1541-1542. Cf. Field, Indian Bibliography,
nos. 860, 870; Sabin, Works of Las Casas, no. 1; Carter-Brown
Catalogue, i. 164; Ticknor, Spanish Literature, ii. 38; and
Catalogue, p. 62. The work has nineteen sections on as many
provinces, ending with a summary for the year 1546. This
separate tract was reprinted in the original Spanish in London, in
1812, and again in Philadelphia, in 1821, for the Mexican market,
with an introductory essay on Las Casas. Stevens, Bibliotheca
historica, 1105; cf. also Coleccion de documentos inéditos
(España), vol. vii.
The Cancionero spiritual, printed at Mexico in 1546, is not
assigned to Bartholomew Las Casas in Ticknor’s Spanish
Literature, iii. 44, but it is in Gayangos and Vedia’s Spanish
translation of Ticknor. Cf. also Sabin, vol. x. no. 39,122; Harrisse,
Bib. Am. Vet., Additions, No. 159.—Ed.]

[1027] [Field does not give it a date; but Sabin says it was written
in 1552. Cf. Field, nos. 860, 870, note; Sabin, no. 2; Carter-
Brown, i. 165; Ticknor Catalogue, p. 62.—Ed.]

[1028] [Field says it was written “soon after” no. 1; Sabin places it
in 1543. Cf. Field, no. 862, 870, note; Carter-Brown, i. 166;
Sabin, 3; Stevens, Bibl. Geog., no. 595; Ticknor Catalogue, p. 62.
—Ed.]

[1029] [Sabin says it was written in America in 1546-1547. Field,
nos. 863, 870, note; Carter-Brown, i. 167; Sabin, no. 6.—Ed.]

[1030] [There seems, according to Field (nos. 864, 865), to have
been two distinct editions in 1552, as he deduces from his own
copy and from a different one belonging to Mr. Brevoort, there
being thirty-three variations in the two. Quaritch has noted (no.
11,855, priced at £6 6s.) a copy likewise in Gothic letter, but with
different woodcut initials, which he places about 1570. Cf. Field,
p. 217; Carter-Brown, i. 168; Sabin, no. 8; Ticknor Catalogue, p.
62.
The initial work of Sepulveda, Democrates Secundus,
defending the rights of the Crown over the natives, was not
published, though he printed his Apologia pro libro de justis belli
causis, Rome, 1550 (two copies of which are known), of which
there was a later edition in 1602; and some of his views may be
found in it. Cf. Ticknor, Spanish Literature, ii. 37; Harrisse, Notes
on Columbus, p. 24, and Bib. Amer. Vet., no. 303; and the
general histories of Bancroft, Helps, and Prescott. The Carter-
Brown Catalogue, no. 173, shows a MS. copy of Sepulveda’s
book. It is also in Sepulveda’s Opera, Cologne, 1602, p. 423;
Carter-Brown, vol. ii. no. 15.—Ed.]
[1031] [Sabin dates it in 1543. Cf. Field, nos. 866, 870, note; Sabin,
no. 4; Carter-Brown, i. 170.—Ed.]

[1032] [Sabin says it was written in Spain in 1548. Cf. Field, nos.
867, 870, note; Sabin, no. 7; Carter-Brown, i. 171.—Ed.]

[1033] [Field, nos. 868, 870, note; Sabin, no. 9; Carter-Brown, i.
169.—Ed.]

[1034] [This is the longest and one of the rarest of the series. Sabin
says it was written about 1543. There were two editions of the
same date, having respectively 80 and 84 leaves; but it is
uncertain which is the earlier, though Field supposes the fewer
pages to indicate the first. Field, nos. 869, 870, note; Sabin, no.
5; Carter-Brown, i. 172.—Ed.]

[1035] [It is only of late years that the entire series has been
described. De Bure gives only five of the tracts; Dibdin
enumerates but seven; and Llorente in his edition omits three, as
was done in the edition of 1646. Rich in 1832 priced a set at £12
12s. A full set is now worth from $100 to $150; but Leclerc (nos.
327, 2,556) has recently priced a set of seven at 700 francs, and
a full set at 1,000 francs. An English dealer has lately held one at
£42. Quaritch has held four parts at £10, and a complete set at
£40. Single tracts are usually priced at from £1 to £5. Recent
sales have been shown in the Sunderland (no. 2,459, 9 parts);
Field (no. 1,267); Cooke (vol. iii. no. 369, 7 parts); Stevens, Hist.
Coll. (no. 311, 8 parts); Pinart (no. 536); and Murphy (no. 487)
catalogues. The set in the Carter-Brown Library belonged to
Ternaux; that belonging to Mr. Brevoort came from the
Maximillian Library. The Lenox Library and Mr. Barlow’s Collection
have sets. There are also sets in the Grenville and Huth
collections.
The 1646 reprint, above referred to, has sometimes a
collective title, Las Obras, etc., but most copies, like the Harvard
College copy, lack it. As the titles of the separate tracts (printed
in this edition in Roman) retained the original 1552 dates, this
reprint is often called a spurious edition. It is usually priced at
from $15 to $30. Cf. Sabin, no. 13; Field, p. 216; Quaritch, no.
11,856; Carter-Brown, i. 173; ii. 584; Stevens, Hist. Coll., i. 312;
Cooke, iii. 370.
Some of the Tracts are included in the Obras escogidas de
filósofos, etc. Madrid, 1873.—Ed.]
[1036] [Field, no. 870, and note; Sabin, no. 11; the Carter-Brown
Collection lacks it. It was reprinted at Tübingen, and again at
Jena, in 1678. It has never been reprinted in Spain, says Stevens
(Bibl. Hist., no. 1,096).—Ed.]

[1037] [“Not absolutely proved to be his,” says Ticknor (Spanish
Literature, ii. 37).—Ed.]

[1038] [There were a hundred copies of these printed. They are:—
1. Memorial de Don Diego Colon sobre la conversion de las
gentes de las Yndias. With an Epistle to Dr. Reinhold Pauli. It is
Diego Colon’s favorable comment on Las Casas’s scheme of
civilizing the Indians, written at King Charles’s request. Cf.
Stevens, Hist. Coll., i. 881.
2. Carta, dated 1520, and addressed to the Chancellor of
Charles, in which Las Casas urges his scheme of colonization of
the Indians. Mr. Stevens dedicates it to Arthur Helps in a letter.
Cf. Stevens, Hist. Coll., i. 882; the manuscript is described in his
Bibl. Geog., no. 598.
3. Paresçer o determinaciō de los señores theologos de
Salamanca, dated July 1, 1541. This is the response of the
Faculty of Salamanca to the question put to them by Charles V., if
the baptized natives could be made slaves. Mr. Stevens dedicates
the tract to Sir Thomas Phillipps. Cf. Stevens, Hist. Coll., i. 883.
4. Carta de Hernando Cortés. Mr. Stevens, in his Dedication to
Leopold von Ranke, supposes this to have been written in 1541-
1542. It is Cortes’ reply to the Emperor’s request for his opinions
regarding Encomiendas, etc., in Mexico. Cf. Stevens, Hist. Coll., i.
884.
5. Carta de Las Casas, dated Oct. 22, 1545, with an abstract in
English in the Dedication to Colonel Peter Force. It is addressed
to the Audiencia in Honduras, and sets forth the wrongs of the
natives. Cf. Stevens, Hist. Coll., i. 885. The manuscript is now in
the Huth Collection, Catalogue, v. 1,681.
6. Carta de Las Casas to the Dominican Fathers of Guatemala,
protesting against the sale of the reversion of the Encomiendas.
Mr. Stevens supposes this to have been written in 1554, in his
Dedication to Sir Frederick Madden. Cf. Stevens, Hist. Coll., i.
886. A set of these tracts is worth about $25. The set in the
Cooke Sale (vol. iii. no. 375) is now in Harvard College Library;
another set is shown in the Murphy Catalogue, no. 488, and there
is one in the Boston Public Library.—Ed.]
[1039] Field, p. 219.

[1040] Vol. i. p. 160.

[1041] [Harrisse, Notes on Columbus, says volumes i. and ii. are in
the Academy; but volume iii. is in the Royal Library. Cf., however,
the “Advertencia preliminar” of the Madrid (1875) edition of the
Historia on this point, as well as regards the various copies of the
manuscript existing in Madrid.—Ed.]

[1042] [Such is Quintana’s statement; but Helps failed to verify it,
and says he could only fix the dates 1552, 1560, 1561 as those of
any part of the writing. Life of Las Casas, p. 175.—Ed.]

[1043] [I trace no copy earlier than one Rich had made. Prescott
had one, which was probably burned in Boston (1872). Helps
used another. There are other copies in the Library of Congress,
in the Lenox Library, and in H. H. Bancroft’s Collection.—Ed.]

[1044] [Harrisse, Bibl. Amer. Vet., p. 119, says the purpose of the
Academy at one time was to annotate the manuscript, so as to
show Las Casas in a new light, using contemporary writers.—Ed.]

[1045] [It is worth from $30 to $40. It is called Historia de las
Indias, ahora por primera vez dada á luz por el Marqués de la
Fuensanta del Valle y José Sancho Rayon. It contains, beginning
in vol. v. at p. 237, the Apologética historia which Las Casas had
written to defend the Indians against aspersions upon their lives
and character. This latter work was not included in another
edition of the Historia printed at Mexico in two volumes in 1877-
1878. Cf. Vigel, Biblioteca Mexicana. Parts of the Apologética are
given in Kingsborough’s Mexico, vol. viii. Cf. on the Historia,
Irving’s Columbus, App.; Helps’s Spanish Conquest (Am. ed.), i.
23, and Life of Las Casas, p. 175; Ticknor, Spanish Literature, ii.
39; Humboldt’s Cosmos (Eng. tr.), ii. 679; H. H. Bancroft, Central
America, i. 309; Prescott’s Mexico, i. 378; Quintana’s Vidas, iii.
507.—Ed.]

[1046] [Llorente’s version is not always strictly faithful, being in
parts condensed and paraphrastic. Cf. Field, no. 889; Ticknor,
Spanish Literature, ii. 38, and Catalogue, p. 62; Sabin, nos. 14,
50; H. H. Bancroft, Central America, i. 309. This edition, besides
a life of Las Casas, contains a necrology of the Conquerors, and
other annotations by the editor.—Ed.]
[1047] [This earliest version is a tract of 70 leaves, printed probably
at Brussels, and called Seer cort Verhael vande destructie van
d’Indien. Cf. Sabin, no. 23; Carter-Brown, i. 320; Stevens, Bibl.
Hist., no. 1,097. The whole series is reviewed in Tiele’s Mémoire
bibliographique (who gives twenty-one editions) and in Sabin’s
Works of Las Casas (taken from his Dictionary); and many of
them are noted in the Carter-Brown Catalogue and in Muller’s
Books on America, 1872 and 1877. This 1578 edition was
reissued in 1579 with a new title, Spieghel der Spaenscher
Tirannije, which in some form continued to be the title of
subsequent editions, which were issued in 1596, 1607, 1609,
1610, 1612 (two), 1620 (two), 1621, 1627 (?), 1634, 1638, 1663,
1664, etc. Several of these editions give De Bry’s engravings,
sometimes in reverse. A popular chap-book, printed about 1730,
is made up from Las Casas and other sources.—Ed.]

[1048] [This included the first, second, and sixth of the tracts of
1552. In 1582 there was a new edition of the Tyrannies, etc.,
printed at Paris; but some copies seem to have had a changed
title, Histoire admirable des horribles insolences, etc. It was again
reissued with the original title at Rouen in 1630. Cf. Field, 873,
874; Sabin, nos. 41, 42, 43, 45; Rich (1832); Stevens, Bibl. Hist.,
no. 1,098; Leclerc, nos. 334, 2,558; Carter-Brown, i. 329, 345,
347; O’Callaghan, no. 1,336; a London catalogue (A. R. Smith,
1874) notes an edition of the Histoire admirable des horribles
Insolences, Cruautez et tyrraines exercées par les Espagnols, etc.,
Lyons, 1594.—Ed.]

[1049] [It is a tract of sixty-four leaves in Gothic letter, and is very
rare, prices being quoted at £20 and more. Cf. Sabin, no. 61;
Carter-Brown, i. 351; Stevens, Bibl. Geog., 596, Huth Catalogue,
i. 271. Cf. William Lightfoote’s Complaints of England, London,
1587, for English opinion at this time on the Spanish excesses
(Sabin, vol. x. no. 41,050), and the Foreign Quarterly Review
(1841), ii. 102.—Ed.]

[1050] [Field, p. 877; Carter-Brown, ii. 804; Sabin, no. 60. The first
tract is translated in Purchas’s Pilgrimes, iv. 1,569.—Ed.]

[1051] [Some copies read, Account of the First Voyages, etc. Cf.
Field, no. 880; Carter-Brown, vol. ii. no. 1,556; Sabin, no. 63;
Stevens, Bibl. Geog., no. 603; and Prince Library Catalogue, p.
34. Another English edition, London, 1689, is called Popery truly
display’d in its Bloody Colours. Cf. Carter-Brown, vol. ii. no. 1,374;
Sabin, no. 62. Another London book of 1740, Old England for
Ever, is often called a Las Casas, but it is not his. Field, no. 888.
—Ed.]

[1052] [Sabin, no. 51; Carter-Brown, i. 510; Stevens, Hist. Coll., i.
319. It has no place. Muller calls a Warhafftiger Bericht of 1599,
with no place, the earliest German edition, with De Bry’s
engravings,—which were also in the Oppenheim edition of 1613,
Warhafftiger und gründlicher Bericht, etc. Cf. Sabin, no. 54;
Carter-Brown, ii. 146. A similar title belongs to a Frankfort edition
of 1597 (based on the Antwerp French edition of 1579), which is
noted in Sabin, no. 52, and in Bib. Grenvilliana, ii. 828, and was
accompanied by a volume of plates (Sabin, no. 53).
There seem to be two varieties of the German edition of 1665,
Umbständige warhafftige Beschreibung der Indianischen Ländern.
Cf. Carter-Brown, ii. 957; Sabin, no. 55; Field, no. 882. Sabin (no.
56) also notes a 1790 and other editions.—Ed.]

[1053] [It followed the French edition of 1579, and was reissued at
Oppenheim in 1614. Cf. Field, p. 871; Carter-Brown, i. 453, 524;
ii. 164; Sabin, nos. 57, 58.
The Heidelberg edition of 1664, Regionum Indicarum per
Hispanos olim devastatarum descriptio, omits the sixteen pages
of preliminary matter of the early editions; and the plates,
judging from the Harvard College and other copies, show wear.
Sabin, no. 59; Carter-Brown, ii. 944.—Ed.]

[1054] [As in the Istoria ò brevissima relatione, Venice, 1626, 1630,
and 1643, a version of the first tract of 1552, made by Castellani.
It was later included in Marmocchi’s Raccolta di viaggi. Cf. Sabin,
nos. 16, 17, 18; Carter-Brown, ii. 311, 360, 514; Leclerc, no. 331;
Field, no. 885; Stevens, Hist. Coll., i. 315; Bibl. Hist., no. 1,100.
The sixth tract was translated as Il supplice schiavo Indiano, and
published at Venice in 1635, 1636, and 1657. Cf. Carter-Brown, ii.
434, 816; Field, no. 886; Sabin, nos. 20, 21. It was reissued in
1640 as La libertà pretesa. Sabin, no. 19; Field, no. 887; Carter-
Brown, ii. 473. The eighth and ninth tracts appeared as Conquista
dell’Indie occidentali, Venice, 1645. Cf. Field, no. 884; Sabin, no.
22; Carter-Brown, ii. 566.—Ed.]

[1055] In Harvard College Library, with also the Ordenanzas reales
del Conseio de las Indias, of the same date.
[1056] There are convenient explanations and references
respecting the functions of the Casa de la Contratacion, the
Council of the Indies, the Process of the Audiencia, and the
duties of an Alcalde, in Bancroft’s Central America, vol. i. pp. 270,
280, 282, 297, 330.

[1057] See chap. iii. p. 203, ante.

[1058] At Medellin, in Estremadura, in 1485.

[1059] They are given in Pacheco’s Coleccion, xii. 225, Prescott’s
Mexico, app. i., and elsewhere. Cf. H. H. Bancroft, Mexico, i. 55.

[1060] There is much conflict of testimony on the respective share
of Cortés and Velasquez in equipping the expedition. H. H.
Bancroft (Mexico, i. 57) collates the authorities.

[1061] Prescott makes Cortés sail clandestinely; Bancroft makes his
departure a hurried but open one; and this is Helps’s view of the
authorities.

[1062] The authorities are not in unison about all these figures. Cf.
H. H. Bancroft, Mexico, i. 70.

[1063] See the long note comparing some of these accounts in H.
H. Bancroft’s Mexico, i. 102, etc.

[1064] Marina did more. She impressed Cortés, who found her
otherwise convenient for a few years; and after she had borne
him children, married her to one of his captains. What purports to
be a likeness of her is given in Cabajal’s México, ii. 64.

[1065] Prescott (Mexico, revised edition, i. 345) points out how this
site was abandoned later for one farther south, where the town
was called Vera Cruz Vieja; and again, early in the seventeenth
century, the name and town were transferred to another point
still farther south,—Nueva Vera Cruz. These changes have caused
some confusion in the maps of Lorenzana and others. Cf. the
maps in Prescott and H. H. Bancroft.

[1066] There is some discrepancy in the authorities here as regards
the openness or stealth of the act of destroying the fleet. See the
authorities collated in Prescott, Mexico, new edition, i. 369, 370.