Pure and Applied Undergraduate Texts • 64
The Sally Series

Introduction to Quantum Algorithms
Johannes A. Buchmann
EDITORIAL COMMITTEE
Giuliana Davidoff (Chair)
Daniel P. Groves
Tara S. Holm
John M. Lee
Maria Cristina Pereyra
Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting
for them, are permitted to make fair use of the material, such as to copy select pages for use
in teaching or research. Permission is granted to quote brief passages from this publication in
reviews, provided the customary acknowledgment of the source is given.
Republication, systematic copying, or multiple reproduction of any material in this publication
is permitted only under license from the American Mathematical Society. Requests for permission
to reuse portions of AMS publication content are handled by the Copyright Clearance Center. For
more information, please visit www.ams.org/publications/pubpermissions.
Send requests for translation rights and licensed reprints to [email protected].
© 2024 by the American Mathematical Society. All rights reserved.
The American Mathematical Society retains all rights
except those granted to the United States Government.
Printed in the United States of America.
∞ The paper used in this book is acid-free and falls within the guidelines
established to ensure permanence and durability.
Visit the AMS home page at https://www.ams.org/
Contents
Preface ix
The advent of quantum computing ix
The goal of the book x
The structure of the book xi
What is not covered xiv
For instructors xiv
Acknowledgements xv
Bibliography 363
Index 365
Preface
work of Peter Shor [Sho94] on quantum polynomial time factoring and discrete log-
arithm algorithms. Shor’s work alarmed the world as it revealed the vulnerability of
all known public-key cryptography, one of the fundamental pillars of cybersecurity, to
quantum computer attacks.
Another early advancement in quantum computing that garnered significant at-
tention was Lov Grover’s algorithm [Gro96], offering a quadratic speedup for unstruc-
tured search problems. This breakthrough further fueled the growing interest in quan-
tum computing. Grover’s algorithm captured widespread interest because of its ability
to solve a very generic problem, making it useful across a wide range of applications.
In the decades following these early developments, many more quantum algo-
rithms have been discovered. An example is the HHL algorithm [HHL08], which can
be used to find properties of solutions of certain large sparse linear systems, providing an exponential speedup over classical solvers such as Gaussian elimination.
Since linear algebra is one of the most important tools in all areas of science and engi-
neering, the HHL algorithm has wide applications, including machine learning, which
is one of the most significant techniques in computer science today.
This progress should not deceive us, as the development of quantum algorithms
remains a significant challenge. Sometimes, there is the impression that quantum com-
puting allows all computations to be parallelized and significantly accelerated. How-
ever, that is not the case. In reality, each new quantum algorithm requires a unique
idea. Consequently, such algorithms can currently accelerate only a few computational problems. Moreover, only very few of these improvements come with exponential
speedups.
All the algorithms mentioned in this book are designed for universal, gate-based
quantum computers, which are the most widely recognized and extensively researched
type of quantum computers. In addition to universal quantum computers, there are
more specialized types of quantum computers, such as quantum annealers and quan-
tum simulators. Quantum annealers utilize annealing techniques to solve optimiza-
tion problems by finding the lowest-energy state of a physical system. On the other
hand, quantum simulators are specifically designed to simulate quantum systems and
study quantum phenomena. However, this book focuses on universal quantum com-
puters due to their versatility and because they are the most interesting from a com-
puter science perspective.
subsequent chapters. I also adopted this approach in writing the book due to my own
experience. Despite having degrees in mathematics and physics and being a computer
science professor for over 30 years, I found myself needing to refresh my memory on
several required concepts and to learn new material. Therefore, my objective is to make
the presentation understandable with a minimum of prerequisites, ensuring clarity for
both myself and the readers.
My approach of covering all the details means that some readers may already possess knowledge covered in the introductory chapters. However, even
they are likely to encounter new and vital information in these chapters, essential for
understanding quantum algorithms. For example, Chapter 1 gives an introduction to
the theory of reversible computation, which is not typically part of the standard com-
puter science education. Chapter 2 introduces mathematicians to the Dirac notation,
commonly used by physicists. Chapter 3 further expands the understanding of physi-
cists by applying the quantum mechanics postulates to quantum gates and circuits.
Therefore, I encourage those with prior knowledge to read these sections, taking note
of the notation used in the book and of unfamiliar results. This is vital for grasping the
intricacies of my explanation of quantum algorithms.
valuable tools for subsequent discussions. Moving forward, the chapter familiarizes the
reader with significant operators in quantum mechanics, such as Hermitian, unitary,
and normal operators. Of particular significance is the spectral theorem, a fundamen-
tal result that offers profound insights into these operators and their characteristics.
The consequences of the spectral theorem are also explored to enrich the reader’s un-
derstanding. Furthermore, the chapter delves into the concept of tensor products of
finite-dimensional Hilbert spaces, a crucial notion in quantum computing. The dis-
cussion culminates with an elucidation of the Schmidt decomposition theorem, which
plays a pivotal role in characterizing the entanglement of quantum systems.
Chapter 3 constitutes the third foundational pillar of quantum computing required
in this book, encompassing the essential background of quantum mechanics. This
chapter introduces the relevant quantum mechanics postulates. To illustrate their rele-
vance, the chapter applies these postulates to introduce fundamental concepts of quan-
tum computing, including quantum bits, registers, gates, and circuits. Simple exam-
ples of quantum computation are provided to enhance the reader’s understanding of
the connection between the postulates and quantum algorithms. In addition, the chap-
ter provides the foundation for the geometric interpretation of quantum computation.
It achieves this by establishing the correspondence between states of individual quan-
tum bits and points on the Bloch sphere, a pivotal concept in quantum computing vi-
sualization. Moreover, the chapter presents an alternative description of the relevant
quantum mechanics using density operators. This approach enables the modeling of
the behavior of components within composed quantum systems.
The foundational groundwork laid out in the initial three chapters, including the
domains of computer science, mathematics, and physics, sets the stage for a compre-
hensive exploration of quantum algorithms in Chapter 4. This chapter embarks on
this transformative journey by shedding light on pivotal quantum gates, which serve
as the fundamental constituents of quantum circuits. We start by introducing single-
qubit gates, demonstrating that their operations can be perceived as rotations within
three-dimensional real space. Subsequently, we delve into the realm of multiple-qubit
operators, with a particular focus on controlled operators. In addition, this chapter
familiarizes readers with the significance of ancillary and erasure gates, which play
a vital role in the augmentation and removal of quantum bits. Leveraging analogous
outcomes from classical reversible circuits, the chapter shows that every Boolean func-
tion can be realized through a quantum circuit. In contrast to the classical scenario, the
quantum case does not adhere to the notion that a finite set of quantum gates suffices to
implement any quantum operator. Instead, finite sets of quantum gates are presented,
enabling the approximation of all quantum circuits. Lastly, the chapter ushers in the
concept of quantum complexity theory, using the analogy between classical probabilis-
tic algorithms and quantum algorithms. It introduces the complexity class BQP, which
stands for bounded-error quantum polynomial time.
The following four chapters focus on specific quantum algorithms.
Chapter 5 introduces early algorithms designed to illustrate the quantum comput-
ing advantage. We begin with the Deutsch algorithm, as presented in David Deutsch’s
seminal paper from 1985 [Deu85], and its generalization by David Deutsch and Richard
of the appendix lists essential trigonometric identities and inequalities that play a cru-
cial role in the main part of the book. Appendix B focuses on linear algebra. Its first
part briefly reviews important concepts and results. The second part covers the con-
cept of tensor products, which is of significant importance in quantum computing and
is typically not included in introductory courses in linear algebra. Lastly, Appendix C
contains the required notions and results from probability theory. This knowledge is
essential for the analysis of probabilistic and quantum algorithms.
For instructors
This book is suitable for self-study. It is also intended and has been used for teaching
introductory courses on quantum algorithms. My recommendation to instructors for
such a course is as follows: If most of the participants are already familiar with the
required basics of algorithms and complexity, linear algebra, algebra, and probability
theory, the course should cover Chapters 3, 4, 5, 6, 7, and 8 in this order, exploring dif-
ferent aspects of quantum algorithms. Individual students lacking some basic knowl-
edge can familiarize themselves with those topics using the detailed explanations in
the respective parts of the book. If the majority of the participants in the course is un-
familiar with certain basic topics, the instructor may want to briefly cover them either
in a preliminary lecture or when they are used during the course.
Depending on the instructor’s intentions and the available time, the course may
focus more on theoretical explanations and proofs or on the practical aspects of how
quantum algorithms work. In both situations, students who desire more background
than is covered in the course can supplement their knowledge through self-study of
the corresponding book parts.
Acknowledgements
I express my sincere gratitude to the following individuals who have been instrumental
in supporting me throughout the process of writing this book. Their invaluable advice,
discussions, and comments have played a pivotal role in shaping the content and qual-
ity of this work: Gernot Alber, Gerhard Birkl, Jintai Ding, Samed Düzlü, Fritz Eisen-
brand, Marc Fischlin, Mika Göös, Iryna Gurevich, Matthieu Nicolas Haeberle, Taketo
Imaizumi, Michael Jacobson, Nigam Jigyasa, Norbert Lutkenhaus, Alastair Kay, Ju-
liane Krämer, Gen Kimura, Michele Mosca, Jigyasa Nigam, Rom Pinchasi, Ahamad-
Reza Sadeghi, Masahide Sasaki, Alexander Sauer, Florian Schwarz, Tsuyoshi Takagi,
Shusaku Uemura, Thomas Walther, Yuntao Wang, and Ho Yun. Their dedication to
sharing their expertise and knowledge has been truly invaluable, and I am deeply grate-
ful for their willingness to engage in insightful discussions and provide constructive
feedback throughout this journey.
I also extend my heartfelt gratitude to Ina Mette, the responsible person at AMS.
Her belief in the potential of this book and her continuous encouragement to pursue
this project have been instrumental in its realization. I am deeply grateful for her un-
wavering support and guidance throughout the writing process. I am also grateful to
Arlene O’Sean and her team at AMS for their great job in carefully proofreading the
book and making its appearance so nice.
I learned a lot from the books on quantum computing by Michael A. Nielsen and
Isaac L. Chuang [NC16] and by Phillip Kaye, Raymond Laflamme, and Michele Mosca
[KLM06].
The writing of this book would not have been possible without the invaluable con-
tributions of several open-source LaTeX packages, which greatly facilitated the presen-
tation of complex concepts. I extend my gratitude to the creators and maintainers of
these packages:
• The powerful TikZ library1 and its extension circuitikz2 were instrumental in
visualizing circuits and diagrams throughout the book.
• I used the open source TikZ code for illustrating the right-hand rule.3
• For the clear representation of quantum circuits, I relied on quantikz.4
• To illustrate quantum states on Bloch spheres, I used the blochsphere package.5
• The packages algorithm and algpseudocode6 were indispensable in presenting algorithms in a structured and easily understandable format.
• For handling the Dirac notation with ease, I benefited from the physics package.7
1 https://tikz.net/
2 https://ctan.org/pkg/circuitikz
3 https://tikz.net/righthand_rule/
4 https://ctan.org/pkg/quantikz
5 https://ctan.org/pkg/blochsphere
6 https://www.overleaf.com/learn/latex/Algorithms
7 https://www.ctan.org/pkg/physics
I am sincerely grateful to the open-source community for making these and many more
tools available, enhancing the quality of this work and simplifying its creation.
Finally, I would like to acknowledge the support provided by ChatGPT8 in improv-
ing many formulations of my presentation. As I am not a native speaker of the English
language, this assistance was of great help.
8 https://chat.openai.com/
Chapter 1
Classical Computation
uses similar modeling is the book by Thomas H. Cormen, Charles E. Leiserson, Ronald
L. Rivest, and Clifford Stein [CLRS22].
1.1.1. Basics. To explain our model, we introduce some basic concepts and re-
sults. We begin by defining alphabets.
Definition 1.1.1. An alphabet is a finite nonempty set.
Example 1.1.2. The simplest alphabet is the unary alphabet {I}, which contains only
the symbol I. The most commonly used alphabet in computer science is the binary
alphabet {0, 1}, where each element is referred to as a bit. Other commonly used al-
phabets in computer science include the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} of decimal digits,
the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 𝐴, 𝐵, 𝐶, 𝐷, 𝐸, 𝐹} of hexadecimal digits, and the Latin al-
phabet ℛ = {a, . . . , z, A, . . . , Z, ␣} that includes lowercase and uppercase Latin letters,
as well as the space symbol ␣.
As we have seen, every finite sequence of bits that starts with the bit ‘1’ is assigned
to a uniquely determined positive integer as its binary expansion. Now, we also assign
uniquely determined nonnegative integers to all sequences in {0, 1}∗, including those that start with the bit 0, using the following definition.
Definition 1.1.12. For all $n \in \mathbb{N}$ and $\vec{b} = (b_0, \ldots, b_{n-1}) \in \{0,1\}^*$ we set
$$\mathrm{stringToInt}(\vec{b}) = \sum_{i=0}^{n-1} b_i 2^{n-i-1}. \tag{1.1.3}$$
Note that nonnegative integers are represented by infinitely many strings in {0, 1}∗ .
Specifically, they are represented by all strings that result from prepending a string
consisting only of zeros to their binary expansions. Also, the number 0 is represented
by all finite sequences of the bit “0”.
Exercise 1.1.13. Use Proposition 1.1.7 to show that for each $n \in \mathbb{N}$ the map (1.1.3) restricted to $\{0,1\}^n$ is a bijection onto $\{0, 1, \ldots, 2^n - 1\}$.
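As an illustration, here is a small Python sketch of the map (1.1.3); the function name string_to_int is an ad hoc rendering of stringToInt. The assertions check the observation above that prepending zeros does not change the represented integer.

```python
def string_to_int(b):
    # stringToInt from (1.1.3): sum of b_i * 2^(n-i-1) for i = 0, ..., n-1
    n = len(b)
    return sum(b[i] * 2 ** (n - i - 1) for i in range(n))

assert string_to_int([1, 0, 1]) == 5          # binary expansion of 5
assert string_to_int([0, 0, 1, 0, 1]) == 5    # prepended zeros: same integer
assert string_to_int([0, 0, 0]) == 0          # every all-zero string represents 0
```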
Another common data type is that of a floating point number, which represents an
approximation to a real number. However, analyzing algorithms that use this data
type is more complex, as it requires considering error propagation. For our purposes,
we do not need to take this data type into account.
Objects of these elementary data types are represented and stored using bits. The
encoding of these objects can be defined in a straightforward manner for both bits and
Roman characters. Integers are encoded using the binary expansion of their absolute
value along with the indication of their sign. The size of these encodings refers to the
number of bits used. For a data type object 𝑎, the size of its encoding is denoted by
size 𝑎. It may vary depending on the specific computing platform, programming lan-
guage, or other relevant factors. However, detailed discussions regarding these specific
implementations are beyond the scope of this book, and we present only a summary of
the different sizes of the elementary data types in Table 1.1.1.
In our model, advanced data types include vectors and matrices over some data type, which are described in Sections B.1 and B.4. For example, (1, 2, 3) is an integer vector. Similarly, $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is a bit matrix. Vectors and matrices are also encoded by bits, using the encoding method of their data type. The encodings of vectors and matrices possess the following properties. For any $k \in \mathbb{N}$, the size of the encoding of a vector $\vec{s} = (s_0, \ldots, s_{k-1})$ over some data type satisfies
$$\mathrm{size}(\vec{s}) = O\Big(\sum_{i=0}^{k-1} \mathrm{size}\, s_i\Big). \tag{1.1.4}$$
Similarly, for any $k, l \in \mathbb{N}$, the size of the encoding of a matrix $M = (m_{i,j})_{i \in \mathbb{Z}_k, j \in \mathbb{Z}_l}$ over some data type satisfies
$$\mathrm{size}(M) = O\Big(\sum_{i \in \mathbb{Z}_k} \sum_{j \in \mathbb{Z}_l} \mathrm{size}\, m_{i,j}\Big). \tag{1.1.5}$$
Figure 1.1.1. The variables 𝑎 and 𝑏 represent memory cells that contain 19 and “bit”, respectively.
(1.1.6) 𝑎 ← 𝑏.
It sets the value of a variable 𝑎 of a certain data type to an element 𝑏 of this data type
or to the value of a variable 𝑏 of the same data type as 𝑎. The running time and space
requirement for this operation is O(size 𝑏).
Algorithms may also assign the result of an operation to a variable. An example of
such an instruction is
(1.1.7) 𝑎 ← 𝑏 + 3.
The right-hand side of this assignment is the arithmetic expression 𝑏 + 3. When such
an instruction is executed, the arithmetic expression is evaluated first. In the example,
Table 1.1.2. Permitted operations on integers, their running times, and space require-
ments for operands of size O(𝑛).
it depends on the value of the variable 𝑏. Then the result is assigned to 𝑎. It is permit-
ted that the variable on the left side also appears on the right side. For instance, the
instruction
(1.1.8) 𝑐 ← 𝑐 + 1
increments a counter 𝑐 by 1.
Next, we present the operations that may be used in expressions on the right side
of an assign instruction. The permitted operations on integers are listed in Table 1.1.2,
including their time and space requirements. The results of the operations absolute
value, floor, ceiling, next integer, add, subtract, multiply, divide, and remainder are
integers. The results of the comparisons are the bits 0 or 1 where 1 stands for “true”
and 0 stands for “false”. For a description and analysis of these algorithms see [AHU74]
and [Knu82].
In most programming languages, only integers with limited bit lengths are avail-
able, such as 64-bit integers. When there is a need to work with integers of arbitrary
length, specialized algorithms are required to handle operations on such numbers. To
simplify our descriptions, we assume that operations on integers of arbitrary length are
available as basic operations. These operations can be realized using the running time
and memory space as listed in Table 1.1.2, but there exist much more efficient algo-
rithms for integer multiplication and division with remainder. For instance, a highly
efficient integer multiplication algorithm developed by David Harvey and Joris van
der Hoeven [HvdH21] has running time O(𝑛 log 𝑛) for 𝑛-bit operands. Additionally,
it is known that for any integer multiplication algorithm with a running time of 𝑀(𝑛),
there exist division with remainder and square root algorithms with a running time of
O(𝑀(𝑛)) (see [AHU74, Theorem 8.5]). However, in practice, these faster algorithms
are only advantageous for handling very large numbers. For more typical integer sizes,
classical algorithms may still be more efficient in terms of practical performance.
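Since Python's built-in integers already have arbitrary length, the permitted integer operations can be tried out directly; a small illustrative sketch:

```python
a, b = 2 ** 100 + 1, 3 ** 40                 # operands far beyond 64 bits
q, r = divmod(a, b)                          # divide and remainder in one step
assert a == q * b + r and 0 <= r < b         # defining property of division with remainder
print(abs(-a), a > b, (a * b).bit_length())  # absolute value, comparison, size of the product
```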
On the bits 0 and 1 our algorithms can perform the logic operations that are listed
in Table 1.1.3. They implement the functions shown in Table 1.1.4. All permitted logic
operations run in time O(1) and require space O(1).
Algorithms may also use the branch statements for, while, repeat, and if. They
initiate the execution of a sequence of instructions if some branch condition is satis-
fied. In the analysis of algorithms, we will assume that the time and space required to
execute an algorithm part that uses a branch instruction is the time and space required
to evaluate the branch condition and the corresponding sequence of instructions, pos-
sibly several times. Branch instructions together with the corresponding instruction
sequence are referred to as loops.
We now provide more detailed explanations of the branch instructions using the
examples shown in Figures 1.1.2 and 1.1.3 utilizing pseudocode that is further de-
scribed in Section 1.1.5.
A for statement appears at the beginning of an instruction sequence and is ended
by an end for statement. This instruction sequence is executed for all values of a spec-
ified variable, as indicated in the for statement. In the for loop in Figure 1.1.2, the
variable is 𝑖, and the instruction 𝑝 ← 2𝑝 is executed for all 𝑖 from 𝑖 = 1 to 𝑖 = 𝑒. After 𝑖 iterations of this instruction, the value of 𝑝 is $2^i$. So after completion of the for loop, the value of 𝑝 is $2^e$.
Also, while statements appear at the beginning of an instruction sequence after
which there is an end while statement. The instruction sequence is executed as long
as the condition in the while statement is true. For instance, the while loop in Figure
1.1.2 also computes $2^e$. For this, the counting variable 𝑖 is initialized to 1 and the variable 𝑝 is initially set to 1. Before each round of the while loop, the logic expression 𝑖 ≤ 𝑒 is evaluated. If it is true, then the instruction sequence in the while loop is executed. In the example, 𝑝 is set to 2𝑝 and the counting variable 𝑖 is increased by 1. After the 𝑘th iteration of the while loop, the value of 𝑝 is $2^k$ and the counting variable is 𝑖 = 𝑘. Hence, after the 𝑒th iteration of the while loop we have $p = 2^e$ and 𝑖 = 𝑒 + 1. So the while condition is violated and the computation continues with the first instruction following the while loop.
Next, repeat statements are also followed by an instruction sequence that is ended
by an until statement that contains a condition. If this condition is satisfied, the com-
putation continues with the first instruction after the until statement. Otherwise, the
instruction sequence is executed again. In the example in Figure 1.1.2, the instruction
𝑝 ← 2𝑝 is executed until the counting variable 𝑖 is equal to 𝑒. Note that the instruction
sequence is executed at least once. Therefore, this repeat loop cannot compute $2^0$.
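Figure 1.1.2 is not reproduced in this excerpt; the three loops it contains can be rendered in Python as the following sketch, with the repeat loop emulated, since Python has no repeat/until statement.

```python
e = 10

p = 1                        # for loop: p is doubled e times
for i in range(1, e + 1):
    p = 2 * p
assert p == 2 ** e

p, i = 1, 1                  # while loop: explicit counting variable
while i <= e:                # the while condition is checked before each round
    p = 2 * p
    i = i + 1
assert p == 2 ** e and i == e + 1

p, i = 1, 0                  # repeat loop, emulated: the body runs at least once,
while True:                  # so this variant cannot compute 2**0
    p = 2 * p
    i = i + 1
    if i == e:               # the until condition
        break
assert p == 2 ** e
```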
Exercise 1.1.15. Find for, while, and repeat loops that compute the integer repre-
sented by a bit sequence 𝑠0 𝑠1 ⋯ 𝑠𝑛−1 ∈ {0, 1}𝑛 where 𝑛 ∈ ℕ.
Now we explain if statements. The three different ways to use if are shown in
Figure 1.1.3. Such a statement is followed by a sequence of instructions that is ended
by an end if statement. The instruction sequence may be interrupted by an else state-
ment or by one or more else if statements. The code segment on the left side of Figure
1.1.3 checks whether 𝑎 < 0 is true, in which case the instruction 𝑎 ← −𝑎 is executed.
Otherwise, the computation continues with the instruction following the end if state-
ment. This code segment computes the absolute value of 𝑎 because 𝑎 is set to −𝑎 if 𝑎 is
negative and otherwise remains unchanged. The code segment in the middle of Figure
1.1.3 checks whether 𝑎 is divisible by 11. If so, the variable 𝑠 is set to 1 and otherwise
to 0. Finally, the code segment on the right side of Figure 1.1.3 first checks if 𝑎 > 0 in
which case the variable 𝑠 is set to 1. Next, if 𝑎 = 0, 𝑠 is set to 0. Finally, 𝑠 is set to −1 if
𝑎 < 0. The result is the sign 𝑠 of 𝑎.
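Figure 1.1.3 is not reproduced in this excerpt; the three code segments it describes can be rendered in Python as follows (a sketch with a sample value of 𝑎).

```python
a = -7

if a < 0:          # left segment: absolute value, a is negated only if negative
    a = -a
assert a == 7

if a % 11 == 0:    # middle segment: divisibility by 11
    s = 1
else:
    s = 0

if a > 0:          # right segment: the sign of a
    s = 1
elif a == 0:
    s = 0
else:
    s = -1
assert s == 1
```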
A computation terminates when the return instruction is used. This instruction
makes the result available for further use and takes the form
(1.1.9) return(𝑎)
where 𝑎 is an element or variable of some data type or a sequence of such objects. The
time and space requirement for this operation is O(𝑆), where 𝑆 represents the sum of
the sizes of the objects in the return statement.
Algorithms can also return the result of expressions. For instance, a return in-
struction may be of the form
(1.1.10) return(𝑏 + 3).
In this case, the expression is evaluated first, and then the result is returned. The time
and space requirements of this instruction are O(𝑡) and O(𝑠), respectively, where 𝑡 and
𝑠 denote the time and space required to evaluate the corresponding expression.
Finally, a call to a subroutine is also considered a valid instruction in our model.
It takes the form
(1.1.11) 𝑎 ← 𝐴(𝑏)
where 𝐴 is the name of the subroutine and 𝑏 is an input for it; for example, 𝑝 ← 𝗉𝗈𝗐𝖾𝗋(𝑎, 𝑒).
Here, the call 𝗉𝗈𝗐𝖾𝗋(𝑎, 𝑒) invokes the subroutine, returning the result of $a^e$, where 𝑎
and 𝑒 are both nonnegative integers.
The time and space requirements associated with a subroutine call are O(𝑡) and
O(𝑠), respectively, where 𝑡 and 𝑠 represent the time and space required by the subrou-
tine to execute. Section 1.4 provides an explanation of how these requirements are
determined.
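The power subroutine itself does not appear in this excerpt; a minimal Python sketch consistent with the description above (both arguments nonnegative integers, result $a^e$) could look as follows.

```python
def power(a, e):
    # compute a**e by e-fold multiplication; a and e are nonnegative integers
    p = 1
    for _ in range(e):
        p = p * a
    return p

c = power(2, 10)   # a subroutine call whose result is assigned to a variable
assert c == 1024
```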
But in the next sections, we formally define the properties of algorithms. These defini-
tions can be made precise if a formal model of computation is used, such as the Turing
machine model.
We illustrate our algorithm model using the Euclidean algorithm. The correspond-
ing pseudocode is shown in Algorithm 1.1.16.
(1) Each run of the algorithm with a permitted input carries out a return instruction.
This means that the algorithm terminates on any input 𝑎 ∈ Input(𝐴).
(2) When the algorithm performs a return instruction, the return value is correct;
i.e., it has the property specified in the Output statement.
(3) Executing the return instruction is the only way the algorithm can terminate.
This means that after executing a statement that is not a return instruction there
is always a next instruction that the algorithm carries out.
Example 1.1.17. We describe the run of the Euclidean algorithm with input (𝑎, 𝑏) =
(100, 35). The instructions in lines 2 and 3 replace 𝑎 and 𝑏 by their absolute values. For
the chosen input, they have no effect. Since 𝑏 = 35, the while condition is satisfied.
Hence, the Euclidean algorithm executes 𝑟 ← 100 mod 35 = 30, 𝑎 ← 𝑏 = 35, and
𝑏 ← 𝑟 = 30. After this, the while condition is still satisfied since 𝑏 = 30. So the
Euclidean algorithm executes 𝑟 ← 35 mod 30 = 5, 𝑎 ← 𝑏 = 30, and 𝑏 ← 𝑟 = 5. Also,
after this iteration of the while loop, the while condition is still satisfied since 𝑏 = 5.
The Euclidean algorithm executes 𝑟 ← 30 mod 5 = 0, 𝑎 ← 𝑏 = 5, and 𝑏 ← 𝑟 = 0. Now,
the while condition is violated. So the while loop is no longer executed. Instead, the
return instruction following end while is carried out. This means that the algorithm
returns 5 which is gcd(100, 35).
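The pseudocode of Algorithm 1.1.16 does not appear in this excerpt; the following Python sketch reproduces the behavior just described, with the line numbers referred to in Example 1.1.17 noted as comments.

```python
def euclid(a, b):
    a, b = abs(a), abs(b)   # lines 2 and 3: replace a and b by their absolute values
    while b != 0:           # line 4: the while condition
        r = a % b           # line 5: remainder of the division of a by b
        a = b
        b = r
    return a                # here b = 0, so gcd(a, b) = gcd(a, 0) = a

assert euclid(100, 35) == 5   # the run traced in Example 1.1.17
```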
Table 1.1.5. Beginning of the run of the Euclidean algorithm with input (100, 35).
Table 1.1.6. End of the run of the Euclidean algorithm with input (100, 35).
algorithm”. For instance, consider State 3 in Table 1.1.5. The value of 𝑏 is 35. So the
while condition is satisfied. The execution of the while instruction does not change
the values of 𝑎, 𝑏, or 𝑟 and causes the next instruction to be 𝑟 ← 𝑎 mod 𝑏. So State 4 is
uniquely determined by State 3.
Since we require deterministic algorithms to always terminate, the same state can-
not occur repeatedly in an algorithm run. Otherwise, the algorithm would enter an
infinite loop. In other words, the states in algorithm runs are pairwise different.
It is important to prove the correctness of an algorithm. This means that on input of
any 𝑎 ∈ Input(𝐴) the algorithm terminates and its output has the specified properties.
In Example 1.1.18, we present the correctness proof of the Euclidean algorithm.
Example 1.1.18. We prove the correctness of the Euclidean algorithm. First, note that
after 𝑏 is replaced by its absolute value, the sequence of values of 𝑏 is strictly decreasing
since starting from the second 𝑏, any such value is the remainder of a division by the
previous value of 𝑏. So at some point, we must have 𝑏 = 0 which means that the
algorithm terminates. Next, as Exercise 1.1.19 shows, the value of gcd(𝑎, 𝑏) in line 4
is always the same. But when the algorithm terminates, we have 𝑏 = 0 and therefore
gcd(𝑎, 𝑏) = gcd(𝑎, 0) = 𝑎. The fact that gcd(𝑎, 𝑏) does not change is called an algorithm
invariant. Such invariants are frequently used in correctness proofs of algorithms.
Exercise 1.1.19. Show that in line 4 of the Euclidean algorithm, the value of gcd(𝑎, 𝑏)
is always the same.
Exercise 1.1.22. Let 𝑎 = 35. Determine the first three and the last three states of the
run of Algorithm 1.1.21.
There is a close connection between decision and more general algorithms. For ex-
ample, as shown in Example 1.4.21, an algorithm that decides whether an integer has
a proper divisor below a given bound can be transformed into an integer factoring al-
gorithm with almost the same efficiency. This can be generalized to many algorithmic
problems.
1.1.7. Time and space complexity. Let 𝐴 be an algorithm. Its efficiency de-
pends on the time complexity and the memory requirements of 𝐴 which we discuss in
this section.
Definition 1.1.25. (1) The running time or time complexity of 𝐴 for a particular in-
put 𝑎 ∈ Input(𝐴) is the sum of the time required for reading the input 𝑎 which
is O(size(𝑎)) and the running times of the instructions executed during the algo-
rithm run with input 𝑎.
(2) The worst-case running time or worst-case time complexity of 𝐴 is the function
(1.1.13) wTime𝐴 ∶ ℕ → ℝ≥0
that sends a positive integer 𝑛 which is the size of an input of 𝐴 to the maximum
running time of 𝐴 over all inputs of size 𝑛. If 𝑛 is not the size of an input of 𝐴, then we set wTime𝐴 (𝑛) = 0.
Using the Definitions 1.1.25 and 1.1.26, we define the asymptotic time and space
complexity of deterministic algorithms.
Definition 1.1.27. Let 𝑓 ∶ ℕ → ℝ>0 be a function. We say that 𝐴 has asymptotic worst-
case running time or space complexity O(𝑓) if wTime𝐴 = O(𝑓) or wSpace𝐴 = O(𝑓),
respectively. The words “asymptotic” and “worst-case” may also be omitted.
It is common to use special names for certain time and space complexities. Several
of these names are listed in Table 1.1.7.
Exercise 1.1.28. Show that quasilinear complexity can also be written as $n^{1+o(1)}$, polynomial complexity as $n^{O(1)}$ or $2^{O(\log n)}$, subexponential complexity as $2^{o(n)}$, and exponential complexity as $2^{n^{O(1)}}$.
Example 1.1.29. We analyze the time and space complexity of the Euclidean Algo-
rithm 1.1.16. Let 𝑎, 𝑏 ∈ ℤ be the input of the algorithm, and let 𝑛 be the maximum
of size(𝑎) and size(𝑏). The time to read the input 𝑎, 𝑏 is O(𝑛). After the operations in
lines 2 and 3 we have 𝑎, 𝑏 ≥ 0. The time and space complexity of these instructions
is O(𝑛). If 𝑏 = 0, then the while loop is not executed and 𝑎 is returned, which takes
time O(𝑛). If 𝑏 ≠ 0 and 𝑎 ≤ 𝑏, then after the first iteration of the while loop, we have
𝑏 < 𝑎. It follows from Exercise 1.1.30 that the total number of executions of the while
loop is O(𝑛). Also, by this exercise, the size of the operands used in the executions of
the while loop is O(𝑛). So, the running time of each iteration is $O(n^2)$ and the space requirement is O(𝑛). This shows that the worst-case running time of the Euclidean algorithm is $O(n^3)$ and the worst-case space complexity is O(𝑛). Thus, the Euclidean
algorithm has cubic running time. Using more complicated arguments, it can even be
shown that this algorithm has quadratic running time (see Theorem 1.10.5 in [Buc04]).
What is the practical relevance of worst-case running times when comparing algo-
rithms? Let us take two algorithms, 𝐴 and 𝐴′ , both designed to solve the same problem,
such as computing the greatest common divisors. It is essential to note that if algorithm
𝐴 has a smaller asymptotic running time than algorithm 𝐴′ , it does not automatically
make 𝐴 superior to 𝐴′ in practice. This comparison only indicates that 𝐴 is faster than
𝐴′ for inputs greater than a certain length. However, this input length can be so large
that it becomes irrelevant for most real-world use cases.
For example, in [AHU74] it is shown that for any integer multiplication algorithm
with a worst-case time complexity of 𝑀(𝑛), there exists a gcd algorithm with a worst-
case time complexity of O(𝑀(𝑛) log(𝑛)). Additionally, [HvdH21] presents an integer
multiplication algorithm with a worst-case running time of O(𝑛 log 𝑛). As a result, there is a corresponding gcd algorithm with a worst-case running time of $O(n \log^2 n)$. However, this improved complexity only outperforms the $O(n^2)$ algorithm for very large integers, which rarely occur in practice.
Exercise 1.1.30. Let 𝑎, 𝑏 ∈ ℕ, 𝑎 > 𝑏, be the input of the Euclidean algorithm. Let
𝑟0 = 𝑎 and 𝑟1 = 𝑏. Denote by 𝑘 the number of iterations of the while loop executed
in the algorithm and denote by 𝑟2 , 𝑟3 , . . . , 𝑟 𝑘+1 the sequence of remainders 𝑟 which are
computed in line 5 of the Euclidean algorithm. Prove that the sequence (𝑟 𝑖 )0≤𝑖≤𝑘+1 is
strictly decreasing and that 𝑟 𝑖+2 < 𝑟 𝑖 /2 for all 𝑖 ∈ ℤ𝑘 . Conclude that 𝑘 = O(size 𝑎).
Example 1.1.31. We determine the worst-case time and space complexity of the Deterministic Factoring Algorithm 1.1.21. Let 𝑛 = bitLength 𝑎. The number of iterations of the for loop in this algorithm is $O(2^{n/2})$. Each iteration of the for loop requires time $O(n^2)$ and space O(𝑛). Hence, the worst-case time complexity of Algorithm 1.1.21 is $O(n^2 2^{n/2}) = 2^{O(n)}$ and the worst-case space complexity is O(𝑛). So, the algorithm has exponential running time and linear space complexity.
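Algorithm 1.1.21 does not appear in this excerpt; a trial-division sketch in Python that matches the analysis above (a for loop over the $O(2^{n/2})$ candidate divisors up to $\sqrt{a}$) might look like this. Returning 0 when no proper divisor exists is an assumption made here.

```python
import math

def factor(a):
    # try all candidate divisors b with 2 <= b <= sqrt(a): O(2^(n/2)) iterations
    for b in range(2, math.isqrt(a) + 1):
        if a % b == 0:
            return b        # b is the smallest proper divisor of a
    return 0                # assumed convention: 0 signals that a is prime

assert factor(35) == 5
assert factor(13) == 0
```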
(1) The probabilistic algorithm 𝐴 may call the subroutine coinToss. It returns 0 or 1, both with probability 1/2.
(2) The probabilistic algorithm 𝐴 may call other probabilistic algorithms as subroutines if they satisfy the following condition. Given a permitted input, they terminate
and return one of finitely many possible outputs according to a probability distri-
bution.
(3) The run of 𝐴 on input of some 𝑎 ∈ Input(𝐴) may depend on 𝑎 and the return
values of the probabilistic subroutines called during the run of the algorithm.
Therefore, in contrast to deterministic algorithms, this run may not be uniquely
determined by 𝑎.
(4) 𝐴 may not terminate, since termination may depend on certain return values of
some probabilistic subroutine that may never occur.
(5) Let 𝑎 ∈ Input(𝐴) and suppose that 𝐴 terminates on input of 𝑎 with output 𝑜.
Then 𝑜 may not be uniquely determined by 𝑎, but it may also depend on the return
values of the probabilistic subroutine calls during the run of 𝐴. Also, we may have 𝑜 ∈ Output(𝐴, 𝑎), or 𝑜 = “Failure”, which indicates that the algorithm did not find a correct output, or 𝑜 may have neither of these properties.
(6) Due to the special meaning of the return value “Failure”, it must never be a correct
output.
Algorithm 1.2.2. Selecting a uniformly distributed random bit string of fixed length
Input: 𝑘 ∈ ℕ
Output: 𝑠 ∈ {0, 1}𝑘
1: randomString(𝑘)
2: for 𝑖 = 0 to 𝑘 − 1 do
3: 𝑠𝑖 ← coinToss
4: end for
5: return 𝑠 ⃗ = (𝑠0 , . . . , 𝑠𝑘−1 )
6: end
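In Python, the subroutine coinToss can be modeled with the standard random module; a direct rendering of Algorithm 1.2.2:

```python
import random

def coin_toss():
    return random.randint(0, 1)             # 0 or 1, each with probability 1/2

def random_string(k):
    return [coin_toss() for _ in range(k)]  # (s_0, ..., s_{k-1})

print(random_string(8))                     # e.g. [1, 0, 0, 1, 1, 0, 1, 0]
```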
numbers. Moreover, as shown in [AGP94], there are infinitely many Carmichael num-
bers. Since Carmichael numbers are composite, the Fermat test will return 0 for these
inputs, making the algorithm non-error-free.
Exercise 1.2.5. (1) Write pseudocode for the Fermat test described in Example 1.2.4.
(2) Find a composite number 𝑎 such that on input of 𝑎 the algorithm of Example 1.2.4
sometimes returns 0 and sometimes 1.
Example 1.2.9. Algorithm 1.2.10 is a Las Vegas factoring algorithm which calls
𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋 until a proper divisor of 𝑎 is found. This may take forever. But if the al-
gorithm terminates, then it is successful.
The approach used in Algorithm 1.2.10 can be extended to create a more general
version, allowing any error-free Monte Carlo algorithm 𝐴 to be transformed into a Las
Vegas algorithm. This transformation is achieved through Algorithm 1.2.11. When
given an input 𝑎 ∈ Input(𝐴), this algorithm repeatedly executes 𝐴(𝑎) until a successful
outcome is obtained. As this algorithm is akin to performing a Bernoulli experiment,
we refer to it as the Bernoulli algorithm associated with 𝐴.
Algorithm 1.2.11. Bernoulli algorithm associated with an error-free Monte Carlo al-
gorithm 𝐴
Input: 𝑎 ∈ Input(𝐴)
Output: 𝑏 ∈ Output(𝐴, 𝑎)
1: 𝖻𝖾𝗋𝗇𝗈𝗎𝗅𝗅𝗂𝐴 (𝑎)
2: 𝑏 ← “Failure”
3: while 𝑏 = “Failure” do
4: 𝑏 ← 𝐴(𝑎)
5: end while
6: return 𝑏
7: end
On the other hand, every Las Vegas algorithm can indeed be transformed into an
error-free Monte Carlo algorithm. This conversion entails monitoring the number of
calls made to the probabilistic subroutines while the algorithm runs. The algorithm
terminates if the Las Vegas algorithm produces a successful outcome or if the count of
subroutine calls exceeds a predetermined threshold value, which may vary depending
on the specific input of the algorithm. In the event of success, the algorithm returns the
output of the Las Vegas algorithm. However, if the threshold is surpassed, it returns
the result “Failure.”
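A minimal Python sketch of this conversion; passing the threshold as a parameter and counting whole calls of 𝐴 instead of individual probabilistic subroutine calls are simplifying assumptions made here.

```python
def monte_carlo_from_las_vegas(A, a, threshold):
    # run the error-free Monte Carlo algorithm A at most `threshold` times
    for _ in range(threshold):
        b = A(a)
        if b != "Failure":
            return b        # the Las Vegas loop would have stopped here
    return "Failure"        # threshold exceeded: give up instead of looping forever
```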
Exercise 1.2.12. Change Algorithm 1.2.10 to an error-free Monte Carlo algorithm that,
on input of 𝑎 ∈ ℤ>1 , performs at most bitLength(𝑎) coin tosses.
membership of 𝑠 ⃗ ∈ {0, 1}∗ in a language 𝐿 ⊂ {0, 1}∗ . Such an algorithm always re-
turns 1 or 0. It satisfies Output(𝐴, 𝑠)⃗ = {1} for all 𝑠 ⃗ ∈ 𝐿 and Output(𝐴, 𝑠)⃗ = {0} for all
𝑠 ⃗ ∈ {0, 1}∗ ⧵ 𝐿. However, recall that runs of probabilistic decision algorithms do not
have to be successful. So, the algorithm may return 0 if 𝑠 ⃗ ∈ 𝐿 and 1 if 𝑠 ⃗ ∈ {0, 1}∗ ⧵ 𝐿.
There are three different types of probabilistic decision algorithms. To define them,
let 𝐴 be a probabilistic algorithm that decides a language 𝐿.
(1) 𝐴 is called true-biased if it never returns false positives. So, if on input of 𝑠 ⃗ ∈ {0, 1}∗
the algorithm returns 1, then 𝑠 ⃗ ∈ 𝐿.
(2) 𝐴 is called false-biased if it never returns false negatives. So, if on input of 𝑠 ⃗ ∈ {0, 1}∗ the algorithm returns 0, then 𝑠 ⃗ ∉ 𝐿.
(3) If 𝐴 is true-biased or false-biased, then it is also called an algorithm with one-sided
error.
(4) 𝐴 is called an algorithm with two-sided error if it can return false positives and
false negatives.
Note that a false-biased algorithm can always be transformed into a true-biased
algorithm. We only need to replace the language to be decided by its complement in
{0, 1}∗ and change the outputs 0 and 1.
Example 1.2.14. Consider Algorithm 1.2.15, which decides whether the integer that corresponds to a string in {0, 1}∗ is composite. On input of 𝑠 ⃗ ∈ {0, 1}∗ , the
algorithm computes the corresponding integer 𝑎 and calls 𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋. If this subroutine
returns a proper divisor of 𝑎, then the algorithm returns 1. Otherwise, it returns 0.
This is a true-biased Monte Carlo decision algorithm. If it returns 1, then 𝑠 ⃗ represents
a composite integer. But if it returns 0, then the integer represented by 𝑠 ⃗ may or may
not be composite.
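The pseudocode of Algorithm 1.2.15 does not appear in this excerpt; a Python sketch in the same spirit, assuming an mcFactor that draws a single uniformly random candidate divisor (the candidate range [2, 𝑎 − 1] is an assumption here):

```python
import random

def mc_factor(a):
    # one random attempt to find a proper divisor of a (error-free Monte Carlo)
    if a <= 3:
        return "Failure"               # 2 and 3 have no proper divisors
    b = random.randint(2, a - 1)       # assumed candidate range
    return b if a % b == 0 else "Failure"

def mc_composite(s):
    a = int("".join(map(str, s)), 2) if s else 0   # stringToInt from (1.1.3)
    return 1 if mc_factor(a) != "Failure" else 0   # 1 only if a proper divisor was found

# true-biased: an output of 1 proves compositeness, an output of 0 proves nothing
print(mc_composite([1, 0, 0, 1]))   # 9 is composite; prints 1 only when the candidate divides 9
```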
returns a false negative answer if 𝑎 is composite and coinToss and 𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋 both return
0. Also, it returns a false positive answer if 𝑎 is a prime number and coinToss gives 1.
Algorithm 1.2.17. Monte Carlo compositeness decision algorithm with two-sided er-
ror
Input: 𝑠 ⃗ ∈ {0, 1}∗
Output: 1 if stringToInt(𝑠)⃗ is composite and 0 otherwise
1: 𝗆𝖼𝖢𝗈𝗆𝗉𝗈𝗌𝗂𝗍𝖾2(𝑠)⃗
2: 𝑎 ← stringToInt(𝑠)⃗
3: 𝑐 ← coinToss
4: 𝑏 ← 𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋(𝑎)
5: if 𝑐 = 1 ∨ 𝑏 ∈ ℕ then
6: return 1
7: else
8: return 0
9: end if
10: end
1.3.1. A discrete probability space. Our first goal is to define a discrete prob-
ability space that is the basis of the analyses. In this section, 𝐴 denotes a probabilistic
algorithm. We first introduce some notation.
Consider a run 𝑅 of 𝐴 with input 𝑎 ∈ Input(𝐴) and let 𝑙 ∈ ℕ0 ∪ {∞} be the num-
ber of probabilistic subroutine calls in 𝑅. For instance, in Algorithm 1.2.2 we have
Input(𝐴) = ℕ and for 𝑎 ∈ ℕ it holds that 𝑙 = 𝑎. In contrast, in Algorithm 1.2.10, the
number 𝑙 of probabilistic subroutine calls may be infinite.
For all 𝑘 ∈ ℕ, 𝑘 ≤ 𝑙, let 𝑎𝑘 be the input of the 𝑘th probabilistic subroutine call in 𝑅 if
this subroutine requires an input, let 𝑟 𝑘 be its output, and let 𝑝 𝑘 be the probability that
on input of 𝑎𝑘 the output 𝑟 𝑘 occurs. These quantities are well-defined since we require
that probabilistic algorithms may only use probabilistic subroutines that on any input
terminate and return one of finitely many possible outputs according to some proba-
bility distribution. For example, for the probabilistic subroutine coinToss there is no input, the output is 0 or 1, and the probability of both outputs is 1/2. We call 𝑟 ⃗ = (𝑟 𝑘 )𝑘≤𝑙
the random sequence of the run 𝑅. So the random sequence of a run of randomInt with
input 𝑎 ∈ ℕ is in {0, 1}𝑎 .
We denote the set of all random sequences of runs of 𝐴 with input 𝑎 by Rand(𝐴, 𝑎)
and the set of finite strings in Rand(𝐴, 𝑎) by FRand(𝐴, 𝑎). So for 𝐴 = randomInt
and 𝑎 ∈ ℕ we have Rand(𝐴, 𝑎) = FRand(𝐴, 𝑎) = {0, 1}𝑎 . We note that for any
𝑎 ∈ Input(𝐴), each 𝑟 ⃗ ∈ Rand(𝐴, 𝑎) is the random sequence of exactly one run of 𝐴. We
call it the run of 𝐴 corresponding to 𝑟.⃗ This run terminates if and only if 𝑟 ⃗ ∈ FRand(𝐴, 𝑎)
in which case we write 𝐴(𝑎, 𝑟)⃗ for the return value of this run.
Finally, let 𝑘 ∈ ℕ0 , 𝑘 ≤ 𝑙, and let 𝑟 ⃗ = (𝑟0 , . . . , 𝑟 𝑘−1 ) be a prefix of a random sequence of a run of 𝐴 with input 𝑎. Also, for 0 ≤ 𝑖 < 𝑘 denote by 𝑝 𝑖 the probability for the return value 𝑟 𝑖 to occur. Then we set
$$\Pr_{A,a}(\vec{r}) = \prod_{i=0}^{k-1} p_i. \tag{1.3.1}$$
This is the probability that 𝑟 ⃗ occurs as the prefix of the random sequence of a run of 𝐴 with input 𝑎. For instance, if 𝐴 = randomInt and 𝑎 ∈ ℕ, then for all 𝑘 ∈ ℕ0 with 𝑘 ≤ 𝑎 and all 𝑟 ⃗ ∈ {0, 1}𝑘 , we have $\Pr_{A,a}(\vec{r}) = 1/2^k$.
Exercise 1.3.1. Determine Rand(𝐴, 𝑎), FRand(𝐴, 𝑎), and Pr𝐴,𝑎 for 𝐴 = 𝗅𝗏𝖥𝖺𝖼𝗍𝗈𝗋 spec-
ified in Algorithm 1.2.10 and 𝑎 ∈ Input(𝐴) = ℕ>1 .
The next lemma allows the definition of the probability distribution that we are looking for.

Lemma 1.3.2. Let 𝑎 ∈ Input(𝐴). The (possibly infinite) sum
$$\sum_{\vec{r} \in \mathrm{FRand}(A,a)} \Pr_{A,a}(\vec{r}) \tag{1.3.2}$$
converges, its limit is in the interval [0, 1], and it is independent of the ordering of the terms in the sum.
Proof. First, note the following. If the sum in (1.3.2) is convergent, then it is absolutely convergent since the terms in the sum are nonnegative. So Theorem C.1.4 implies that its limit is independent of the ordering of the terms in the sum.
To prove the convergence of the sum, set
$$t_k = \sum_{\vec{r} \in \mathrm{FRand}(A,a),\; |\vec{r}| \le k} \Pr_{A,a}(\vec{r}) \tag{1.3.3}$$
for all 𝑘 ∈ ℕ0 . Then the sum in (1.3.2) is convergent if and only if the sequence (𝑡 𝑘 ) converges. For 𝑘 ∈ ℕ0 let Rand𝑘 be the set of all prefixes of length at most 𝑘 of sequences in Rand(𝐴, 𝑎). We will prove below that
$$\sum_{\vec{r} \in \mathrm{Rand}_k} \Pr_{A,a}(\vec{r}) = 1 \tag{1.3.4}$$
for all 𝑘 ∈ ℕ0 . Since the elements of the sequence (𝑡 𝑘 ) are nondecreasing and, by (1.3.4), bounded above by 1, this proves the convergence of (𝑡 𝑘 ) and thus of the infinite sum (1.3.2).
We will now prove (1.3.4) by induction on 𝑘. Since Rand0 only contains the empty sequence, (1.3.4) holds for 𝑘 = 0. For the inductive step, assume that 𝑘 ∈ ℕ0 and that (1.3.4) holds for 𝑘. Denote by Rand′𝑘 the set of all sequences of length at most 𝑘 in Rand(𝐴, 𝑎) and denote by Rand″𝑘 the set of sequences of length 𝑘 that are proper prefixes of strings in Rand(𝐴, 𝑎). For 𝑟 ⃗ ∈ Rand″𝑘 let 𝑚(𝑟)⃗ be the number of possible outputs of the (𝑘 + 1)st call of a probabilistic subroutine when the sequence of return values of the previous calls was 𝑟,⃗ let 𝑟 𝑖 (𝑟)⃗ be the 𝑖th of these outputs, and let 𝑝 𝑖 (𝑟)⃗ be its probability. These quantities exist by the definition of probabilistic algorithms. Then we have
$$\mathrm{Rand}_{k+1} = \mathrm{Rand}'_k \cup \{\vec{r}\,\|\,r_i(\vec{r}) : \vec{r} \in \mathrm{Rand}''_k \text{ and } 1 \le i \le m(\vec{r})\}. \tag{1.3.6}$$
Also, we have
$$\sum_{i=1}^{m(\vec{r})} p_i(\vec{r}) = 1 \tag{1.3.7}$$
for all 𝑟 ⃗ ∈ Rand″𝑘 . This implies
$$\begin{aligned} \sum_{\vec{r} \in \mathrm{Rand}_{k+1}} \Pr_{A,a}(\vec{r}) &= \sum_{\vec{r} \in \mathrm{Rand}'_k} \Pr_{A,a}(\vec{r}) + \sum_{\vec{r} \in \mathrm{Rand}''_k} \sum_{i=1}^{m(\vec{r})} \Pr_{A,a}(\vec{r}\,\|\,r_i(\vec{r})) \\ &= \sum_{\vec{r} \in \mathrm{Rand}'_k} \Pr_{A,a}(\vec{r}) + \sum_{\vec{r} \in \mathrm{Rand}''_k} \Pr_{A,a}(\vec{r}) \sum_{i=1}^{m(\vec{r})} p_i(\vec{r}) \\ &= \sum_{\vec{r} \in \mathrm{Rand}_k} \Pr_{A,a}(\vec{r}) = 1. \qquad \square \end{aligned}$$
Lemma 1.3.2 allows the definition of the probability distribution that we are looking for. This is done in the following proposition.

Proposition 1.3.3. Let 𝑎 ∈ Input(𝐴) and set
$$\Pr_{A,a}(\infty) = 1 - \sum_{\vec{r} \in \mathrm{FRand}(A,a)} \Pr_{A,a}(\vec{r}).$$
Then (FRand(𝐴, 𝑎) ∪ {∞}, Pr𝐴,𝑎 ) is a discrete probability space. Also, if Pr𝐴,𝑎 (∞) = 0, then (FRand(𝐴, 𝑎), Pr𝐴,𝑎 ) is a discrete probability space.
For 𝑎 ∈ Input(𝐴) and 𝑟 ⃗ ∈ FRand(𝐴, 𝑎), the value Pr𝐴,𝑎 (𝑟)⃗ is the probability that
𝑟 ⃗ is the random sequence of a run of 𝐴 with input 𝑎. Also, Pr𝐴,𝑎 (∞) is the probability
that on input of 𝑎, the algorithm 𝐴 does not terminate.
An important type of algorithms 𝐴 that satisfy Pr𝐴,𝑎 (∞) = 0 for all 𝑎 ∈ Input(𝐴) is Monte Carlo algorithms. We now show that they are exactly the probabilistic algo-
rithms that, according to the specification in Section 1.2.1, can be called by probabilistic
algorithms as subroutines.
Proposition 1.3.5. Let 𝐴 be a Monte Carlo algorithm and let 𝑎 ∈ Input(𝐴). Then the
following hold.
(1) The running time of 𝐴 on input of 𝑎 is bounded by some 𝑘 ∈ ℕ that may depend on
𝑎.
(2) On input of 𝑎, algorithm 𝐴 returns one of finitely many possible outputs according
to a probability distribution.
Proof. We first show that the length of all 𝑟 ⃗ ∈ FRand(𝐴, 𝑎) is bounded by some 𝑘 ∈ ℕ.
This shows that there are only finitely many possible runs of 𝐴 on input of 𝑎 which
implies the first assertion.
Assume that no such upper bound exists. We inductively construct prefixes 𝑟 𝑘⃗ = (𝑟0 , . . . , 𝑟 𝑘 ), 𝑘 ∈ ℕ0 , of an infinite sequence 𝑟 ⃗ = (𝑟0 , 𝑟1 , . . .) that are also prefixes of
arbitrarily long strings in Rand(𝐴, 𝑎); that is, for all 𝑘 ∈ ℕ0 and 𝑙 ∈ ℕ the sequence 𝑟 𝑘⃗
is a prefix of a sequence in Rand(𝐴, 𝑎) of length at least 𝑙. Then 𝑟 ⃗ is an infinite sequence
in Rand(𝐴, 𝑎) that contradicts the assumption that 𝐴 is a Monte Carlo algorithm.
For the base case, we set 𝑟0⃗ = (). This is a prefix of all strings in Rand(𝐴, 𝑎) that, by
our assumption, may be arbitrarily long. For the inductive step, assume that 𝑘 ∈ ℕ and
that we have constructed 𝑟 𝑘−1 ⃗ = (𝑟0 , . . . , 𝑟 𝑘−1 ). By the definition of probabilistic algo-
rithms, there are finitely many possibilities to select 𝑟 𝑘 in such a way that the sequence
(𝑟0 , . . . , 𝑟 𝑘 ) is the prefix of a string in Rand(𝐴, 𝑎). For at least one of these choices, this
sequence is a prefix of arbitrarily long strings in Rand(𝐴, 𝑎) because, by the induction
hypothesis, 𝑟 𝑘−1 ⃗ has this property. We select such an 𝑟 𝑘 and this concludes the induc-
tive construction and the proof of the first assertion.
Together with Proposition 1.3.3, the first assertion of the proposition implies the
second one. □
We set
$$p_A(a) = \sum_{\vec{r} \in \mathrm{Rand}_{\mathrm{succ}}(A,a)} \Pr_{A,a}(\vec{r}) \tag{1.3.10}$$
and call this quantity the success probability of 𝐴 on input of 𝑎. Also, the value
$$q_A(a) = 1 - p_A(a) \tag{1.3.11}$$
is called the failure probability of 𝐴 on input of 𝑎.
Exercise 1.3.7. Prove that for all 𝑎 ∈ Input(𝐴), the sum in (1.3.10) is convergent and
its limit is independent of the ordering of the terms in the sum.
Example 1.3.8. Let 𝐴 = 𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋 specified in Algorithm 1.2.8 and let 𝑎 ∈ Input(𝐴) = ℕ>1 . Then Randsucc (𝐴, 𝑎) is the set of all sequences (𝑏) where 𝑏 is a proper divisor of 𝑎 of bitlength at most m(𝑎). By Exercise 1.2.7, this set is not empty. Therefore, the success probability 𝑝𝐴 (𝑎) of 𝐴 on input of 𝑎 is at least $1/2^{m(a)}$.
We can use the definition of the success probability to show that Bernoulli algo-
rithms terminate with probability 1.
Proposition 1.3.9. Let 𝐴 be a Bernoulli algorithm. Then we have Pr𝐴,𝑎 (∞) = 0 for all
𝑎 ∈ Input(𝐴).
Proof. Denote by 𝐴′ the error-free Monte Carlo algorithm used in 𝐴. Let 𝑎 ∈ Input(𝐴′ ). Then FRand(𝐴, 𝑎) consists of all strings 𝑟 ⃗ = 𝑟1⃗ ‖ ⋯ ‖𝑟 𝑘⃗ where 𝑘 ∈ ℕ, 𝑟 𝑖⃗ ∈ Rand(𝐴′ , 𝑎) for 1 ≤ 𝑖 ≤ 𝑘, 𝐴′ (𝑎, 𝑟 𝑖⃗ ) = “Failure” for 1 ≤ 𝑖 < 𝑘, and 𝐴′ (𝑎, 𝑟 𝑘⃗ ) ≠ “Failure”. So we obtain
$$\sum_{\vec{r} \in \mathrm{FRand}(A,a)} \Pr_{A,a}(\vec{r}) = p_{A'}(a) \sum_{k=0}^{\infty} (1 - p_{A'}(a))^k = \frac{p_{A'}(a)}{p_{A'}(a)} = 1. \qquad \square \tag{1.3.12}$$
Example 1.3.11. Let 𝐴 = 𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋 which is specified in Algorithm 1.2.8 and let 𝑎 ∈ Input(𝐴). Then FRand(𝐴, 𝑎) is the set of all one-element sequences (𝑏), where 𝑏 is an integer that can be represented by a bit string of length m(𝑎). So $|\mathrm{FRand}(A,a)| \le 2^{m(a)}$. Also, by Proposition 1.3.5 we have Pr𝐴,𝑎 (∞) = 0. So eTime𝐴 (𝑎) is defined. Since each run of 𝐴 on input 𝑎 has running time $O(\mathrm{size}^2 a)$, we have
$$\mathrm{eTime}_A(a) = O\Big(\mathrm{size}^2 a \sum_{\vec{r} \in \mathrm{FRand}(A,a)} \frac{1}{2^{m(a)}}\Big) = O(\mathrm{size}^2 a). \tag{1.3.14}$$
The next proposition determines the expected running time of Bernoulli algorithms.
Proposition 1.3.12. Let 𝐴 be an error-free Monte Carlo algorithm, let 𝑎 ∈ Input(𝐴),
and let 𝑡 be an upper bound on the running time of 𝐴 with input of 𝑎. Then the expected
running time of 𝖻𝖾𝗋𝗇𝗈𝗎𝗅𝗅𝗂𝐴 (𝑎) specified in Algorithm 1.2.11 is O(𝑡/𝑝𝐴 (𝑎)).
Proof. We use the fact that for all 𝑐 ∈ ℝ with 0 ≤ 𝑐 < 1 we have
$$\sum_{k=0}^{\infty} k c^k = \frac{c}{(1-c)^2}. \tag{1.3.15}$$
So the expected number of calls of 𝐴 until 𝖻𝖾𝗋𝗇𝗈𝗎𝗅𝗅𝗂𝐴 (𝑎) is successful is
$$p_A(a) \sum_{k=1}^{\infty} k\, q_A(a)^{k-1} = \frac{p_A(a)}{p_A(a)^2} = \frac{1}{p_A(a)}. \tag{1.3.16}$$
The statement about the expected running time is an immediate consequence of this result. □
Example 1.3.13. Proposition 1.3.12 allows the analysis of 𝗅𝗏𝖥𝖺𝖼𝗍𝗈𝗋 specified in Algorithm 1.2.10. Let 𝑛 ∈ ℕ and let 𝑎 ∈ ℕ>1 be an input of size 𝑛. It follows from Example 1.3.8 that the success probability of 𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋(𝑎) is at least $1/2^{m(a)} \ge 1/2^{n/2+1}$. Also, the worst-case running time of 𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋(𝑎) is $O(n^2)$. It therefore follows from Proposition 1.3.12 that the expected running time of 𝗅𝗏𝖥𝖺𝖼𝗍𝗈𝗋(𝑎) is $O(n^2 2^{n/2})$. So the expected running time is exponential, which shows that this probabilistic algorithm has no advantage over the deterministic Algorithm 1.1.21.
Definition 1.3.15. Let 𝑎 ∈ Input(𝐴). We denote the success probability of 𝗋𝖾𝗉𝖾𝖺𝗍𝐴 (𝑎, 𝑘)
by 𝑝𝐴 (𝑎, 𝑘) and the failure probability of this call by 𝑞𝐴 (𝑎, 𝑘) = 1 − 𝑝𝐴 (𝑎, 𝑘).
The next corollary shows how to choose 𝑘 in order to obtain a desired success prob-
ability. It also gives a lower bound for 𝑘 that corresponds to a given success probability.
Corollary 1.3.17. Let 𝑎 ∈ Input(𝐴) with 𝑝𝐴 (𝑎) > 0 and let 𝜀 ∈ ℝ with 0 < 𝜀 ≤ 1.
(1) If 𝑘 ≥ log(1/𝜀)/𝑝𝐴 (𝑎), then 𝑝𝐴 (𝑎, 𝑘) ≥ 1 − 𝜀.
(2) If 𝑝𝐴 (𝑎, 𝑘) ≥ 1 − 𝜀, then 𝑘 ≥ log(1/𝜀)𝑞𝐴 (𝑎)/𝑝𝐴 (𝑎).
Exercise 1.3.18. Prove Corollary 1.3.17.
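Algorithm 1.3.14 does not appear in this excerpt; a minimal Python sketch of such a repetition amplifier, under the convention that an unsuccessful run of 𝐴 returns “Failure”:

```python
def repeat(A, a, k):
    # call the error-free Monte Carlo algorithm A up to k times;
    # the failure probability drops from q_A(a) to q_A(a)**k
    for _ in range(k):
        b = A(a)
        if b != "Failure":
            return b
    return "Failure"
```

By Corollary 1.3.17, choosing 𝑘 ≥ log(1/𝜀)/𝑝𝐴 (𝑎) makes the success probability of such a call at least 1 − 𝜀.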
Example 1.3.19. Consider 𝐴 = 𝗆𝖼𝖥𝖺𝖼𝗍𝗈𝗋 as specified in Algorithm 1.2.8. In Example 1.3.8, we have seen that $p_A(a) \ge 1/2^{m(a)} > 0$ for all 𝑎 ∈ ℤ>1 . So 𝗋𝖾𝗉𝖾𝖺𝗍𝐴 can be used to amplify this probability. For example, if we choose 𝜀 = 1/3 and $k \ge (\log 3)\, 2^{m(a)} \ge \log(1/\varepsilon)/p_A(a)$, then Corollary 1.3.17 implies $p_A(a,k) \ge 2/3$. Since m(𝑎) ≥ bitLength(𝑎)/2, this number 𝑘 of calls to 𝐴 is exponential in size 𝑎. Therefore, again, this algorithm does not give an asymptotic advantage over the deterministic Algorithm 1.1.21.
We can also amplify the success probability of decision algorithms with errors.
Consider a true-biased decision algorithm 𝐴 that decides a language 𝐿. We can mod-
ify this algorithm to make it an error-free Monte Carlo algorithm. For this, we set
Output(𝐴, 𝑠)⃗ = {1} for all 𝑠 ⃗ ∈ 𝐿, Output(𝐴, 𝑠)⃗ = ∅ for all 𝑠 ⃗ ∈ {0, 1}∗ ⧵ 𝐿 and we re-
place the return value 0 by “Failure”. So, the success probability of 𝐴 can be amplified
using Algorithm 1.3.14. Analogously, the success probability of false-biased decision
algorithms can be amplified.
Next, we consider a Monte Carlo decision algorithm 𝐴 with two-sided error that
decides a language 𝐿. Such an algorithm never gives certainty about whether an input
𝑠 ⃗ ∈ {0, 1}∗ belongs to 𝐿 or not. However, the probability of success can be increased
by using a majority vote. To do this, we run the algorithm 𝑘 times with input 𝑠 ⃗ for
some 𝑘 ∈ ℕ and count the number of positive responses 1 and the number of negative
answers 0 and return 1 or 0 depending on which answer has the majority. This is done
in Algorithm 1.3.20.
Algorithm 1.3.20. Success probability amplifier for a Monte Carlo decision algorithm
𝐴 with two-sided error
Input: 𝑠 ⃗ ∈ {0, 1}∗ , 𝑘 ∈ ℕ
Output: 1 if 𝑠 ⃗ ∈ 𝐿 and 0 if 𝑠 ⃗ ∈ {0, 1}∗ ⧵ 𝐿 where 𝐿 is the language decided by the Monte
Carlo decision algorithm 𝐴 that is used as a subroutine
1: 𝗆𝖺𝗃𝗈𝗋𝗂𝗍𝗒𝖵𝗈𝗍𝖾𝐴 (𝑠,⃗ 𝑘)
2: 𝑙 ← 0
3: for 𝑖 = 1 to 𝑘 do
4: 𝑙 ← 𝑙 + 𝐴(𝑠)⃗
5: end for
6: if 𝑙 > 𝑘/2 then
7: return 1
8: else
9: return 0
10: end if
11: end
We will show that under certain conditions, Algorithm 1.3.20 amplifies the success
probability of decision algorithms with two-sided error. For this, we need the following
definition.
Definition 1.3.21. Assume that a Monte Carlo algorithm 𝐴 decides a language 𝐿, let
𝑠 ⃗ ∈ 𝐿, and let 𝑏 ∈ {0, 1}. Then we write Pr(𝐴(𝑠)⃗ = 𝑏) for the probability that on input of 𝑠 ⃗ the algorithm 𝐴 returns 𝑏.
Proposition 1.3.22. Let 𝐴 be a Monte Carlo algorithm that decides a language 𝐿, let 𝑠 ⃗ ∈ 𝐿, and let 𝜀 ∈ ℝ>0 such that $\Pr(A(\vec{s}) = 1) \ge \frac{1}{2} + \varepsilon$. Then for all 𝑘 ∈ ℕ we have
$$\Pr(\mathsf{majorityVote}_A(\vec{s}, k) = 1) > 1 - e^{-2k\varepsilon^2}. \tag{1.3.23}$$
Exercise 1.3.24. Use the result in Example 1.4.7 to determine 𝑘 such that $\Pr(\mathsf{majorityVote}_A(\vec{s}, k) = 1) \ge 2/3$.
Example 1.4.3. By the square root problem we mean the triplet (ℕ, ℤ, 𝑅) where 𝑅 = {(𝑎, 𝑏) ∈ ℕ × ℤ ∶ 𝑏² = 𝑎, or 𝑏 = 0 and 𝑎 is not a square in ℕ}. An instance of the square root problem is 4. It has the two solutions −2 and 2. Another instance is 2. It has the solution 0, which indicates that 2 is not a square in ℕ. We can also define this problem differently by only allowing problem instances that are squares.
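A deterministic solver for this problem is easy to state; a minimal Python sketch using the integer square root (it returns the nonnegative solution for squares):

```python
import math

def square_root(a):
    # return b with b * b == a, or 0 if a is not a square in N
    b = math.isqrt(a)          # largest integer b with b * b <= a
    return b if b * b == a else 0

assert square_root(4) == 2     # the instance 4 also has the solution -2
assert square_root(2) == 0     # 0 indicates that 2 is not a square
```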
Example 1.4.9. As seen in Example 1.1.29, the gcd problem from Example 1.4.5 can be solved in deterministic time $O(n^3)$. As noted in this example, the gcd problem can even be solved in deterministic time $O(n^2)$ or $O(n \log^2 n)$ and linear space. Thus this problem can be solved in polynomial time or, more precisely, cubic, quadratic, or even quasilinear time.
Example 1.4.10. As seen in Example 1.1.31, the integer factorization problem can be
solved in deterministic exponential time and linear space.
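For instance, trial division, sketched below in Python (one such exponential-time method, not necessarily the algorithm referenced in Example 1.1.31), finds the smallest prime factor with about √𝑎 ≈ 2^(𝑛/2) divisions for an 𝑛-bit input 𝑎:

def smallest_proper_divisor(a):
    # Trial division: at most about sqrt(a) ~ 2^(n/2) iterations for an
    # n-bit input a, i.e. exponential time in the input size, linear space.
    d = 2
    while d * d <= a:
        if a % d == 0:
            return d
        d += 1
    return None   # no proper divisor <= sqrt(a): a is 1 or a prime

print(smallest_proper_divisor(91))   # 7, since 91 = 7 * 13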
1.4.3. Complexity classes. In this section, we define complexity classes, which
group languages that satisfy specific complexity conditions. The foundations of this
concept were laid in the early 1970s. Since then, numerous complexity classes have
been introduced, and their interrelationships have been studied extensively. Here, we
focus on the few complexity classes that are relevant in our context.
We begin with the definition of the most basic complexity classes.
Definition 1.4.14. Let 𝑓 ∶ ℕ → ℝ>0 be a function.
(1) The complexity class DTIME(𝑓) is the set of all languages 𝐿 for which there is a
deterministic algorithm that decides 𝐿 and has time complexity O(𝑓).
(2) The complexity class DSPACE(𝑓) is the set of all languages 𝐿 for which there is a
deterministic algorithm that decides 𝐿 and has space complexity O(𝑓).
Exercise 1.4.16. Consider the language 𝐿 of all strings that correspond to squares in
ℕ. Show that 𝐿 is in P.
Example 1.4.17. As shown in 2002 by Manindra Agrawal, Neeraj Kayal, and Nitin
Saxena [AKS04], the language 𝐿 of all bit strings that correspond to composite integers
is in P. Therefore, it can be decided in polynomial time whether a positive integer is
prime or composite. However, if the algorithm of Agrawal, Kayal, and Saxena finds
that a positive integer is composite, it does not give a proper divisor of this number.
Finding such a divisor appears to be a much harder problem (see Example 1.4.13).
There are many other languages 𝐿 that have a property analogous to that of the
Goldbach language presented in Example 1.4.19. Abstractly speaking, this property is
the following. For 𝑠⃗ ∈ {0, 1}∗ it may be hard to decide whether 𝑠⃗ ∈ 𝐿. But for each 𝑠⃗ ∈ 𝐿
there is a certificate 𝑡⃗ which allows us to verify in time polynomial in |𝑠⃗| that 𝑠⃗ ∈ 𝐿. For
the Goldbach language, the certificate is the prime number 𝑝 such that 𝑎 − 𝑝 is a prime
number. The set of all languages with this property is denoted by NP, which stands for
nondeterministic polynomial time. This name comes from another characterization of
NP, via nondeterministic computation, which we do not discuss here (see [LP98]).
Here is a formal definition of NP.
Definition 1.4.20. (1) The complexity class NP is the set of all languages 𝐿 with the
following properties.
(a) There is a deterministic polynomial time algorithm 𝐴 with Input(𝐴) = {0, 1}∗
× {0, 1}∗ such that 𝐴(𝑠⃗, 𝑡⃗) = 1 implies 𝑠⃗ ∈ 𝐿 for all 𝑠⃗, 𝑡⃗ ∈ {0, 1}∗ .
(b) There is 𝑐 ∈ ℕ that may depend on 𝐿 so that for all 𝑠⃗ ∈ 𝐿 there is 𝑡⃗ ∈ {0, 1}∗
with |𝑡⃗| ≤ |𝑠⃗|^𝑐 and 𝐴(𝑠⃗, 𝑡⃗) = 1.
If 𝑠⃗ ∈ 𝐿 and 𝑡⃗ ∈ {0, 1}∗ such that 𝐴(𝑠⃗, 𝑡⃗) = 1, then 𝑡⃗ is called a certificate for the
membership of 𝑠⃗ in 𝐿.
(2) The complexity class Co-NP is the set of all languages 𝐿 such that {0, 1}∗ ⧵ 𝐿 ∈ NP.
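As an illustration of Definition 1.4.20, consider the language of composite integers from Example 1.4.17: a certificate for compositeness is a proper divisor. Here is a Python sketch of a suitable verifier 𝐴; we use integers instead of bit strings for readability, which is our simplification:

def verify_composite(s, t):
    # Deterministic polynomial time verifier: accept iff t is a proper
    # divisor of s, which proves that s is composite. Since 1 < t < s,
    # the certificate satisfies |t| <= |s|, so condition (b) holds with c = 1.
    return 1 if 1 < t < s and s % t == 0 else 0

print(verify_composite(91, 7))   # 1: 91 = 7 * 13 is composite
print(verify_composite(97, 5))   # 0: 5 does not divide 97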
One of the big open research problems in computer science is finding out whether
P is equal to NP. It is one of the seven Millennium Prize Problems. They are well-
known mathematical problems that were selected by the Clay Mathematics Institute
in the year 2000. The Clay Institute has pledged a US$1 million prize for the correct
solution to any of the problems.
The complexity theory that we have explained so far only refers to solving language
decision problems but not to more general computational problems such as finding
proper divisors of composite integers. But, as illustrated in the next example, there is
a close connection between these two problem classes.