Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis, Second Edition
Michael Mitzenmacher and Eli Upfal
Second Edition
www.cambridge.org
Information on this title: www.cambridge.org/9781107154889
DOI: 10.1017/9781316651124
© Michael Mitzenmacher and Eli Upfal 2017
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2017
Printed in the United States of America by Sheridan Books, Inc.
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging in Publication Data
Names: Mitzenmacher, Michael, 1969– author. | Upfal, Eli, 1954– author.
Title: Probability and computing / Michael Mitzenmacher, Eli Upfal.
Description: Second edition. | Cambridge, United Kingdom ;
New York, NY, USA : Cambridge University Press, [2017] |
Includes bibliographical references and index.
Identifiers: LCCN 2016041654 | ISBN 9781107154889
Subjects: LCSH: Algorithms. | Probabilities. | Stochastic analysis.
Classification: LCC QA274.M574 2017 | DDC 518/.1 – dc23
LC record available at https://lccn.loc.gov/2016041654
ISBN 978-1-107-15488-9 Hardback
Additional resources for this publication at www.cambridge.org/Mitzenmacher.
Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party Internet Web sites referred to in this publication
and does not guarantee that any content on such Web sites is, or will remain,
accurate or appropriate.
Contents
13 Martingales
13.1 Martingales
13.2 Stopping Times
13.2.1 Example: A Ballot Theorem
13.3 Wald’s Equation
13.4 Tail Inequalities for Martingales
13.5 Applications of the Azuma–Hoeffding Inequality
13.5.1 General Formalization
13.5.2 Application: Pattern Matching
13.5.3 Application: Balls and Bins
13.5.4 Application: Chromatic Number
13.6 Exercises
Preface to the Second Edition
In the ten years since the publication of the first edition of this book, probabilistic
methods have become even more central to computer science, rising with the growing
importance of massive data analysis, machine learning, and data mining. Many of the
successful applications of these areas rely on algorithms and heuristics that build on
sophisticated probabilistic and statistical insights. Judicious use of these tools requires
a thorough understanding of the underlying mathematical concepts. Most of the new
material in this second edition focuses on these concepts.
The ability in recent years to create, collect, and store massive data sets, such as
the World Wide Web, social networks, and genome data, has led to new challenges in
modeling and analyzing such structures. A good foundation for models and analysis
comes from understanding some standard distributions. Our new chapter on the nor-
mal distribution (also known as the Gaussian distribution) covers the most common
statistical distribution, as usual with an emphasis on how it is used in settings in com-
puter science, such as for tail bounds. However, an interesting phenomenon is that in
many modern data sets, including social networks and the World Wide Web, we do not
see normal distributions, but instead we see distributions with very different proper-
ties, most notably unusually heavy tails. For example, some pages in the World Wide
Web have an unusually large number of pages that link to them, orders of magnitude
larger than the average. The new chapter on power laws and related distributions covers
specific distributions that are important for modeling and understanding these kinds of
modern data sets.
Machine learning is one of the great successes of computer science in recent years,
providing efficient tools for modeling, understanding, and making predictions based on
large data sets. A question that is often overlooked in practical applications of machine
learning is the accuracy of the predictions, and in particular the relation between accu-
racy and the sample size. A rigorous introduction to approaches to these important
questions is presented in a new chapter on sample complexity, VC dimension, and
Rademacher averages.
We have also used the new edition to enhance some of our previous material. For
example, we present some of the recent advances on algorithmic variations of the pow-
erful Lovász local lemma, and we have a new section covering the wonderfully named
and increasingly useful hashing approach known as cuckoo hashing. Finally, in addi-
tion to all of this new material, the new edition includes updates and corrections, and
many new exercises.
We thank the many readers who sent us corrections over the years – unfortunately,
too many to list here!
Preface to the First Edition
Why Randomness?
Why should computer scientists study and use randomness? Computers appear to
behave far too unpredictably as it is! Adding randomness would seemingly be a dis-
advantage, adding further complications to the already challenging task of efficiently
utilizing computers.
Science has learned in the last century to accept randomness as an essential com-
ponent in modeling and analyzing nature. In physics, for example, Newton’s laws led
people to believe that the universe was a deterministic place; given a big enough calcu-
lator and the appropriate initial conditions, one could determine the location of planets
years from now. The development of quantum theory suggests a rather different view;
the universe still behaves according to laws, but the backbone of these laws is proba-
bilistic. “God does not play dice with the universe” was Einstein’s anecdotal objection
to modern quantum mechanics. Nevertheless, the prevailing theory today for subparti-
cle physics is based on random behavior and statistical laws, and randomness plays a
significant role in almost every other field of science ranging from genetics and evolu-
tion in biology to modeling price fluctuations in a free-market economy.
Computer science is no exception. From the highly theoretical notion of probabilis-
tic theorem proving to the very practical design of PC Ethernet cards, randomness
and probabilistic methods play a key role in modern computer science. The last two
decades have witnessed a tremendous growth in the use of probability theory in comput-
ing. Increasingly more advanced and sophisticated probabilistic techniques have been
developed for use within broader and more challenging computer science applications.
In this book, we study the fundamental ways in which randomness comes to bear on
computer science: randomized algorithms and the probabilistic analysis of algorithms.
Randomized algorithms: Randomized algorithms are algorithms that make random
choices during their execution. In practice, a randomized program would use values
generated by a random number generator to decide the next step at several branches
of its execution. For example, the protocol implemented in an Ethernet card uses ran-
dom numbers to decide when it next tries to access the shared Ethernet communication
medium. The randomness is useful for breaking symmetry, preventing different cards
from repeatedly accessing the medium at the same time. Other commonly used applica-
tions of randomized algorithms include Monte Carlo simulations and primality testing
in cryptography. In these and many other important applications, randomized algo-
rithms are significantly more efficient than the best known deterministic solutions.
Furthermore, in most cases the randomized algorithms are also simpler and easier to
program.
These gains come at a price; the answer may have some probability of being incor-
rect, or the efficiency is guaranteed only with some probability. Although it may seem
unusual to design an algorithm that may be incorrect, if the probability of error is suf-
ficiently small then the improvement in speed or memory requirements may well be
worthwhile.
Probabilistic analysis of algorithms: Complexity theory tries to classify computa-
tion problems according to their computational complexity, in particular distinguishing
between easy and hard problems. For example, complexity theory shows that the Trav-
eling Salesman problem is NP-hard. It is therefore very unlikely that we will ever know
an algorithm that can solve any instance of the Traveling Salesman problem in time that
is subexponential in the number of cities. An embarrassing phenomenon for the clas-
sical worst-case complexity theory is that the problems it classifies as hard to compute
are often easy to solve in practice. Probabilistic analysis gives a theoretical explanation
for this phenomenon. Although these problems may be hard to solve on some set of
pathological inputs, on most inputs (in particular, those that occur in real-life applica-
tions) the problem is actually easy to solve. More precisely, if we think of the input as
being randomly selected according to some probability distribution on the collection of
all possible inputs, we are very likely to obtain a problem instance that is easy to solve,
and instances that are hard to solve appear with relatively small probability. Probabilis-
tic analysis of algorithms is the method of studying how algorithms perform when the
input is taken from a well-defined probabilistic space. As we will see, even NP-hard
problems might have algorithms that are extremely efficient on almost all inputs.
The Book
with continuous probability and the Poisson process (Chapter 8). The material from
Chapter 4 on Chernoff bounds, however, is needed for most of the remaining material.
Most of the exercises in the book are theoretical, but we have included some pro-
gramming exercises – including two more extensive exploratory assignments that
require some programming. We have found that occasional programming exercises are
often helpful in reinforcing the book’s ideas and in adding some variety to the course.
We have decided to restrict the material in this book to methods and techniques based
on rigorous mathematical analysis; with few exceptions, all claims in this book are fol-
lowed by full proofs. Obviously, many extremely useful probabilistic methods do not
fall within this strict category. For example, in the important area of Monte Carlo meth-
ods, most practical solutions are heuristics that have been demonstrated to be effective
and efficient by experimental evaluation rather than by rigorous mathematical analy-
sis. We have taken the view that, in order to best apply and understand the strengths
and weaknesses of heuristic methods, a firm grasp of underlying probability theory and
rigorous techniques – as we present in this book – is necessary. We hope that students
will appreciate this point of view by the end of the course.
Acknowledgments
Our first thanks go to the many probabilists and computer scientists who developed
the beautiful material covered in this book. We chose not to overload the textbook
with numerous references to the original papers. Instead, we provide a reference list
that includes a number of excellent books giving background material as well as more
advanced discussion of the topics covered here.
The book owes a great deal to the comments and feedback of students and teaching
assistants who took the courses CS 155 at Brown and CS 223 at Harvard. In particular
we wish to thank Aris Anagnostopoulos, Eden Hochbaum, Rob Hunter, and Adam
Kirsch, all of whom read and commented on early drafts of the book.
Special thanks to Dick Karp, who used a draft of the book in teaching CS 174 at
Berkeley during fall 2003. His early comments and corrections were most valuable in
improving the manuscript. Peter Bartlett taught CS 174 at Berkeley in spring 2004, also
providing many corrections and useful comments.
We thank our colleagues who carefully read parts of the manuscript, pointed out
many errors, and suggested important improvements in content and presentation: Artur
Czumaj, Alan Frieze, Claire Kenyon, Joe Marks, Salil Vadhan, Eric Vigoda, and the
anonymous reviewers who read the manuscript for the publisher.
We also thank Rajeev Motwani and Prabhakar Raghavan for allowing us to use some
of the exercises in their excellent book Randomized Algorithms.
We are grateful to Lauren Cowles of Cambridge University Press for her editorial
help and advice in preparing and organizing the manuscript.
Writing of this book was supported in part by NSF ITR Grant no. CCR-0121154.
chapter one
Events and Probability
This chapter introduces the notion of randomized algorithms and reviews some basic
concepts of probability theory in the context of analyzing the performance of simple
randomized algorithms for verifying algebraic identities and finding a minimum cut-set
in a graph.
Computers can sometimes make mistakes, due for example to incorrect programming
or hardware failure. It would be useful to have simple ways to double-check the results
of computations. For some problems, we can use randomness to efficiently verify the
correctness of an output.
Suppose we have a program that multiplies together monomials. Consider the prob-
lem of verifying the following identity, which might be output by our program:
(x + 1)(x − 2)(x + 3)(x − 4)(x + 5)(x − 6) ≟ x⁶ − 7x³ + 25.
There is an easy way to verify whether the identity is correct: multiply together the
terms on the left-hand side and see if the resulting polynomial matches the right-hand
side. In this example, when we multiply all the constant terms on the left, the result
does not match the constant term on the right, so the identity cannot be valid. More
generally, given two polynomials F (x) and G(x), we can verify the identity
F(x) ≟ G(x)
by converting the two polynomials to their canonical forms ∑ᵢ₌₀ᵈ cᵢ xⁱ; two polynomials
are equivalent if and only if all the coefficients in their canonical forms are equal.
From this point on let us assume that, as in our example, F(x) is given as a product
F(x) = ∏ᵢ₌₁ᵈ (x − aᵢ) and G(x) is given in its canonical form. Transforming F(x) to
its canonical form by consecutively multiplying the ith monomial with the product of
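The randomized test analyzed below — pick r uniformly at random from {1, . . . , 100d} and compare F(r) with G(r) — can be sketched in a few lines. In this Python sketch, F is supplied as its list of roots a₁, . . . , a_d and G as its coefficient list c₀, . . . , c_d; the function names are illustrative, not from the text.

```python
import random

def eval_product(roots, x):
    """Evaluate F(x) = (x - a_1)(x - a_2)...(x - a_d) without expanding it."""
    result = 1
    for a in roots:
        result *= x - a
    return result

def eval_canonical(coeffs, x):
    """Evaluate G(x) = c_0 + c_1 x + ... + c_d x^d by Horner's rule."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def verify_identity(roots, coeffs):
    """One trial of the randomized test: pick r uniformly from {1, ..., 100d}
    and compare F(r) with G(r).  When F and G are not equivalent, F - G is a
    nonzero polynomial of degree at most d, so it has at most d roots, and the
    trial wrongly reports "equivalent" with probability at most d/(100d) = 1/100."""
    d = len(roots)
    r = random.randint(1, 100 * d)
    return eval_product(roots, r) == eval_canonical(coeffs, r)
```

For the example identity above, verify_identity([-1, 2, -3, 4, -5, 6], [25, 0, 0, -7, 0, 0, 1]) reports the mismatch with probability at least 99/100 on a single trial.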
We turn now to a formal mathematical setting for analyzing the randomized algorithm.
Any probabilistic statement must refer to the underlying probability space.
Definition 1.1: A probability space has three components:
1. a sample space Ω, which is the set of all possible outcomes of the random process
modeled by the probability space;
2. a family of sets F representing the allowable events, where each set in F is a subset¹
of the sample space Ω; and
3. a probability function Pr : F → R satisfying Definition 1.2.
An element of Ω is called a simple or elementary event.
In the randomized algorithm for verifying polynomial identities, the sample space
is the set of integers {1, . . . , 100d}. Each choice of an integer r in this range is a simple
event.
Definition 1.2: A probability function is any function Pr : F → R that satisfies the
following conditions:
1. for any event E, 0 ≤ Pr(E ) ≤ 1;
2. Pr(Ω) = 1; and
3. for any finite or countably infinite sequence of pairwise mutually disjoint events
E1 , E2 , E3 , . . . ,
Pr(⋃ᵢ≥₁ Eᵢ) = ∑ᵢ≥₁ Pr(Eᵢ).
In most of this book we will use discrete probability spaces. In a discrete probability
space the sample space Ω is finite or countably infinite, and the family F of allowable
events consists of all subsets of Ω. In a discrete probability space, the probability
function is uniquely defined by the probabilities of the simple events.
Again, in the randomized algorithm for verifying polynomial identities, each choice
of an integer r is a simple event. Since the algorithm chooses the integer uniformly at
random, all simple events have equal probability. The sample space has 100d simple
events, and the sum of the probabilities of all simple events must be 1. Therefore each
simple event has probability 1/100d.
Because events are sets, we use standard set theory notation to express combinations
of events. We write E1 ∩ E2 for the occurrence of both E1 and E2 and write E1 ∪ E2 for
the occurrence of either E1 or E2 (or both). For example, suppose we roll two dice. If
E1 is the event that the first die is a 1 and E2 is the event that the second die is a 1, then
E1 ∩ E2 denotes the event that both dice are 1 while E1 ∪ E2 denotes the event that at
least one of the two dice lands on 1. Similarly, we write E1 − E2 for the occurrence
¹ In a discrete probability space F = 2^Ω. Otherwise (and introductory readers may skip this point), since the events
need to be measurable, F must include the empty set and be closed under complement and under union and
intersection of countably many sets (a σ-algebra).
of an event that is in E1 but not in E2 . With the same dice example, E1 − E2 consists
of the event where the first die is a 1 and the second die is not. We use the notation Ē
as shorthand for Ω − E; for example, if E is the event that we obtain an even number
when rolling a die, then Ē is the event that we obtain an odd number.
Definition 1.2 yields the following obvious lemma.

Lemma 1.1: For any two events E1 and E2,
Pr(E1 ∪ E2) = Pr(E1) + Pr(E2) − Pr(E1 ∩ E2).

Lemma 1.2 (Union Bound): For any finite or countably infinite sequence of events
E1, E2, . . . ,
Pr(⋃ᵢ≥₁ Eᵢ) ≤ ∑ᵢ≥₁ Pr(Eᵢ).

Notice that Lemma 1.2 differs from the third part of Definition 1.2 in that Definition
1.2 is an equality and requires the events to be pairwise mutually disjoint.
Lemma 1.1 can be generalized to the following equality, often referred to as the
inclusion–exclusion principle.

Lemma 1.3 (Inclusion–Exclusion): Let E1, . . . , En be any n events. Then
Pr(⋃ᵢ₌₁ⁿ Eᵢ) = ∑ᵢ Pr(Eᵢ) − ∑ᵢ<ⱼ Pr(Eᵢ ∩ Eⱼ) + ∑ᵢ<ⱼ<ₖ Pr(Eᵢ ∩ Eⱼ ∩ Eₖ) − · · · .
If k = 2, it seems that the probability that the first iteration finds a root is at most 1/100
and the probability that the second iteration finds a root is at most 1/100, so the prob-
ability that both iterations find a root is at most (1/100)². Generalizing, for any k, the
probability of choosing roots for k iterations would be at most (1/100)ᵏ.
To formalize this, we introduce the notion of independence.
Definition 1.3: Two events E and F are independent if and only if
Pr(E ∩ F ) = Pr(E ) · Pr(F ).
More generally, events E1 , E2 , . . . , Ek are mutually independent if and only if, for any
subset I ⊆ [1, k],
Pr(⋂ᵢ∈I Eᵢ) = ∏ᵢ∈I Pr(Eᵢ).
If our algorithm samples with replacement then in each iteration the algorithm chooses
a random number uniformly at random from the set {1, . . . , 100d}, and thus the choice
in one iteration is independent of the choices in previous iterations. For the case where
the polynomials are not equivalent, let Ei be the event that, on the ith run of the algo-
rithm, we choose a root ri such that F (ri ) − G(ri ) = 0. The probability that the algo-
rithm returns the wrong answer is given by
Pr(E1 ∩ E2 ∩ · · · ∩ Ek ).
Since Pr(Ei ) is at most d/100d and since the events E1 , E2 , . . . , Ek are independent,
the probability that the algorithm gives the wrong answer after k iterations is
Pr(E1 ∩ E2 ∩ · · · ∩ Ek) = ∏ᵢ₌₁ᵏ Pr(Eᵢ) ≤ (d/100d)ᵏ = (1/100)ᵏ.
The probability of making an error is therefore at most exponentially small in the num-
ber of trials.
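The with-replacement analysis above can be sketched directly: run k independent trials and report "equivalent" only if every trial agrees. For illustration, this sketch assumes F and G are supplied as evaluable functions (an interface chosen here, not given in the text).

```python
import random

def iterated_test(F, G, d, k):
    """Run k independent trials, sampling with replacement from {1, ..., 100d}.
    Report True ("equivalent") only if every trial agrees.  When F and G are
    not equivalent, each trial independently errs with probability at most
    1/100, so the overall error probability is at most (1/100)**k."""
    for _ in range(k):
        r = random.randint(1, 100 * d)
        if F(r) != G(r):
            return False  # found a witness r with F(r) - G(r) != 0
    return True
```

For example, with F(x) = (x − 1)(x − 2) and G(x) = x² − 3x + 2 (a true identity, d = 2), iterated_test returns True on every run.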
Now let us consider the case where sampling is done without replacement. In this
case the probability of choosing a given number is conditioned on the events of the
previous iterations.
Definition 1.4: The conditional probability that event E occurs given that event F
occurs is
Pr(E | F) = Pr(E ∩ F) / Pr(F).
The conditional probability is well-defined only if Pr(F ) > 0.
Intuitively, we are looking for the probability of E ∩ F within the set of events defined
by F. Because F defines our restricted sample space, we normalize the probabilities
by dividing by Pr(F ), so that the sum of the probabilities of all events is 1. When
Pr(F ) > 0, the definition can also be written in the useful form
Pr(E | F ) Pr(F ) = Pr(E ∩ F ).
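Definition 1.4 can be checked by direct enumeration on the two-dice example introduced earlier. This small Python sketch (helper names are illustrative) computes exact probabilities over the 36 equally likely outcomes:

```python
from fractions import Fraction
from itertools import product

# The 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event, given as a predicate on outcomes."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def cond_prob(e, f):
    """Pr(E | F) = Pr(E and F) / Pr(F); well-defined only when Pr(F) > 0."""
    pf = prob(f)
    assert pf > 0, "conditioning event must have positive probability"
    return prob(lambda o: e(o) and f(o)) / pf

E1 = lambda o: o[0] == 1  # first die shows 1
E2 = lambda o: o[1] == 1  # second die shows 1
```

Here cond_prob(E2, E1) evaluates to 1/6 = Pr(E2), since conditioning on the first die does not change the distribution of the second, while prob(lambda o: E1(o) or E2(o)) gives 11/36 for the union.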
Because (d − (j − 1))/(100d − (j − 1)) < d/100d when j > 1, our bounds on the
probability of making an error are actually slightly better without replacement. You
may also notice that, if we take d + 1 samples without replacement and the two poly-
nomials are not equivalent, then we are guaranteed to find an r such that F(r) − G(r) ≠ 0.
Thus, in d + 1 iterations we are guaranteed to output the correct answer. However,
computing the value of the polynomial at d + 1 points takes Θ(d²) time using the stan-
dard approach, which is no faster than finding the canonical form deterministically.
Since sampling without replacement appears to give better bounds on the probability
of error, why would we ever want to consider sampling with replacement? In some
cases, sampling with replacement is significantly easier to analyze, so it may be worth