David Johnston
Random Number Generators—Principles and Practices
A Guide for Engineers and Programmers

ISBN 978-1-5015-1513-2
e-ISBN (PDF) 978-1-5015-0606-2
e-ISBN (EPUB) 978-1-5015-0626-0

www.degruyter.com
This book is dedicated to the memory of George Cox, without whom my work in ran-
dom number generators would never have started and who was a tenacious engineer-
ing partner.
Thank you to my wife, Tina, for putting up with me disappearing every weekend for
two years to write this book. Thanks to Charles Dike, Rachael Parker, James Shen, Ammon
Christiansen, Jesse Walker, Jian Zhong Wang, Kok Ching Eng, Beng Koi Lim, Ping
Juin Tan, and all the other engineers who worked with me to make higher performance
random number generators a reality in Intel products; to Nichole Schimanski
for answering many of my dumb mathematics questions; and to the many academics who
inspired me and provided important insights and findings, including Yevgeniy Dodis,
Ingrid Verbauwhede, Vladimir Rožić, Bart Massey, Hugo Krawczyk, Boaz Barak, Russell
Impagliazzo, and Avi Wigderson.
About De|G PRESS
Five Stars as a Rule
De|G PRESS, the startup born out of one of the world’s most venerable publishers,
De Gruyter, promises to bring you an unbiased, valuable, and meticulously edited
work on important topics in the fields of business, information technology, comput-
ing, engineering, and mathematics. By selecting the finest authors to present, without
bias, information necessary for their chosen topic for professionals, in the depth you
would hope for, we wish to satisfy your needs and earn our five-star ranking.
In keeping with these principles, the books you read from De|G PRESS will be
practical, efficient and, if we have done our job right, yield many returns on their price.
We invite businesses to order our books in bulk in print or electronic form as a
best solution to meeting the learning needs of your organization, or parts of your or-
ganization, in a most cost-effective manner.
There is no better way to learn about a subject in depth than from a book that is
efficient, clear, well organized, and information rich. A great book can provide life-
changing knowledge. We hope that with De|G PRESS books you will find that to be the
case.
https://doi.org/10.1515/9781501506062-201
Contents
About De|G PRESS | VII
Preface | XVII
1 Introduction | 1
1.1 Classes of Random Number Generators | 3
1.2 Naming RNGs | 5
1.3 Disambiguating RNG Types | 6
1.4 Nonuniform RNGs | 7
1.5 Noncryptographically Secure PRNG Algorithms | 8
1.6 Cryptographically Secure PRNG Algorithms | 9
1.6.1 Example of CSPRNG: The SP800-90A CTR DRBG | 11
1.6.2 Attacking CSPRNGs | 12
1.7 Controlled Defect RNGs | 14
1.8 Noise Source Circuits | 15
1.9 TRNG Extractor/Conditioning Algorithms | 16
1.10 The Structure of Secure RNG Implementations | 17
1.10.1 Point A, the Raw Data | 17
1.10.2 Points D and E, the Health Status Feedback | 18
1.10.3 Point B, the Seed Quality Data | 18
1.10.4 Point C, the PRNG Output | 18
1.11 Pool Extractor Structures | 19
1.12 What Key to Use? | 20
1.13 Multiple Input Extractors | 21
2 Entropy Sources | 23
2.1 Ring Oscillator Entropy Sources | 23
2.1.1 Ring Oscillator Entropy Source Problems | 25
2.2 Metastable Phase Collapse Ring Oscillator Entropy Sources | 26
2.3 Fast Digital TRNG Based on Metastable Ring Oscillator | 30
2.4 Feedback Stabilized Metastable Latch Entropy Source | 31
2.5 Infinite Noise Multiplier Entropy Source | 35
2.6 Diode Breakdown Noise Entropy Source | 39
3 Entropy Extraction | 41
3.1 The Simplest Extractor, the XOR Gate | 42
3.2 A Simple Way of Improving the Distribution of Random Numbers that
Have Known Missing Values Using XOR | 46
3.2.1 Is This Efficient? | 51
3.2.2 Why This Might Matter: Two Real-World Examples with Very
Different Results | 51
3.3 Debiasing Algorithms | 57
3.4 Von Neumann Debiaser Algorithm | 57
3.5 Yuval Peres Debiaser Algorithm | 63
3.6 Blum’s Method Debiaser Algorithm | 66
3.7 Cryptographic Entropy Extractors | 72
3.8 Pinkas Proof, or Why We Cannot Have Nice Things | 72
3.9 Seeded Entropy Extractors | 73
3.9.1 CBC-MAC Entropy Extractor | 74
3.9.2 CMAC Entropy Extractor | 77
3.9.3 HMAC Entropy Extractor | 79
3.10 Multiple Input Entropy Extractors | 81
3.10.1 Barak, Impagliazzo, Wigderson 3 Input Extractor | 82
3.10.2 2-EXT 2-Input Extractor | 88
Bibliography | 421
Index | 423
Preface
Many books and most academic papers on random numbers turn out to be either
highly mathematical and difficult to follow, or the opposite, offering little useful engi-
neering insight. In contrast, this book is aimed at the practicing programmer or hard-
ware engineer. The use of mathematics is kept to a level appropriate for the topic,
with an emphasis on working examples and code that runs and can be used and experimented
with. The reader will benefit from being able to program a computer, from having
the sort of mathematics background common in undergraduate engineering courses,
and, preferably, from an interest in random numbers.
Random Number Generators have many uses, from modeling stochastic systems,
to randomizing games, to picking lottery winners, to playing board games (using dice),
to randomizing cryptographic protocols for security.
The most critical application is in cryptographic security, where random numbers
are an essential component of every cryptographic application. There can be no cryp-
tographic security without secure, unpredictable random numbers.
While random numbers may appear to be a trivial and simple topic, it turns out
that there are many counterintuitive concepts and many subdisciplines, including
random number testing, random number generation, entropy extraction, public key
generation, and simulation.
Unfortunately, in cryptography, random number generation has proven difficult
to get right and there are many examples of cryptographic systems undermined by
poor quality random number generators.
A number of programs have been written to accompany this book. They are mostly
written in Python 2 and C. Electronic copies of this code are available through Github
(https://github.com/dj-on-github/RNGBook_code). In addition a number of exter-
nally available software tools are used. Appendix D provides pointers to all the other
software used in this book and a reference to relate listings to their location in the
book.
https://doi.org/10.1515/9781501506062-202
1 Introduction
My first professional encounter with random numbers happened while implementing
the 802.11 WEP (Wired Equivalent Privacy) protocol in a WiFi chip. This required that
random numbers be used in the protocol and the approach I took was to take noisy
data from the wireless receive hardware and pass it through an iterated SHA-1 hash
algorithm. Once a sufficient amount of noisy data had been fed in, the output of the
hash was taken to be a random number.
Given that at the time I was largely ignorant of the theory behind random number
generation in general and entropy extraction in particular, the solution was not ter-
rible, but with hindsight of a decade of working on random number generators there
are some things I would have done differently.
After attending the IEEE 802.11i working group to work on the replacement
protocol to WEP (one of the security protocol standards famously back-doored by
the NSA, thereby introducing bad cryptography into standards) and later working on the
802.16 PKMv2 security protocol, I found that the need for random numbers in security protocols,
and the problems of specifying and implementing them, diverted my career
into a multiyear program to address how to build practical random number generators
for security protocols: generators that can be built in chips, tested, mass manufactured, and
remain reliable and secure while being available to software in many contexts. Ulti-
mately this emerged as the RdRand instruction and later the RdSeed instruction in
Intel CPUs. A decade later, the requirements are still evolving as new needs for ran-
dom number generators that can operate in new contexts emerge.
My initial model for this book was for it to be the book I needed back when first
implementing a hardware random number generator for the 802.11 WEP protocol.
I would have benefited greatly from a book with clear information on the right kinds
of design choices, the right algorithms, the tradeoffs, the testing methods, and enough
of the theory to understand what makes those things the correct engineering choices.
The scope of the book grew during its writing, to cover nonuniform RNGs, a much
expanded look at various types of random number testing, software interfaces for RNGs,
and a mix of software tools and methods that have been developed along the way.
Since the early 2000s, the requirements for random number generators have
evolved greatly. Examples include the need for security against side channel and fault
injection attacks, security against quantum computer attacks, the need for smaller, lower
power random number generators in resource-constrained electronics, and the need
for highly efficient RNGs and floating point hardware RNGs for massively parallel
chips.
Random numbers are used in many other fields outside of cryptography and the
requirements and tradeoffs tend to be different. This book explores various noncryp-
tographic random number applications and related algorithms. We find examples of
https://doi.org/10.1515/9781501506062-001
C53D397A527761411C86BA48B7C1524B0C60AD859B172DB0BAE46B712936F08A
DA7E75F686516070C7AD725ABCE2746E3ADF41C36D7A76CB8DB8DA7ECDD2371F
D6CA8866C5F9632B3EDBCC38E9A40D4AE94437750F2E1151762C4793107F5327
D206D66D8DF11D0E660CB42FE61EC3C90387E57D11568B9834F569046F6CEDD0
The first is generated from a random number generator in an Intel CPU that has
a metastable entropy source, followed by an AES-CBC-MAC entropy extractor and an
SP800-90A and 90C XOR construction NRBG, with an AES-CTR-DRBG. This RNG uses
algorithms that are mathematically proven to be secure in that it is algorithmically
hard to infer the internal state from the output and impossible to predict past and
future values in the output when that state is known.
The second is generated with a PCG (Permuted Congruential Generator) RNG.
PCGs are deterministic RNGs that have excellent statistical properties, but there is no
proof that it is hard to infer the internal state of the algorithm from the output, so they are
not considered secure.
The third is generated with an LCG (Linear Congruential Generator) RNG. LCGs are
simple deterministic RNG algorithms for which there are a number of statistical flaws.
It is trivial to infer the internal state of the algorithm and so to predict future outputs.
So, it is proven to be insecure.
Later chapters on NRBGs, DRBGs, entropy extractors, uniform noncryptographic
random number generators and test methods will explore the differences between
these different types of random number generator and how to test for their proper-
ties.
Physical noise sources are never perfectly random. The common term for per-
fectly random is “IID”, meaning Independent and Identically Distributed. IID data is
allowed to be biased, but there must be no correlation between values in an IID
sequence, and the data should also be statistically stationary, meaning the bias or any
other statistic cannot change with time.
There is always some bias, correlation, and nonstationarity even if it is too small to
be detected; so it is common and often necessary for a TRNG to have some form of post
processing to improve the random qualities of the data. We look at this in Chapter 3,
Entropy Extraction.
It is common, but not always true, that sources of physical noise into a computer
tend to be slow, while PRNGs tend to be fast. So it is common in real systems to feed the
output of a slow TRNG into a fast CS-PRNG to form a hybrid system that is cryptograph-
ically useful, nondeterministic, and fast. It is also appropriate to first pass the entropy
gathered from the noise source into an extractor before the CS-PRNG. The data from
the noise source is partially random and nondeterministic. The data from the entropy
extractor will be slower than from the noise source, but should, if in a correctly de-
signed RNG, be close to perfectly uniform, unpredictable, and nondeterministic. Once
the data from the entropy extractor has seeded a CS-PRNG, the output of the CS-PRNG
should be statistically uniform, cryptographically hard to predict, and faster.
So, as we pass through the chain, the properties of the random numbers can be la-
belled as “Bad”, “Close to perfect and ideal for cryptography” and then “Good enough
for cryptography”, whereas the performance can go from “Slow” to “Slower” to “Fast”.
See Figure 1.1.
might elicit the response from me: “Do you mean a nondeterministically seeded PRNG
with well-defined computational prediction bounds or do you mean a full entropy
source comprising noise source with an extractor?” Put more clearly, this is asking
(a) Is there an entropy source producing unpredictable bits? (b) Is there an entropy
extractor turning those unpredictable bits into fully unpredictable bits, each with a
50% probability of being 1 and each bit being independent of the others? (c) Is there
a secure PRNG algorithm that prevents an observer of the output bits being able to
predict future values from the PRNG or infer past values from the PRNG?
These are the common features of a secure PRNG and so form one possible defi-
nition of a TRNG (True Random Number Generator), while the TRNG term is used very
loosely in practice.
The different RNG types are identified based on the properties of the construction
of the RNG. The essential major components of secure RNGs are the entropy source (or
noise source), the entropy extractor (or conditioner or entropy distiller), and option-
ally a PRNG (or deterministic random bit generator).
Insecure random number generators are common outside the field of cryptogra-
phy for many purposes, including simulation, system modeling, computer graphics,
and many other things. These can be uniform or be designed to follow some nonuni-
form distribution such as a Gaussian distribution. The figures of merit for these types
of generator tend to be based on speed performance or closeness to some statistical
model. Figure 1.2 gives a sequence of questions that divides the various common RNG
types.
The NIST SP800-90A, B, and C specifications do not concern themselves with in-
secure RNGs, and so do not specify names for insecure RNGs, although the names
they do specify tend to differ from common use. For example, SP800-90A calls a
PRNG (Pseudo-Random Number Generator) a DRBG (Deterministic Random Bit Generator),
calls a TRNG (True Random Number Generator) an NRBG (Nondeterministic Random
Bit Generator), and calls an entropy source an SEI (Source of Entropy Input).
Generally, PRNGs do not have specific terms to separate secure PRNGs from inse-
cure PRNGs. Similarly, there is no commonly used term to distinguish a simple noise
source from a noise source with post processing to increase the per-bit entropy. In
common terminology, both might be called a TRNG, whereas NIST SP800-90B and C
do distinguish between an NRBG (Nondeterministic Random Bit Generator) and an
SEI (Source of Entropy Input). In the NIST arrangement, an SEI is a component of an
NRBG.
binomial distribution, gamma distribution, or one of the many other distributions de-
fined in mathematics.
These typically operate by starting with a uniform distribution and then applying
a transform to generate the required distribution from the uniform distribution.
We look at some nonuniform RNGs in Chapter 7 on Gaussian RNGs.
But they require repeatability, so each run has a seed number and the seed number
can be used to cause the PRNG to produce the same output and so the same simulation
run. Nonsecure PRNG algorithms tend to be simpler and faster than cryptographically
secure PRNGs, since they do not make use of the extensive cycles of linear and non-
linear transformations needed in a cryptographically secure algorithm. Efficiency is
often a goal of the design of such algorithms.
Some examples of commonly used insecure PRNG algorithms are:
1. Linear Congruential Generators (LCGs): A simple family of generators that computes Xn+1 = (aXn + c) mod m.
2. XorShift: A simple generator with good statistical properties, based on XORs and shifts of its state variables.
3. Permuted Congruential Generators (PCGs): A family of generators with excellent statistical properties, based on adding an XorShift-based permutation output stage to the LCG loop (a minimal sketch of an LCG and a PCG follows this list).
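As a hedged sketch (not a reviewed implementation), the following Python code shows a 64-bit LCG and a PCG-style generator that adds an XorShift-and-rotate output permutation on top of the same LCG state update; the constants follow the commonly published pcg32 reference values but are used here purely for illustration.

MASK64 = (1 << 64) - 1

class LCG64:
    # X_{n+1} = (a*X_n + c) mod 2^64, outputting the top 32 bits of the state.
    A = 6364136223846793005
    C = 1442695040888963407

    def __init__(self, seed):
        self.state = seed & MASK64

    def next32(self):
        self.state = (self.A * self.state + self.C) & MASK64
        return self.state >> 32

class PCG32(LCG64):
    # Same state update as the LCG, plus a permuted (xorshift + rotate) output.
    def next32(self):
        old = self.state
        self.state = (self.A * old + self.C) & MASK64
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

lcg, pcg = LCG64(42), PCG32(42)
print([hex(lcg.next32()) for _ in range(4)])
print([hex(pcg.next32()) for _ in range(4)])

The LCG output exposes its state almost directly, while the PCG output permutation obscures it; neither, however, carries any proof of computational hardness.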
Generally, we think of a PRNG as having an internal “state”, that is, a number of bits. The
state of those bits determines the next output value, and each time a value is output,
the internal state is changed so that the next number appears random with respect to
all previous and future values.
The mechanism for determining the output from the state and the mechanism for
updating the state can be distilled into a next-state function fns (si ) and a state output
function fout (si ). These are arranged as shown in Figure 1.3.
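As a hedged Python sketch of this arrangement (the concrete functions below are SHA-256 placeholders chosen only for illustration, not a secure or recommended design):

import hashlib

def f_out(state):
    # Output function: derive the visible output x_i from the state s_i.
    return hashlib.sha256(b"out" + state).digest()

def f_ns(state):
    # Next-state function: derive the next state s_{i+1} from s_i.
    return hashlib.sha256(b"next" + state).digest()

def prng(seed, n):
    state, outputs = seed, []
    for _ in range(n):
        outputs.append(f_out(state))   # x_i = f_out(s_i)
        state = f_ns(state)            # s_{i+1} = f_ns(s_i)
    return outputs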
The primary security property for a CSPRNG is that an adversary with visibility of the
outputs xi cannot infer the internal state. The adversary infers the state by inverting
fout , thus computing si from xi and earlier outputs xi−n . So, it is necessary for the output
function fout to be hard to invert.
Another property desirable for a CSPRNG is backtracking resistance. If the state si
is revealed, we know the adversary can predict future outputs by repeatedly applying
fns to si . We want it to be hard for the adversary to compute earlier outputs from the
state. Therefore, the state update function fns should be hard to invert, so that si−1
cannot be computed from the inverse of fns. That is, computing

    si−1 = fns^−1(si)

is computationally hard.
Another property for a CSPRNG is forward prediction resistance, whereby even
with knowledge of some internal state si , future states cannot be computed. This is
not a property of all CSPRNGs and, for example, is an optional property in SP800-90A.
The means to achieve forward prediction resistance is to inject fresh entropic data into
each state update. So the next state function now has two parameters, the previous
state si and fresh entropy Ei:

    si+1 = fns(si, Ei)
Thus, future state values are both a function of the previous state and future
entropy data that has not been generated yet. In real-world RNGs, this is typically
achieved by running the entropy source continuously, taking the new bits that are
available from the entropy source at the time of the update and stirring them into
the current or next state of the update function, using xor, or a cryptographic mixing
function.
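A hedged sketch of such an entropy-stirring state update, using SHA-256 as the cryptographic mixing function (the function name and choice of hash are illustrative assumptions, not a specific product's design):

import hashlib

def next_state(state, fresh_entropy):
    # f_ns(s_i, E_i): stir freshly gathered entropy bits into the state so that
    # future states depend on entropy data that did not exist at time i.
    return hashlib.sha256(state + fresh_entropy).digest()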
Note that while it is easy to create PRNGs that have these desirable properties, by
simply using known good cryptographic functions with the noninvertibility properties
required, there are many examples where they do not have such properties, either
through poor design, or through deliberately hidden methods to invert the update and
output functions or leak the state. The now classic example of a deliberately flawed
RNG is the SP800-90 Dual-EC-DRBG, which we look at in Section 4.1.
CSPRNGs have an additional function to reseed the state of the PRNG with fresh
entropy. Typically this is always performed at the start of the PRNG being run and may
be performed periodically, or on request at later points in time.
This is a quick look at the SP800-90A CTR DRBG. We look at this and other CSPRNG
algorithms in more detail in Chapter 4. Details of the SP800-90A CTR DRBG are in
Section 4.1.3.
State: The state of the algorithm consists of three variables, V (vector) a 128 bit
number, K (Key) a key, one of 128, 192, or 256 bits (depending on the key size of the AES
function used), and finally C (count), the count of the number of generated outputs
since the last reseed.
Generate Function: The output function is called generate(). The function increments
V and C, and computes the output using the AES algorithm invoked with the key input K
and the vector input V.
generate():
    V = V + 1
    C = C + 1
    x = AES(K, V)
    output x
Update Function: The next state function is called update(). This computes a new
value for K and V so that backtracking resistance is achieved. The key K that was used
in updating to the new K is lost, so inverting the procedure would require searching
all the possible values for K.
update():
    K' = K xor AES(K, V + 1)
    V' = V xor AES(K, V + 2)
    V = V + 2
    K = K'
    V = V'
The above example assumes that the key size is the same as the block size of the
AES. If the key size was 256 bits, then the CTR algorithm would be extended in order
to get enough bits to update the key. In the listing below, the CTR algorithm is run for
three invocations of AES to get 256 bits of CTR output for a 256 bit key update and a
further 128 bits for the vector update.
update(K, V):
    K_lower' = K_lower xor AES(K, V + 1)
    K_upper' = K_upper xor AES(K, V + 2)
    V' = V xor AES(K, V + 3)
    V = V + 3
    K = K_upper' | K_lower'
    V = V'
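The following Python sketch mirrors the simplified 128-bit-key pseudocode above, using AES-ECB from the pycryptodome package as the block cipher. It is illustrative only: it omits the derivation function, personalization strings, reseeding, and the reseed-count limits that a compliant SP800-90A implementation requires.

from Crypto.Cipher import AES  # pycryptodome, assumed available

MASK128 = (1 << 128) - 1

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class CtrDrbg128Sketch:
    def __init__(self, seed_key, seed_v):
        self.K = seed_key                       # 16-byte AES-128 key
        self.V = int.from_bytes(seed_v, "big")  # 128-bit counter
        self.C = 0                              # outputs since the last reseed

    def _aes(self, counter):
        return AES.new(self.K, AES.MODE_ECB).encrypt(counter.to_bytes(16, "big"))

    def generate(self):
        # x = AES(K, V), after incrementing V and the output count C.
        self.V = (self.V + 1) & MASK128
        self.C += 1
        return self._aes(self.V)

    def update(self):
        # New K and V come from AES in counter mode under the old key; the old
        # key is then discarded, which gives backtracking resistance.
        new_k = _xor(self.K, self._aes((self.V + 1) & MASK128))
        new_v = _xor(self.V.to_bytes(16, "big"), self._aes((self.V + 2) & MASK128))
        self.K = new_k
        self.V = int.from_bytes(new_v, "big")

In the full DRBG, update() runs immediately after each generate request, so in the 128 bit key case a request costs three AES invocations: one for the output block, one for the key update, and one for the vector update.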
There are other details in the full specification that have been omitted here, such
as the personalization strings, initialization, additional entropy input, and handling
of multiple key sizes.
It is from these two algorithms that the term CTR DRBG is derived. Given K and V,
the output value and the values to XOR into K and V in the update are drawn from the
output of AES in CTR (CounTeR) mode. CTR mode involves taking an IV (initialization
vector) and incrementing it with a counter. For each increment, the value IV + n is en-
crypted with the key K. This yields a series of output values that are indistinguishable
from random. This is shown in Figure 1.4.
In the CTR DRBG, V is used as the IV (Initialization Vector) and the outputs from the
CTR mode operation are used to provide the data output and the update values for K
and V. This is shown for the 128 bit key case in Figure 1.5, which requires three invo-
cations of AES in the CTR algorithm, one each for output data value, the key update,
and the vector update.
Chapter 4 goes into detail on the design of a number of cryptographically secure RNGs.
Assuming there is no low effort algorithm to compute the internal state from the output
values, the way to predict the past or future outputs of a secure PRNG is to try guessing
all the possible internal states and compare the output from those states against the
observed output. When you find a match, you have the internal state.
If you had a PRNG with 16 bits of internal state and you had three output values A,
B, and C, all you need to do to determine future values D, E, and F is to try running the
algorithm for three steps starting from each of the 2^16 (that is 65536) possible internal
states. If the output matches A, B, and C, then you have the internal state and you
can keep going to predict future outputs. Obviously this is a trivial task for a modern
computer.
The following program implements a weak PRNG using AES-CTR mode. It is weak
because the key is only 16 bits in length. It generates 6 output values. Then the attack
algorithm executes a small loop which searches through all the 2^16 possible keys to find
a key that decrypts the first three random values to three values that increment by 1.
Those decrypted three values are the sequence of vector values V, V +1 and V +2. Once
it has found the key, it goes on to generate the next three outputs, which match the
final three values from the weak RNG, showing that it has managed to predict future
values, by inverting the output function to get K and V from the output data.
def byteify(n):
    # Convert a 128-bit integer into a 16-byte string, least significant byte first.
    bytelist = list()
    for j in xrange(16):
        bytelist.append((n >> (8 * j)) & 0xff)
    return bytes(bytearray(bytelist))

def printbytes(h, b):
    # Print the label h followed by the 16 bytes of b in hex, most significant byte first.
    st = h
    for i in xrange(16):
        st = st + "%02x" % ord(b[15 - i])
    print st
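The listing above shows only two helper routines from that program. A hedged sketch of the key-search loop itself, assuming the 16 unknown key bits occupy the low-order bytes of an otherwise all-zero AES key and using the pycryptodome package, might look like this:

from Crypto.Cipher import AES  # pycryptodome, assumed available

def recover_weak_key(outputs):
    # Brute-force the 2^16 possible keys of the weak PRNG described above.
    # outputs is a list of six 16-byte values produced by the weak generator;
    # the function returns the key, the recovered V and the next three outputs.
    for guess in range(1 << 16):
        key = guess.to_bytes(16, "big")     # 14 zero bytes + 16 unknown bits
        aes = AES.new(key, AES.MODE_ECB)
        v = [int.from_bytes(aes.decrypt(o), "big") for o in outputs[:3]]
        # The right key decrypts the outputs to consecutive counter values.
        if v[1] == v[0] + 1 and v[2] == v[1] + 1:
            predicted = [aes.encrypt(((v[2] + i) & ((1 << 128) - 1)).to_bytes(16, "big"))
                         for i in (1, 2, 3)]
            return key, v[2], predicted
    return None

Comparing the predicted values against the final three outputs of the weak generator confirms that the key and vector, and therefore all future outputs, have been recovered.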
So, a necessary feature of a secure PRNG is that it has enough internal state
bits that it is not computationally feasible to try every possible value. Typically a se-
cure PRNG would have 256 bits or more internal state. In the CTR-DRBG,
the key size determines the security level. With a key size of 128 bits, it would
take 10 782 897 524 556 318 080 696 079 years for a computer to search the key
space at a rate of 1 million keys per second. With a 256 bit key it would take
366 922 989 192 195 209 469 576 219 385 149 402 531 466 222 607 677 909 725 256 622
years.
4. SINBIAS (Sinusoidal Bias), a model which varies the bias of generated data sinu-
soidally.
In this structure, the entropy source outputs unprocessed entropic data to point A,
which is sent into the entropy extractor from point A.
The data at point A is sometimes named raw data, because it has not yet been
post processed by the extractor or PRNG. Thus, the data at the output C is sometimes
referred to as cooked data.
It is important that the input requirements of the extractor are met by the output
properties of the entropy source at point A. For example, an AES-CBC-MAC extractor
will typically require the input data to have a min entropy of > 0.5 bits per bit. A mul-
tiple input extractor would require each of the inputs to be independent of each other
and also have some level of min-entropy. The quality of the data at point A generally is
described in terms of min-entropy and basic statistics such as mean, serial correlation,
and stationarity.
This raw data is also fed to the online health test, which is typically a statistical
test to ensure the entropy source is correctly functioning. This may run full time or on
demand.
The results of the online testing would typically be made available at the RNG’s main
interface via point E, but also it can (and should) be used to inform the extractor when
the quality of the data is meeting its input requirements via point D.
The output of the extractor will generally be close to full entropy, if the input require-
ments of the extractor are met, or in the case of single input extractors, it is possible
to show only computational prediction bounds rather than min-entropy when the
input is only guaranteed to have a certain level of min-entropy [7]. It has been shown
mathematically that getting to 100% full entropy is impossible, but it is possible to
get arbitrarily close to full entropy. The gap between the actual min-entropy and full
entropy in extractor theory is referred to as ε (epsilon).
The data at point B might constitute the output of the RNG, or it might provide the
seed input to the PRNG to initialize or update the PRNG’s state to be nondeterministic.
The PRNG typically takes in a seed from point B either initially or periodically to in-
ject nondeterminism into its state. It will then proceed to generate outputs. Each step
employs an output function to generate an output from the current state and also gen-
erate a next state from the current state.
The data at the PRNG output stage generally cannot be treated as having an
amount of min-entropy. Instead, it has a certain computational prediction bound.
This is because by observing past values from the output, there is always a brute
force algorithm that can search through all the possible values of the internal state
to find the state that would have generated the previous values. A secure PRNG
algorithm ensures that the amount of computation required is too high to be practically
implemented.
The current progress towards quantum computers makes it relevant to also con-
sider the computational prediction bounds for a quantum computer. Some RNGs,
such as number-theoretic RNGs like Blum Blum Shub, are completely insecure against
a quantum computer. Conventional cryptographic RNGs based on block ciphers or
hashes or HMACs, tend to have their security strength reduced to the square root of
the security against a classical computer. So, the industry is in a process of doubling
the key sizes of RNGs to prepare for future quantum computer threats.
So 512 bits of input data are compressed down to 128 bits of extract data.
If the extractor maintained a pool of 128 bits, it could use AES-CBC-MAC to mix
in the input raw entropy into the pool by including the pool in the AES-CBC-MAC cal-
culation. This is simply achieved by including the previous output in the input to the
CBC MAC algorithm:
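A hedged Python sketch of this pool update, assuming the pycryptodome package and using illustrative names, is shown below; the MAC of the previous pool value concatenated with the new raw samples becomes the new pool value.

from Crypto.Cipher import AES  # pycryptodome, assumed available

def cbc_mac(key, data):
    # AES-CBC-MAC with a zero IV: the MAC is the final ciphertext block.
    assert len(data) % 16 == 0
    return AES.new(key, AES.MODE_CBC, iv=b"\x00" * 16).encrypt(data)[-16:]

def update_pool(key, pool, raw_entropy):
    # Fold new raw samples into the 128-bit pool by running the previous pool
    # value through the CBC-MAC along with the incoming raw entropy blocks.
    return cbc_mac(key, pool + raw_entropy)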
So, if there were periods of low entropy from the entropy source, the pool would
maintain the complexity of the output of the extractor. Also, the extractor could keep
running while the raw entropy data is flowing in, until the PRNG needed a seed, so
the extractor ratio would be as large as possible for each reseeding.
A third enhancement available with a pool structure is that when the OHT tags
a set of input bits as not meeting the test requirement, they can still be mixed into
the pool so that any residual entropy is not lost. The AES-CBC-MAC algorithm can be
continued until a required number of healthy input samples are mixed into the pool.
This forms an active response to attacks on the entropy source that try to reduce the
entropy of the source. As the number of unhealthy samples increases, the amount of
samples mixed into the pool increases.
The AES-CBC-MAC can be broken down into multiple single AES operations. The
pseudocode in Listing 1.5 shows this pool method in terms of an individual AES oper-
ation executed once per loop.
This will continue to extend the CBC-MAC until enough healthy samples are re-
ceived and will also mix in the previous pool value. The extensibility of CBC-MAC is
considered a weakness when used as a MAC, but in this context as an extractor, it is a
strength.
The Intel DRNG exhibits this behavior. The reseed of the AES-CTR-DRBG requires
256 bits, so it performs two parallel CBC-MACs, each generating 128 bits of seed data
to make 256 bits.
Similar mechanisms could be implemented with hashes or HMAC, both of which
are suitable extractor algorithms.
ing in from the entropy source. This approach was discussed and shown to be safe
in [21].
The approach taken in the TLS protocol, with the HKDF extractor, is to use a key
of 0 when a random key is not available.
This same rationale could be used with other seeded constructs. But first, the
mathematical proof of the properties of the extractor should be consulted to check
that such a seed will suffice.
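As an illustration of that HKDF approach, the extract step defined in RFC 5869 is just an HMAC, with a string of zero bytes standing in for the key when none is available. A sketch using the Python standard library:

import hashlib
import hmac

def hkdf_extract(salt, ikm, hashfn=hashlib.sha256):
    # RFC 5869 HKDF-Extract: PRK = HMAC-Hash(salt, IKM). With no salt (key)
    # available, a string of zero bytes the length of the hash output is used.
    if not salt:
        salt = b"\x00" * hashfn().digest_size
    return hmac.new(salt, ikm, hashfn).digest()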
Multiple input extractors tend to use basic arithmetic or algebraic methods rather than
rely on cryptographic primitives such as block ciphers or hashes. This leads to such
extractors being small and efficient to implement compared to cryptographic extrac-
tors. However, the cost is that multiple independent entropy sources are needed. So,
the savings in implementation costs of the extractor need to be factored against the
additional implementation costs of the entropy sources.
2 Entropy Sources
Entropy sources are physical systems that take noise from the environment and turn
it into random bits. Usually these are electrical circuits. There are some examples of
partly mechanical entropy sources, but these are not generally useful, except as a cu-
riosity, because they tend to be very slow and unreliable. Therefore, we will focus on
electronic entropy sources in this chapter.
Ideal binary random numbers are IID (independent and identically distributed)
and have a bias of 0.5; so, 1 is as likely as 0.
However, entropy sources can never achieve this property directly. The physical
world from which noise is sampled only yields data that has some level of correlation,
bias, and nonstationarity. Entropy extractor algorithms are, therefore, used to convert
the partially entropic output of an entropy source into data that closely approximates
full entropy IID data.
In this chapter, we look at entropy sources. In the next chapter, we look at entropy
extractors, which take partially entropic data from an entropy source and convert it to
almost fully entropic data.
The inverter, represented by a triangle with a circle on the output, takes a binary value,
0 or 1, and outputs the opposite value. With 0 on the input, it will output 1. With 1 on
the input, it will output 0.
https://doi.org/10.1515/9781501506062-002
In Figure 2.2, the binary value alternates from 0 to 1 to 0 as it crosses the inverters.
However, since there is an odd number of inverters, there must be a discontinuity point
where there is an input value that equals the output value. The output of that inverter
gate, therefore, changes to the opposite value. The discontinuity, as a result, moves
on to the next inverter. This discontinuity carries on cycling around the circuit. If you
look at the waveform at any particular point, then you will see that the value oscillates
up and down, changing once for each time the discontinuity makes a trip around the
circuit.
The upper trace in Figure 2.3 shows what you would see if you observed the voltage of
a point in the ring oscillator circuit changing over time, using an oscilloscope.
The lower trace shows what you would get if you were to take multiple traces and
overlap them. Aligning the left-hand edge, you would see the uncertainty in the loop
time, since the timing of the traces will vary as a result of noise randomly affecting
the time it takes for a signal to pass through a gate. The second edge shows a small
amount of variation. The size of this variation usually follows a normal distribution.
We will call the standard deviation of this loop-time uncertainty σt. The diagram is an
exaggerated view; it typically
takes thousands of clock periods for a full cycle of timing uncertainty to accumulate.
The uncertainty in the next edge is the sum of the previous uncertainty and the
same uncertainty in the current cycle. As time progresses, the uncertainty of the edge
timing increases until the uncertainty is larger than a single loop period.
The variance of two normal random variates added together is the sum of the two
variances of the two variates. So, if sampling at a period tsample is equal to N average
cycles, then
    σtsample² = ∑(n=1 to N) σt² ,

    σtsample = √( ∑(n=1 to N) σt² ) .
So, with a ring oscillator circuit, measure the timing uncertainty. Then find which
value of N will lead to σtsample being several times greater than the loop time t. You can
set the sample period tsample = Nt, and you can expect the value of a point in the circuit
sampled every tsample seconds will appear random.
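As a quick numeric sketch of this sizing calculation (the loop period and jitter values below are made up for illustration):

import math

t = 1e-9           # average loop period, 1 ns (illustrative)
sigma_t = 10e-12   # per-loop timing jitter, 10 ps (illustrative)
target = 4 * t     # want accumulated jitter to be several loop periods

# Accumulated jitter after N loops has standard deviation sigma_t * sqrt(N).
N = math.ceil((target / sigma_t) ** 2)
t_sample = N * t
print(N, t_sample)  # 160000 loops, so sample roughly every 160 microseconds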
Ring oscillator entropy sources have been popular mostly because the circuit appears
very easy to implement. However, they have been found to have some issues. The out-
put of a ring oscillator is serially correlated. As the sample period tsample increases, the
serial correlation reduces, but it never goes away completely. This has been a problem
when the von Neumann debiaser or the Yuval Peres debiaser is used as the entropy
extractor. It is a requirement of those two debiaser algorithms that the input data samples
be independent of each other. However, samples from serially correlated data are
not independent and so may lead to a lower output entropy than is expected from a
von Neumann or Yuval Peres debiaser. The von Neumann and Yuval Peres debiaser
algorithms are described in Sections 3.4 and 3.5.
A second problem is that ring oscillators have been shown to be vulnerable to
injection attack, where a periodic signal is injected via the power supply or via an
electromagnetic coupling device. This can lead to the ring oscillator locking to the
frequency of the injected signal, and so, the loop time becomes predictable and the
entropy is lost.
For example, at the CHES 2009 Conference, the paper [12] was presented show-
ing a frequency injection attack on a chip and pin payment card, where they ren-
dered the random data predictable and were therefore able to cryptographically attack
the authentication protocol. The paper is available at http://www.iacr.org/archive/
ches2009/57470316/57470316.pdf.
There are some common design mistakes with ring oscillator entropy sources.
A number of ring oscillator implementations implement multiple ring oscillators and
combine the outputs by XORing them together on the assumption that the sample out-
puts from the rings are independent. See Figure 2.4.
The problem with this is that it makes it more susceptible to injection attacks, be-
cause with multiple oscillators there tends to be multiple modes in which the oscilla-
tors will lock with each other. When sequences that are locked to each other in phase
or frequency are XORed together, they cancel and make a low entropy repeating se-
quence. The more oscillators, the more the number of opportunities for an injection
attack to work. Thus, if you are implementing a ring oscillator entropy source, then
the best number of loops to implement is 1. However, if you need higher performance,
multiple loops can be used. An appropriate way to
combine the output of multiple loops is to independently feed them into an entropy
extractor that is tolerant of correlation between the input bits, as shown in Figure 2.5.
In a secure RNG, you would also need an online health test per loop. It would make
sense to have tests that both check the loops have not failed and test that the loops
are not correlated with each other, so they can detect an injection attack or loop lock-
ing failure. Chapter 9 goes into greater detail on online health testing algorithms in
general and correlation detection algorithms in particular.
Two properties of the ring sizes will help prevent locking between the rings.
First, make the size of the rings different enough that one loop cannot simply shift
its frequency a little bit to match the other.
Second, ensure the LCM (lowest common multiple) of the two loop frequencies is
large compared to the loop periods, so they will not easily find a common multiple of
their base frequencies at which they will both oscillate. Choosing frequencies that are
relatively prime might hypothetically have this property, since the LCM of two different
primes p and q is p×q. However, the frequencies are not bound to integer relationships,
so some extensive testing should be employed to ensure that the circuit cannot be
coaxed to lock its loops together using injection attacks.
A control signal causes the rings to switch between being several independent
rings and being one large ring.
Here, we will take a look at what happens with multiple discontinuities in a
ring oscillator in order to understand how this entropy source behaves. To simplify the
diagrams, we will shrink the inverter gates to arrows as shown in Figure 2.6. A filled
arrow represents an inverter gate that is outputting logic zero, and a hollow arrow represents
an inverter gate that is outputting logic one. The oscillator loop is shown as a chain of
these inverter gates, and the number of inverter gates is indicated in the middle. With
an odd number of inverter gates, there will be a travelling discontinuity as discussed
in Section 2.1. This is shown as the dot on the loop where two gates with the same
output value meet.
It is not possible to have two discontinuities in a ring with an odd number of gates. Try
stringing an odd number of gates together in alternating sequence with two disconti-
nuities; you will see that one of the discontinuities will cancel out.
However, you can have an odd number of discontinuities. Figure 2.7 shows a ring
of 17 gates with three discontinuities and a ring of 19 gates with five.
When two discontinuities meet they will cancel each other out. So, the odd num-
ber of discontinuities will be maintained. Over time, the discontinuities in a ring os-
cillator with multiple discontinuities will collide and cancel out until there is only a
single discontinuity left. You can consider the motion of the discontinuities as follow-
ing a random walk relative to a discontinuity travelling at exactly the average loop
frequency. Each passage through a gate will be a little slower or a little faster, depend-
ing on the noise in the system. Adding these together over time amounts to a random
walk. Since they are performing a one-dimensional random walk around a loop instead of
along an infinitely long line, once the combined distance covered by the random walks of two
discontinuities adds up to the loop size, they will collide. Thus, multiple
discontinuities on the same loop will very quickly walk into each other and cancel.
The design of the entropy source uses an odd number of ring oscillators so that
when connected in a large loop there are still an odd number of gates in the large loop.
Multiplexors configure the loops either as individual loops or as one large loop.
In Figure 2.8, a configuration of three ring oscillators connected via multiplexors
is shown.
With the control signal high, the loops operate independently, with the output of the
loop fed back into the input. With the control signal low, the loops are joined into
one big loop with 49 elements, as in Figure 2.9. The three discontinuities that were
present in the three oscillators are now on the main ring and these will quickly cancel
to a single discontinuity. It is possible, at the points in the ring where the multiplexors
are, for the adjoining rings to be in phase where they attach. So, this can also introduce two extra
discontinuities in the large ring when the multiplexors are switched from the three
ring mode to the large ring mode.
While oscillating as independent rings, the phase of the rings should be to some
extent independent of each other. On the switch to the single large ring, those multi-
ple discontinuities are all present on the ring. This state of having multiple disconti-
nuities in the loop is a metastable state. There is a vanishingly small probability that
the discontinuities travel around the loop at exactly the same speed and never col-
lide. In reality, over a short period of time, noise will vary the timing of the travel of
the discontinuities and the multiple phases will collapse together into one phase. The
transition from multiple phases to a single phase is metastable, and noise drives the
collapse to a stable state with a single discontinuity, making the timing of the collapse
and the resulting phase of the slow oscillation nondeterministic.
In Figure 2.10, we see the fast oscillation measured at the output of the 13 gate loop
when the control signal is high, and we see it switch to a low frequency oscillation of
the large loop when the control signal is low. The state of X is sampled by the output
flip-flop on the rising edge of the control signal, which comes at the end of the slow
oscillation period, by which time the metastable state of the loop that started with at
least three discontinuities has had time to resolve to a state with one discontinuity.
So, the circuit operates with a low frequency square wave on the control signal,
switching the circuit between the two modes. The phase of the circuit at a chosen point
is captured a short time after the switch to the large ring mode. This is the random bit
and so the circuit generates 1 bit for each cycle of the control signal.
An attraction of ring oscillator RNGs is that they can be built from basic logic com-
ponents and are, therefore, easily portable between different silicon processes. The at-
tractions of metastable entropy sources include their robustness against attack, the ease
with which they can be mathematically modeled, their high measured min-entropy, and their high
rate of entropy generation.
The Metastable Phase Collapse Ring Oscillator Entropy Source combines the ben-
efits of the ring oscillator with some of the benefits of a metastable source, albeit with-
out the high speed of single latch metastable sources.
Figure 2.13: Signal Trace of Metastable Ring Oscillator With Single Inverter Feedback.
or 0, driven by noise; this ultimately resolves to a single loop with a single travelling
discontinuity. The resulting timing diagram is as shown in Figure 2.13.
There is a problem with connecting an inverter directly back on itself in a fast
silicon CMOS process, as in Figure 2.12. When the output voltage is in the middle of
the range, there is what is called a crowbar current passing from the power supply
input to the 0 V connection. The P transistors connected to the power supply and the
N transistors connected to the 0 V line will both be partially switched on. In a normal
configuration, either one or the other transistor is switched off, so current does not
flow from the power supply to the 0 V line. This crowbar current will consume excess
power and will limit the device lifetime. Therefore, it is commonly forbidden in silicon
design rules to connect an inverter in that manner. However, there are more complex
circuit topologies, where additional transistors limit the crowbar current. Therefore,
such a circuit would need some custom circuit design, rather than using standard logic
inverter gates.
– Very reliable (e. g., zero known failures in over 400 million instances in Intel chips
delivered).
– Robust against a number of attack strategies.
– Relatively easy to port between silicon processes.
These are the reasons that we chose to use this sort of entropy source in mass produc-
tion after it was first developed.
The basic idea is that if you build two inverters into a stable back to back config-
uration, as in Figure 2.14, the two nodes, A and B, will resolve either to a 0,1 stable
state or a 1,0 stable state when powered on. The choice of 0,1 or 1,0 is driven by noise,
because the two gates are identical. Once the noise kicks the node voltages a little
bit in one direction, the logic gates will operate as open loop amplifiers and drive the
voltages to the state toward which the noise pushed it.
In order to cycle the generator quickly, switches, implemented with transistors, can
be put on nodes A and B or in the power supplies of the gates, to force the gates to one
of the two unstable states, 0,0 or 1,1.
If you build such a circuit, you will find that it tends to always output the same
binary value. This is because when you build a pair of gates like this in silicon, one
usually turns out to be stronger than the other. So, when resolving from a 1,1 or 0,0
state to a 0,1 or 1,0 state, one gate always wins and, therefore, overcomes the ability of
noise to drive the resolution.
This can be modeled by dropping a ball on a hill. See Figures 2.15 and 2.16.
The ball will either fall to the left or the right and the resulting output, 0 or 1, will
depend on which way it rolls.
The hill represents the transfer function of the metastable circuit. The ball can
land on it at the top (equivalent to the 1,1 state) and stay there indefinitely. But the
probability of it remaining on the top reduces exponentially with increasing time, and
very quickly noise will drive the ball to fall one way or the other.
The position of the hill is fixed and it represents the relative strengths of the logic
inverters. If one gate is stronger the hill will be further to the left. If the other is stronger,
the hill will be further to the right. Over a large population of devices, the position of
the hill will take on a Gaussian distribution, but in any single device the position will
remain in the same spot.
Random documents with unrelated
content Scribd suggests to you:
The town itself is—I feel assured—the kind of town that Jack
reached when he climbed to the top of the Beanstalk, for the
entrance to Roquebrune is precisely the sort of entrance one would
expect a beanstalk to lead to. In one kitchen full of brown shadows,
in a side street near the Rue Pié, is an ancient cupboard in which,
almost without question, Old Mother Hubbard kept that hypothetical
bone which caused the poor dog such unnecessary distress of mind;
while in a wicker cage in the window of a child’s bedroom was the
Blue Bird, singing as only that bird can sing.
As there are still wolves in the woods about Roquebrune and as
red hoods are still fashionable in the Place des Frères it is practically
certain that Little Red Riding Hood lived here since it is difficult to
imagine a town that would have suited her better. As for Jack the
Giant Killer it is beyond dispute that he came to Roquebrune, for the
very castle he approached is still standing, the very gate is there
from which he hurled defiance to the giant as well as the very stair
he ascended. Moreover there is a room or hall in the castle—or at
least the remains of it—which obviously no one but a giant could
have occupied.
As time goes on archæologists will certainly prove, after due
research, that Roquebrune is the City of Peter Pan. There is no town
he would love so well; none so adapted to his particular tastes and
habits, nor so convenient for the display of those domestic virtues
which Wendy possessed. No one should grow up in this queer city,
just as no place in a nursery tale should grow old.
ROQUEBRUNE: THE EAST GATE.
T
HE position of Roquebrune high up on the hillside appears—as
has already been stated—to be precarious. It seems as if the
little city were sliding down towards the sea and would,
indeed, make that descent if it were not for an inconsiderable ledge
that stands in its way. It can scarcely be a matter of surprise,
therefore, that there is a legend to the effect that Roquebrune once
stood much higher up the hill, that the side of the mountain broke
away, laying bare the cliff and carrying the town down with it to its
present site, where the opportune ledge stayed its further
movement.
Like other legendary landslips this convulsion of nature is said to
have taken place at night and to have been conducted with such
delicacy and precision that the inhabitants were unaware of the
“move.” They were not even awakened from sleep: no stool was
overturned: no door swung open: the mug of wine left overnight by
the drowsy reveller stood unspilled on the table: no neurotic dog
burst into barking, nor did a cock crow, as is the custom of that bird
when untoward events are in progress. Next morning the early riser,
strolling into the street with a yawn, found that his native town had
made quite a journey downhill towards the sea and had merely left
behind it a wide scar in the earth which would make a most
convenient site for a garden. Unhappily landslips are no longer
carried out with this considerate decorum, so the gratitude of
Roquebrune should endure for ever.
This is one legend; but there is another which is a little more
stirring and which has besides a certain botanical interest. At a
period which would be more clearly defined as “once upon a time”
the folk of Roquebrune were startled by a sudden horrible rumbling
in the ground beneath their feet, followed by a fearful and sickly
tremor which spread through the astonished town.
Everybody, clad or unclad, young or old, rushed into the street
screaming, “An earthquake!” It was an earthquake; because every
house in the place was trembling like a man with ague, but it was
more than an earthquake for the awful fact became evident that
Roquebrune was beginning to glide towards the sea.
People tore down the streets to the open square, to the Place
des Frères, which stands on the seaward edge of the town. The
stampede was hideous, for the street was unsteady and uneven. The
very road—the hard, cobbled road—was thrown into moving waves,
such as pass along a shaken strip of carpet. To walk was impossible.
Some fell headlong down the street; others crawled down on all
fours or slid down in the sitting position; but the majority rolled
down, either one by one or in clumps, all clinging together.
The noise was fearful. It was a din made up of the cracking of
splintered rock, the falling of chimneys, the rattle of windows and
doors, the banging to and fro of loose furniture, the crashing of the
church bells, mingled with the shouts of men, the prayers of women
and the screams of children. A man thrown downstairs and clinging
to the heaving floor could hear beneath him the grinding of the
foundations of his house against the rock as the building slid on.
The houses rocked from side to side like a labouring ship. As a
street heeled over one way the crockery and pots and pans would
pour out of the doors like water and rattle down the streets with the
slithering knot of prostrate people.
Clouds of dust filled the air, together with fumes of sulphur from
the riven cliff. Worst of all was an avalanche of boulders which
dropped upon the town like bombs in an air raid.
The people who clung to the crumbling parapet of the Place des
Frères saw most; for they were in a position which would correspond
to the front seat of a vehicle. They could feel and see the town
(castle, church and all) skidding downhill like some awful machine,
out of control and with every shrieking and howling brake jammed
on.
They could see the precipice ahead over which they must soon
tumble. Probably they did not notice that at the very edge of the
cliff, standing quite alone, was a little bush of broom covered with
yellow flowers.
The town slid on; but when the foremost wall reached the bush
the bush did not budge. It might have been a boss of brass. It
stopped the town as a stone may stop a wagon. The avalanche of
rocks ceased and, in a moment, all was peace.
The inhabitants disentangled themselves, stood up, looked for
their hats, dusted their clothes and walked back, with unwonted
steadiness, to their respective homes, grumbling, no doubt, at the
carelessness of the Town Council.
They showed some lack of gratitude for I notice that a bush of
broom has no place on the coat of arms of Roquebrune.
XXXIV
SOME MEMORIES OF ROQUEBRUNE
R
OQUEBRUNE is very old. It can claim a lineage so ancient that
the first stirrings of human life among the rocks on which it
stands would appear to the historian as a mere speck in the
dark hollow of the unknown. Roquebrune has been a town since
men left caves and forests and began to live in dwellings made by
hands. It can boast that for long years it was—with Monaco and Eze
—one of the three chief sea towns along this range of coast. Its
history differs in detail only from the history of any old settlement
within sight of the northern waters of the Mediterranean.
The Pageant of Roquebrune unfolds itself to the imagination as a
picturesque march of men with a broken hillside as a background
and a stone stair as a processional way. Foremost in the column that
moves across the stage would come the vague figure of the native
searching for something to eat; then the shrewd Phœnician would
pass searching for something to barter and then the staid soldierly
Roman seeking for whatever would advance the glory of his imperial
city. They all in turn had lived in Roquebrune.
[Illustration: ROQUEBRUNE, SHOWING THE CASTLE.]
Anyhow, whatever the reason, the count and his men, all in good
spirits, appeared before the walls of the town and prepared for an
assault. Now the state of affairs was as follows. Roquebrune, owing
to its position, could not withstand a siege. Its fall was inevitable
and merely a question of time. The governor would, however, be
compelled to defend the town to the very last. He would man the
walls and barricade the gates and, calling his company together in
the Place des Frères, would remind them of their duty, would tell
them, with uplifted sword, that Roquebrune must be defended so
long as a wall remained; that the enemy must not enter the town
except over their dead bodies and that, in the defence of their
homes, they must be prepared to die like heroes.
Now things seemed rather different to the governor’s wife. She
was a shrewd and practical woman not given to heroics. She knew
that Roquebrune could not withstand a siege and must assuredly be
taken. She probably heard the stirring address in the square and did
not at all like her husband’s talk about dying to a man and about
people walking over dead bodies and especially over his body. She
knew that the more determined the resistance the more terrible
would be the revenge when the town was taken. She did not like
people being killed, especially her nice people of Roquebrune.
Besides, as she paced to and fro, a couple of children were tugging
at her dress and asking her why she would not take them out on the
hill-side to play as she did every morning.
So when the night came she put a cloak over her head, made her
way out of the town, found the enemy’s camp and told the count
how—by certain arrangements she had made—he could enter the
town without the loss of a man.
Before the day dawned the bewildered inhabitants, who had
been up all night fussing and hiding away their things, found that
the Ventimiglians were in occupation of the town; for, as the
historian says, “the besiegers entered the town without striking a
blow.”
Thus ended the siege of Roquebrune. It ended in a way that was
probably satisfactory to both parties and, indeed, to everyone but
the governor who had, without question, a great deal to say to his
lady on the subject of minding her own business.
As she patted the head of her smallest child and glanced at the
breakfast table she, no doubt, replied that she had minded her own
business.
THE hills that overshadow the coast road between Cap d’Ail and
Roquebrune are perhaps as diligently traversed by the winter
visitor as any along the Riviera, because in this area level roads
are rare and those who would walk far afield must of necessity climb
up hill.
The hill-side is of interest on account of the number of pre-
historic walled camps which are to be found on its slopes. These
camps form a series of strongholds which extends from Cap d’Ail to
Roquebrune. There are some seven of these forts within this range.
The one furthest to the west is Le Castellar de la Brasca in the St.
Laurent valley on the Nice side of Cap d’Ail. Then come L’Abeglio just
above the Cap d’Ail church, the Bautucan on the site of the old
signal station above the Mid-Corniche, the Castellaretto over the
Boulevard de l’Observatoire, Le Cros near the mule-path to La Turbie
and lastly Mont des Mules and Le Ricard near Roquebrune.
Of these the camp most easily viewed—but by no means the
most easy to visit—is that of the Mont des Mules, on the way up to
La Turbie. This is a bare hill of rough rocks upon the eastern
eminence of which is a camp surrounded by a very massive wall
built up of huge unchiselled stones. It is fitly called a “camp of the
giants,” for no weaklings ever handled such masses of rock as these.
The Romans who first penetrated into the country must have viewed
these military works with amazement, for competent writers affirm
that they date from about 2,000 years before the birth of Christ.
Along this hill-side also are traces of the old Roman road,
fragments which have been but little disturbed and which, perhaps,
are still paved with the very stones over which have marched the
legions from the Imperial City. To the east of La Turbie and just
below La Grande Corniche are two Roman milestones, side by side,
in excellent preservation. There are two, because they have been
placed in position by two different surveyors.
They stand by the ancient way and show clearly enough the
mileage—603. The next milestone (604) stood on the Aurelian Way
just outside La Turbie, at the point where the road is crossed by the
railway, but only the base of it remains. Between it and the previous
milestone is a Roman wayside fountain under a rounded arch. It is
still used as a water supply by the cottagers and the conduit that
leads to it can be traced for some distance up the hill.
The first Roman milestone to the west of La Turbie (No. 605) is
on the side of the Roman road as it turns down towards Laghet.[49]
This milestone is the finest in the district and is remarkably well
preserved. Those who comment on the closeness of these milliaires
must remember that the Roman mile was 142 yards shorter than the
English.
At the foot of Mont Justicier, near to the gallows and by the side
of the actual Roman road, is the little chapel of St. Roch. It is a very
ancient chapel and its years weigh heavily upon it, for it has nearly
come to the end of its days. It is built of rough stones beneath a
coating of plaster and has a cove roof covered with red tiles. The
base of the altar still stands, traces of frescoes can be seen on the
walls and on one side of the altar is an ambry or small, square wall-
press. It was in this sorrowful little chapel that criminals about to be
executed made confession and received the last offices of the
Church.
A sadder place than this in which to die could hardly be realised.
The land around is so harsh, the hill so heartless, the spot so lonely.
And yet many troubled souls have here bid farewell to life and have
started hence on their flight into the unknown. Before the eyes of
the dying men would stretch the everlasting sea. On the West—
where the day comes to an end—the world is shut out by the vast
bastion of the Tête de Chien; but on the East, as far as the eye can
reach, all is open and welcoming and full of pity. It is to the East that
the closing eyes would turn, to the East where the dawn would
break and where would glow, in kindly tints of rose and gold, the
promise of another day.
There is one lonely tree on this Hill of Death—a shivering pine;
while, as if to show the kindliness of little things, some daisies and a
bush of wild thyme have taken up their place at the foot of the
gallows.
[49] The ancient road lies above and to the west of the
modern road to the convent.
[50] “Old Provence,” by T. A. Cook, Vol. 2, p. 169.
[51] “Les Alpes Maritimes,” 1902.
[Illustration: THE CHAPEL OF ST. ROCH.]
XXXVI
MENTONE
MENTONE is a popular and quite modern resort on the Riviera
much frequented by the English on account of its admirable
climate. Placed on the edge of the Italian frontier it is the
last Mediterranean town in France. It lies between the sea and a
semicircle of green hills upon a wide flat which is traversed by four
rough torrents. It is, on the whole, a pleasant looking place although
it is not so brilliant in colour as the posters in railway stations would
make it. It is seen at its best from a distance, for then its many dull
streets, its prosaic boulevards and its tramlines are hidden by bright
villas and luxuriant gardens, by ruddy roofs and comfortable trees.
Standing up in its midst is the old town which gives to it a faint
suggestion of some antiquity.
This old town, together with the port, divides Mentone into two
parts—the West and the East Bays. The inhabitants also are divided
into two sections—the Westbayers and the Eastbayers, and these
two can never agree as to which side of the town is the more
agreeable. They have fought over this question ever since houses
have appeared in the two disputed districts and they are fighting on
the matter still. The Westbayer wonders that the residents on the
East can find any delight in living, while the Eastbayer is surprised
that his acquaintance in the other bay is still unnumbered with the
dead. I had formed the opinion that the Western Bay was the more
pleasant and the more healthy but Augustus Hare crushes me to the
ground for he writes, “English doctors—seldom acquainted with the
place—are apt to recommend the Western Bay as more bracing; but
it is exposed to mistral and dust, and its shabby suburbs have none
of the beauty of the Eastern Bay.” So I stand corrected, but hold to
my opinion still.
Hare is a little hard on Mentone by reason of its being so
painfully modern. “Up to 1860,” he says, “it was a picturesque
fishing town, with a few scattered villas let to strangers in the
neighbouring olive groves, and all its surroundings were most
beautiful and attractive; now much of its two lovely bays is filled
with hideous and stuccoed villas in the worst taste. The curious old
walls are destroyed, and pretentious paved promenades have taken
the place of the beautiful walks under tamarisk groves by the sea-
shore. Artistically, Mentone is vulgarised and ruined, but its dry,
sunny climate is delicious, its flowers exquisite and its excursions—
for good walkers—are inexhaustible and full of interest.”[52]
There can be few who will not admit that the modern town of
Mentone is commonplace and rather characterless, but, at the same
time, it must be insisted that a large proportion of the Mentone villas
are—from every point of view—charming and free from the charge
of being vulgar.
Some indeed, with their glorious gardens, are serenely beautiful.
With one observation by Mr. Hare every visitor will agree—that in
which he speaks of the country with which Mentone is surrounded.
It is magnificent and so full of interest and variety that it can claim, I
think, to have no parallel in any part of the French Riviera.
[Illustration: MENTONE: THE OLD TOWN.]