David Johnston

Random Number Generators—Principles and Practices

A Guide for Engineers and Programmers

ISBN 978-1-5015-1513-2
e-ISBN (PDF) 978-1-5015-0606-2
e-ISBN (EPUB) 978-1-5015-0626-0

Library of Congress Control Number: 2018949266

Bibliographic information published by the Deutsche Nationalbibliothek


The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie;
detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2018 Walter de Gruyter GmbH, Berlin/Boston


Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck

www.degruyter.com
This book is dedicated to the memory of George Cox, without whom my work in ran-
dom number generators would never have started and who was a tenacious engineer-
ing partner.

Thank you to my wife, Tina, for putting up with me disappearing every weekend for
two years to write this book. To Charles Dike, Rachael Parker, James Shen, Ammon
Christiansen and Jesse Walker, Jian Zhong Wang, Kok Ching Eng, Beng Koi Lim, Ping
Juin Tan and all the other engineers who worked with me to make higher performance
random number generators a reality in Intel products; to Nichole Schimanski
for answering many of my dumb mathematics questions; to the many academics who
inspired me and provided important insights and findings, including Yevgeniy Dodis,
Ingrid Verbauwhede, Vladimir Rožić, Bart Massey, Hugo Krawczyk, Boaz Barak, Rus-
sell Impagliazzo, and Avi Wigderson, thanks.
About De|G PRESS
Five Stars as a Rule
De|G PRESS, the startup born out of one of the world’s most venerable publishers,
De Gruyter, promises to bring you an unbiased, valuable, and meticulously edited
work on important topics in the fields of business, information technology, comput-
ing, engineering, and mathematics. By selecting the finest authors to present, without
bias, information necessary for their chosen topic for professionals, in the depth you
would hope for, we wish to satisfy your needs and earn our five-star ranking.
In keeping with these principles, the books you read from De|G PRESS will be
practical, efficient and, if we have done our job right, yield many returns on their price.
We invite businesses to order our books in bulk in print or electronic form as a
best solution to meeting the learning needs of your organization, or parts of your or-
ganization, in a most cost-effective manner.
There is no better way to learn about a subject in depth than from a book that is
efficient, clear, well organized, and information rich. A great book can provide life-
changing knowledge. We hope that with De|G PRESS books you will find that to be the
case.

https://doi.org/10.1515/9781501506062-201
Contents
About De|G PRESS | VII

Preface | XVII

1 Introduction | 1
1.1 Classes of Random Number Generators | 3
1.2 Naming RNGs | 5
1.3 Disambiguating RNG Types | 6
1.4 Nonuniform RNGs | 7
1.5 Noncryptographically Secure PRNG Algorithms | 8
1.6 Cryptographically Secure PRNG Algorithms | 9
1.6.1 Example of CSPRNG: The SP800-90A CTR DRBG | 11
1.6.2 Attacking CSPRNGs | 12
1.7 Controlled Defect RNGs | 14
1.8 Noise Source Circuits | 15
1.9 TRNG Extractor/Conditioning Algorithms | 16
1.10 The Structure of Secure RNG Implementations | 17
1.10.1 Point A, the Raw Data | 17
1.10.2 Points D and E, the Health Status Feedback | 18
1.10.3 Point B, the Seed Quality Data | 18
1.10.4 Point C, the PRNG Output | 18
1.11 Pool Extractor Structures | 19
1.12 What Key to Use? | 20
1.13 Multiple Input Extractors | 21

2 Entropy Sources | 23
2.1 Ring Oscillator Entropy Sources | 23
2.1.1 Ring Oscillator Entropy Source Problems | 25
2.2 Metastable Phase Collapse Ring Oscillator Entropy Sources | 26
2.3 Fast Digital TRNG Based on Metastable Ring Oscillator | 30
2.4 Feedback Stabilized Metastable Latch Entropy Source | 31
2.5 Infinite Noise Multiplier Entropy Source | 35
2.6 Diode Breakdown Noise Entropy Source | 39

3 Entropy Extraction | 41
3.1 The Simplest Extractor, the XOR Gate | 42
3.2 A Simple Way of Improving the Distribution of Random Numbers that
Have Known Missing Values Using XOR | 46
3.2.1 Is This Efficient? | 51
3.2.2 Why This Might Matter: Two Real-World Examples with Very
Different Results | 51
3.3 Debiasing Algorithms | 57
3.4 Von Neumann Debiaser Algorithm | 57
3.5 Yuval Peres Debiaser Algorithm | 63
3.6 Blum’s Method Debiaser Algorithm | 66
3.7 Cryptographic Entropy Extractors | 72
3.8 Pinkas Proof, or Why We Cannot Have Nice Things | 72
3.9 Seeded Entropy Extractors | 73
3.9.1 CBC-MAC Entropy Extractor | 74
3.9.2 CMAC Entropy Extractor | 77
3.9.3 HMAC Entropy Extractor | 79
3.10 Multiple Input Entropy Extractors | 81
3.10.1 Barak, Impagliazzo, Wigderson 3 Input Extractor | 82
3.10.2 2-EXT 2-Input Extractor | 88

4 Cryptographically Secure Pseudorandom Number Generators | 95


4.1 SP800-90A | 95
4.1.1 SP800-90A Hash DRBG | 96
4.1.2 SP800-90A HMAC DRBG | 100
4.1.3 SP800-90A CTR DRBG | 103
4.1.4 Observations On the CTR DRBG | 108
4.2 ANSI X9-82 | 109
4.3 Salsa20 | 109
4.3.1 Salsa20 Hash | 112
4.4 Cha Cha | 115
4.5 Blum Blum Shub | 117

5 Nondeterministic Random Number Generators | 119


5.1 The XOR Construction NRBG | 120
5.2 The Oversampling Construction NRBG | 120
5.3 Standards-Based NRBG Summary | 121

6 Statistically Uniform Noncryptographic PRNGs | 123


6.1 Linear Congruential Generator | 123
6.2 Multiply with Carry Uniform PRNG | 126
6.3 Xorshift Uniform PRNG | 129
6.4 Permuted Congruential Generator | 132
6.4.1 PCG Naming | 133
6.4.2 64-32-PCG-XSH-RR | 134
6.4.3 64-32-PCG-XSH-RS | 135
6.4.4 128-64-PCG-XSL-RR | 135
6.4.5 The LCG Generator for PCGs | 136


6.4.6 The MCG Generator for PCGs | 136

7 Gaussian or Normally Distributed PRNGs | 137


7.1 Box–Muller Transform Normal Random Variate Algorithm | 137
7.2 The Ziggurat Normal Random Variate Algorithm | 138
7.3 The ZIGNOR Normal Random Variate Algorithm | 142

8 Testing Random Numbers | 145


8.1 Known Answer Tests | 145
8.2 Distinguishability Tests | 147
8.3 PRNG Test Suites | 147
8.3.1 Dieharder | 147
8.3.2 NIST SP800-22 | 148
8.3.3 SEMB GM/T 0005-2012 | 148
8.4 Entropy Measurements | 148
8.5 Shannon Entropy Estimation | 148
8.6 Min Entropy Estimation | 149
8.7 Model Equivalence Testing | 151
8.8 Statistical Prerequisite Testing | 151
8.8.1 Mean | 151
8.8.2 Standard Deviation | 152
8.8.3 The χ² Test of Randomness | 154
8.8.4 Serial Correlation Coefficient | 159
8.8.5 Lag-N Correlation | 162
8.8.6 A Note on Bit Ordering | 165
8.8.7 Efficient Computation of a Correlogram using FFTs | 166
8.9 The Problem Distinguishing Entropy and Pseudorandomness | 169
8.10 Statistical Tests of Uniformity | 170
8.11 Results That are “Too Good” | 170
8.12 Summary | 171

9 Online Random Number Testing | 173


9.1 Tagging or Blocking? | 173
9.2 How Much Data to Test? | 175
9.3 What Should an Online Test Test for? | 175
9.4 The FIPS 140-2 Continuous RNG Test | 176
9.5 The SP800-90B Repetition Count Test | 179
9.6 The SP800-90B Adaptive Proportion Test | 180
9.7 Pattern Counting Health Test | 186
9.8 Online Mean and Serial Correlation Test | 188
9.9 Online Source Independence Test | 195
10 SP800-22 Distinguishability Tests | 199


10.1 First, Do not Assume the Contrapositive | 199
10.2 SP800-22rev1a Monobit Test | 200
10.2.1 Application | 200
10.2.2 Procedure | 200
10.2.3 Monobit Test Python Implementation | 200
10.3 SP800-22rev1a Frequency Test Within a Block | 201
10.3.1 Application | 201
10.3.2 Procedure | 201
10.3.3 Frequency Test Within a Block Python
Implementation | 202
10.4 SP800-22rev1a Discrete Fourier Transform (DFT) Test | 203
10.4.1 Application | 203
10.4.2 Notes on the DFT Algorithm | 204
10.4.3 Procedure | 204
10.4.4 DFT Test Example Code | 207
10.5 SP800-22rev1a Nonoverlapping Template Matching Test | 208
10.5.1 Application | 208
10.5.2 Procedure | 209
10.5.3 Nonoverlapping Template Matching Test Example
Code | 210
10.6 Overlapping Template Matching Test | 211
10.6.1 Application | 213
10.6.2 Procedure | 213
10.6.3 Overlapping Template Matching Test Example Code | 215
10.7 SP800-22rev1a Longest Runs of Ones in a Block Test | 217
10.7.1 Application | 217
10.7.2 Procedure | 217
10.7.3 Longest Runs of Ones in a Block Test Example Code | 218
10.8 SP800-22rev1a Binary Matrix Rank Test | 221
10.8.1 Application | 221
10.8.2 Procedure | 221
10.8.3 SP800-22rev1a Binary Matrix Rank Test Example
Code | 224
10.9 SP800-22rev1a Random Excursion Test | 226
10.9.1 Application | 226
10.9.2 Procedure | 226
10.9.3 Random Excursion Test Example Code | 228
10.10 SP800-22rev1a Random Excursion Variant Test | 230
10.10.1 Application | 231
10.10.2 Procedure | 231
10.10.3 Random Excursion Variant Test Example Code | 231
10.11 SP800-22rev1a Maurer’s Universal Statistical Test | 233


10.11.1 Application | 233
10.11.2 Procedure | 233
10.11.3 Maurer’s Universal Statistical Test Example Code | 236
10.12 SP800-22rev1a Linear Complexity Test | 237
10.12.1 Application | 238
10.12.2 Procedure | 238
10.12.3 Linear Complexity Test Example Code | 239
10.13 SP800-22rev1a Serial Test | 241
10.13.1 Application | 243
10.13.2 Procedure | 243
10.13.3 Serial Test Example Code | 244
10.14 SP800-22rev1a Cumulative Sums Test | 245
10.14.1 Application | 246
10.14.2 Procedure | 246
10.14.3 Cumulative Sums Test Example Code | 246
10.15 SP800-22rev1a Approximate Entropy Test | 248
10.15.1 Application | 248
10.15.2 Procedure | 248
10.15.3 Approximate Entropy Test Example Code | 249

11 Software Tools | 251


11.1 hex2bin | 251
11.2 bin2hex | 252
11.3 cleanhex | 253
11.4 djenrandom | 254
11.4.1 Pure Model | 257
11.4.2 SUMS – Step Update Metastable Source Model | 258
11.4.3 Biased Model | 259
11.4.4 Correlated Model | 259
11.4.5 LCG Model | 260
11.4.6 PCG Model | 261
11.4.7 XorShift Model | 261
11.4.8 Normal Model | 262
11.4.9 File Model | 263
11.5 quickrdrand | 265
11.5.1 Quickrdrand Output Formats | 266
11.5.2 Quickrdrand data output size | 267
11.6 ent | 267
11.6.1 Ent Output Formatting | 267
11.6.2 Ent Symbol Size Options | 268
11.6.3 Ent Occurrence Counts | 268
11.6.4 Ent Statistical Metrics | 269


11.6.5 Uses of ent | 270
11.7 djent | 272
11.7.1 Parseable Filenames | 273
11.7.2 Measuring Lag-N Correlation Coefficient | 274
11.7.3 Changing Symbol Size | 274
11.8 Dieharder | 275
11.8.1 Running Dieharder Against Internal Generators | 277
11.8.2 Running Dieharder Against Stdin | 279
11.8.3 Running Dieharder Against File Input | 280
11.9 NIST STS 2.1.2 | 281
11.10 NIST SP800-90B Entropy Assessment Suite | 284

12 RdRand and RdSeed Instructions in x86 CPUs | 289


12.1 Intel DRNG | 289
12.1.1 RdRand Instruction | 291
12.1.2 RdSeed | 292
12.1.3 AMD Support for RdRand and RdSeed | 293

13 Accessing RNGs from Software | 295


13.1 MacOS and BSD getentropy() | 295
13.2 Linux getrandom() Syscall | 296
13.3 /dev/random and /dev/urandom | 297
13.4 RdRand and RdSeed | 299
13.5 The Python Random Library | 309
13.5.1 Python Cryptographically Secure Random Numbers | 312
13.6 The Windows CryptGenRand() API | 314

14 Floating-Point Random Numbers | 317


14.1 The Floating-Point Number Distribution | 319
14.2 The Fixed-Exponent Method | 321
14.3 Exponent Distribution Compensation | 323
14.4 Dividing by the Largest Random Integer | 325

15 Making a Uniform Random Number Between Nonpower of Two Bounds | 329


15.1 Rejection Method for Nonpower of 2 Bounds | 331
15.2 Large Number, Small Modulus Method for Nonpower
of 2 Bounds | 332
15.3 Floating-Point Method for Nonpower of 2 Bounds | 334

16 Generating Random Prime Numbers | 337


16.1 Safe and Strong Primes | 343
17 Additive Distributions | 345


17.1 Dice | 345
17.2 Unfair Dice | 346
17.3 How the Normal Distribution Forms | 346

18 Probability Distributions | 349


18.1 Names of Probability Distributions | 349
18.2 Properties of Distributions | 351
18.3 Fair Dice and Uniform Random Numbers | 352
18.4 Normal/Gaussian Distributions | 354
18.4.1 Normal PDFs and CDFs in Programming Languages | 357
18.4.2 Gnuplot Normal Distribution Functions | 358
18.4.3 Python Normal Distribution Functions | 359
18.4.4 R Normal Distribution Functions | 361
18.4.5 C Normal Distribution Functions | 362
18.4.6 Excel Normal Distribution Functions | 366
18.5 Random Walks and Degrees of Freedom | 366
18.5.1 Expected Final Positions for Random Walks | 371
18.6 Poisson Point Processes | 372
18.7 The Binomial Distribution | 374
18.7.1 The Binomial PMF | 374
18.7.2 The Binomial Discrete CDF | 378
18.7.3 The Binomial Quantile Function | 379

19 Quantifying Entropy | 383


19.1 Rényi Entropy | 383
19.1.1 Derivation of Shannon Entropy from Rényi Entropy | 384
19.1.2 Derivation of Min Entropy from Rényi Entropy | 386
19.2 Min-Entropy of Biased Binary Sources | 387
19.3 Min-Entropy of Serially Correlated Binary Sources | 387
19.4 Distance From Uniform | 390

20 Random Methods to Generate π | 393


20.1 Random Method for Computing π | 393
20.2 Another Random Method for Computing π | 395

Appendix A Adaptive Proportion Test Cutoff Tables | 397

Appendix B High-Precision Incomplete Beta Function Implementation | 403

Appendix C Incomplete Gamma Function Implementation | 409


Appendix D Software Tool Sources | 415

Appendix E Listing Reference | 417

Bibliography | 421

Index | 423
Preface
Many books and most academic papers on random numbers turn out to be either
highly mathematical and difficult to follow, or the opposite, offering little useful engi-
neering insight. In contrast, this book is aimed at the practicing programmer or hard-
ware engineer. The use of mathematics is kept to a level appropriate for the topic,
with an emphasis on working examples and code that runs and can be used and
experimented with. The reader will benefit from being able to program a computer,
from the sort of mathematics common in undergraduate engineering courses,
and, preferably, from an interest in random numbers.
Random Number Generators have many uses, from modeling stochastic systems,
to randomizing games, to picking lottery winners, to playing board games (using dice),
to randomizing cryptographic protocols for security.
The most critical application is in cryptographic security, where random numbers
are an essential component of every cryptographic application. There can be no cryp-
tographic security without secure, unpredictable random numbers.
While random numbers may appear to be a trivial and simple topic, it turns out
that there are many counterintuitive concepts and many subdisciplines, including
random number testing, random number generation, entropy extraction, public key
generation, and simulation.
Unfortunately, in cryptography, random number generation has proven difficult
to get right and there are many examples of cryptographic systems undermined by
poor quality random number generators.
A number of programs have been written to accompany this book. They are mostly
written in Python 2 and C. Electronic copies of this code are available through Github
(https://github.com/dj-on-github/RNGBook_code). In addition, a number of exter-
nally available software tools are used. Appendix D provides pointers to all the other
software used in this book and a reference to relate listings to their location in the
book.

https://doi.org/10.1515/9781501506062-202
1 Introduction
My first professional encounter with random numbers happened while implementing
the 802.11 WEP (Wired Equivalent Privacy) protocol in a WiFi chip. This required that
random numbers be used in the protocol and the approach I took was to take noisy
data from the wireless receive hardware and pass it through an iterated SHA-1 hash
algorithm. Once a sufficient amount of noisy data had been fed in, the output of the
hash was taken to be a random number.
Given that at the time I was largely ignorant of the theory behind random number
generation in general and entropy extraction in particular, the solution was not ter-
rible, but with hindsight of a decade of working on random number generators there
are some things I would have done differently.
Subsequent to attending the IEEE 802.11i working group to work on the replace-
ment protocol to WEP (one of the security protocol standards famously back-doored by
the NSA and thereby introducing bad cryptography into standards) and later on the
802.16 PKMv2 security protocol, the need for random numbers in security protocols
and the problems specifying and implementing them led to my career being diverted
into a multiyear program to address how to build practical random number genera-
tors for security protocols that can be built in chips, tested, mass manufactured, and
remain reliable and secure while being available to software in many contexts. Ulti-
mately this emerged as the RdRand instruction and later the RdSeed instruction in
Intel CPUs. A decade later, the requirements are still evolving as new needs for ran-
dom number generators that can operate in new contexts emerge.
My initial model for this book was for it to be the book I needed back when first
implementing a hardware random number generator for the 802.11 WEP protocol.
I would have benefited greatly from a book with clear information on the right kinds
of design choices, the right algorithms, the tradeoffs, the testing methods, and enough
of the theory to understand what makes those things the correct engineering choices.
The scope of the book grew during its writing, to cover nonuniform RNGs, a much
expanded look at various types of random number testing, software
interfaces for RNGs, and a mix of software tools and methods that have been
developed along the way.
Since the early 2000s, the requirements for random number generators have
evolved greatly. Examples include the need for security against side channel and fault in-
jection attacks, security against quantum computer attacks, the need for smaller, lower
power random number generators in resource-constrained electronics, and the need
for highly efficient RNGs and floating point hardware RNGs for massively parallel
chips.
Random numbers are used in many other fields outside of cryptography and the
requirements and tradeoffs tend to be different. This book explores various noncryp-
tographic random number applications and related algorithms. We find examples of
random number uses occurring in simulations, lotteries, games, statistics, graphics,
and many other fields.
To give a taste of what will be covered in this book, the following are three exam-
ples of random bits represented in hex format. They all look similarly random, but
the first is statistically uniform and is generated using a very secure, nonpredictable
random number generator suitable for cryptography, while the second is statistically
uniform but completely insecure and predictable and the third is neither statistically
uniform nor cryptographically secure.

Listing 1.1: Output of a Cryptographically Secure RNG


E8E03922F6759144BDF8FD850A9F459D15709084A058C2447AB4AC22B9787B35
E43F8ED014DB8F7BC2877E79E722C5C950BAF7C1ECBD3F4B91116B8D6BB97A6F
D7DEB1BFE3370A92AAC1BB47A07617DD0C6F2061AA149F378D3461EFB70BC5F3
9D6C75E43949102E91915E9DC074AB1CC9E4D89EDEBE84EC7B47A528EA040859
2B4419CBE814C481BF9D277ABEC0D52BF87FBD5C477BCBD8AE40D8E74E904D85
FD56D0321FC55E20FB973616C8CA641B20BDE07B7428DE4565D6728A82589F2F
6D0AD798F0BD2CCD7A222C2B54BD309925E824CA66793681C05743DEF3EF0868
CE121A2265BE29FA4A0D80086859CAD7E6AB1A0D550295B88478E9A7DC1AABFD
441727708B22AEB9D5C58C5D6F4356AD2979062BFE4C25534F8497862DD104F2
6CBD49AF08C52B55E23251598A3E713D7A068BDA374DE51F66F1502030CF28B6
956C2F681EAAACB7EC7F9F33D7CBBD2527F8A623C0344D3CEDA65C6312BB8B79
BA02B15C2A536CE3BDD4A63E2947A2C79C1CFC835077917913881451CD655E50

Listing 1.2: Output of a Uniform, Insecure RNG


B7F4E5C40A7C151D654898AA7E12508D56446BCED37864F5F12B712D8EF87FCB
6C6FFA364601091ED0DEF6895156848BB95F16EA805CB4D8B96BB97C19AD97E0
30898D7F6BAF8B7B11E6CB331C908E6F1958828835471648BDC14A20C4FB7921
8C0A9DFA6B189F45B159E9ACA36941483A2082D154B0DC701BE6778026992ADE
30A296E239D2239EF823159623D5A26F2585B77F491BA7439775B8A72FAC6365
877039B56D96277500CD6C7DBE084B06F61886EDF2C959E6448B88619A68FB12
062FCC04A6C56D7264F4D06FA2CC24D9F4B51A47E7D000C61C16AB8375F90725
ED216F647269F0A25AB4C8F9B36ED9242B77B74DD4362B97B787F65DCB52C159
8FA1BB5002127DD5F2988DC432A2A3EAA774526B866FEBC06635C7F1C244ACEE
565FD626B28245F8730A8C3400E0C9F55741DDC1BC3DFE98A5D7F2A39870CEF1
779CF3F0E9D554CCE630C732AD8CD9959B212EE9B542B8B3B060F15229AEA3B2
6BB346EDD19ED238D8BD83588536AC62D39B87A43F0C370703A301FBE86CADE5

Listing 1.3: Output of a Nonuniform, Insecure RNG


3D231558968E6514D6584DF4559E4C833E770226B83D23676CDDB2D8857C8E80
AC839B5EBE231102126DF551D3986C288065C2772C6614137B380194D51F6E3D
27C8ABBB2E0538285244D4EB2930BA0EB90A034F0F85BC1BED6BE37F52A9D3FD
B0D3F55F40BA9BDA2DD17EB66F54DF0CB35BADDC5EDF1BB638F037ACCC9231E4
FE26C963DC8647F7ECB21C851A184D8EA970F9E4770220DF2EC2C7A8CCD11E5D
3974D7355A55F5031C7CF123F5DB0EB81D3DDE8D359C6455334A1C62B1F964E6
BB583BA383AE86B84240D383A681A0867B4AC49286DC8848D91DCCC4BAC59C18
853C9D150CBEF54152C3C960B3FB0E60DF0A6AA10078843B213C453C90D5E1FC
C53D397A527761411C86BA48B7C1524B0C60AD859B172DB0BAE46B712936F08A
DA7E75F686516070C7AD725ABCE2746E3ADF41C36D7A76CB8DB8DA7ECDD2371F
D6CA8866C5F9632B3EDBCC38E9A40D4AE94437750F2E1151762C4793107F5327
D206D66D8DF11D0E660CB42FE61EC3C90387E57D11568B9834F569046F6CEDD0

The first is generated from a random number generator in an Intel CPU that has
a metastable entropy source, followed by an AES-CBC-MAC entropy extractor and an
SP800-90A and 90C XOR construction NRBG, with an AES-CTR-DRBG. This RNG uses
algorithms that are mathematically proven to be secure in that it is algorithmically
hard to infer the internal state from the output and impossible to predict past and
future values in the output when that state is known.
The second is generated with a PCG (Permuted Congruential Generator) RNG.
PCGs are deterministic RNGs that have excellent statistical properties, but there is no
proof that it is hard to infer the internal state of the algorithm from the output, so it is
not considered secure.
The third is generated with an LCG (Linear Congruential Generator) RNG. LCGs are
simple deterministic RNG algorithms for which there are a number of statistical flaws.
It is trivial to infer the internal state of the algorithm and so to predict future outputs.
So, it is proven to be insecure.
Later chapters on NRBGs, DRBGs, entropy extractors, uniform noncryptographic
random number generators and test methods will explore the differences between
these different types of random number generator and how to test for their proper-
ties.

1.1 Classes of Random Number Generators


Things that make random numbers are generically called Random Number Generators
(RNGs). These fall into two major types: Pseudo-Random Number Generators (PRNGs)
and True Random Number Generators (TRNGs). Unfortunately, TRNG is a term that is
not well defined. It is interpreted in different ways by different people in industry and
academia.
PRNGs are deterministic algorithms that generate a “random looking” sequence
of numbers. However, given the same starting conditions, a PRNG will always give the
same sequence. Hence, the name “Pseudo-Random” Number Generator.
Typically, a PRNG will be implemented as software or digital hardware. As we will
see later, there is a special class of PRNG, the “Cryptographically Secure” PRNG or
CS-PRNG, which while producing a deterministic sequence, provides guarantees on
how hard it is to predict numbers in the sequence, such that the PRNG is usable for
cryptographic purposes.
TRNGs are nondeterministic systems. They inevitably require a hardware com-
ponent to sense “noise” or “entropy” in the environment that can be turned into
nondeterministic numbers. Since a computer algorithm follows a fixed set of instructions,
it is impossible to write a nondeterministic random number generator
algorithm that works in isolation. You must have a physical hardware component to
pass a nondeterministic source of data into the system to form what is often called a
TRNG.
For example, the command line program “djenrandom” is a program to generate
random numbers of various types. The “pure” model only produces random numbers
that should be indistinguishable from random. However, by default, it internally uses
a deterministic PRNG without any source of physical randomness. In the example be-
low, we can see that we invoke the pure mode “-m pure” and pass the output to the
“head” command to see only the first two lines, that is, “djenrandom -m pure | head -2.”
The result is the same every time, as we would expect from a deterministic PRNG al-
gorithm, which always has the same initial state:

> djenrandom -m pure | head -2
BAA0D0E8CB60A3917EA080E11B5E089333C16DAC72DD57AAE470712D5C7D5621
FE06BA76C496828F45BD469E01F50CD45E36C7869D60AAF26EB1E0DED9A02CAA
> djenrandom -m pure | head -2
BAA0D0E8CB60A3917EA080E11B5E089333C16DAC72DD57AAE470712D5C7D5621
FE06BA76C496828F45BD469E01F50CD45E36C7869D60AAF26EB1E0DED9A02CAA
> djenrandom -m pure | head -2
BAA0D0E8CB60A3917EA080E11B5E089333C16DAC72DD57AAE470712D5C7D5621
FE06BA76C496828F45BD469E01F50CD45E36C7869D60AAF26EB1E0DED9A02CAA
>

We can persuade djenrandom to use a nondeterministic random seed (which
it pulls from the Linux /dev/random service) using the “-s” argument (s stands for
“seed”), to make it act as a TRNG. Here, we see that when acting as TRNG, the result
is different every time:

> djenrandom -s -m pure | head -2
B6DC13426156F65791F4AA5358D631AC805ECAE78DDDDD7D9A38A60E87CF64BA
40386087CA176AE0C4AE95F16E163F78FACB0BFAD56669CF4F9EE471241C7F46
> djenrandom -s -m pure | head -2
7616B7E1CB28F268BAB2083659A69D8577DA86538BEDFD9CA9FB21200EF70204
A078441938A921B7E01F09092BCDB392CF4BCC8400F120C12472703BA91FFC25
> djenrandom -s -m pure | head -2
C4284E7C8C042E59D44AD591C978C86DEFECBF32DF426AA7CA1B77FD02F46607
07BD9E153A38C0A733FC9C0F262987E0FA5C3DDFF4204850537B3FF55562627E
>

Physical noise sources are never perfectly random. The common term for per-
fectly random is “IID”, meaning Independent and Identically Distributed. IID data is
allowed to be biased, but there must be no correlation between values in an IID
sequence and the data should also be statistically stationary, meaning the bias or any
other statistic cannot change with time.
There is always some bias, correlation, and nonstationarity even if it is too small to
be detected; so it is common and often necessary for a TRNG to have some form of post
processing to improve the random qualities of the data. We look at this in Chapter 3,
Entropy Extraction.
It is common but not always true, that sources of physical noise into a computer
tend to be slow and PRNGs tend to be fast. So it is common in real systems to feed the
output of a slow TRNG into a fast CS-PRNG to form a hybrid system that is cryptograph-
ically useful, nondeterministic, and fast. It is also appropriate to first pass the entropy
gathered from the noise source into an extractor before the CS-PRNG. The data from
the noise source is partially random and nondeterministic. The data from the entropy
extractor will be slower than from the noise source, but should, if in a correctly de-
signed RNG, be close to perfectly uniform, unpredictable, and nondeterministic. Once
the data from the entropy extractor has seeded a CS-PRNG, the output of the CS-PRNG
should be statistically uniform, cryptographically hard to predict, and faster.
So, as we pass through the chain, the properties of the random numbers can be la-
belled as “Bad”, “Close to perfect and ideal for cryptography” and then “Good enough
for cryptography”, whereas the performance can go from “Slow” to “Slower” to “Fast”.
See Figure 1.1.

Figure 1.1: Properties, as numbers, pass through an RNG.

Subsequent Chapters 2, Entropy Sources, 3, Entropy Extraction, and 4, Cryptographically
Secure Pseudorandom Number Generators will explain why.

1.2 Naming RNGs


There are commonly used names for different types of random number generators and
there are standardized names, which are completely different. See Table 1.1. For exam-
ple, the NIST SP800-90A, B, and C standards define a set of names that are different
from the commonly used ones above. See Table 1.2.
6 | 1 Introduction

Table 1.1: Table of RNG Names.

Type of RNG                                                       Common Name or Acronym
Any RNG                                                           RNG
Any Deterministic RNG                                             PRNG
Any Nondeterministic RNG                                          TRNG
A Noise Sampling Circuit                                          Noise Source
A noise sample circuit with post processing                       TRNG or Entropy Source
A cascade of TRNG or full entropy source followed by a CS-PRNG    TRNG

NIST uses a different set of names for the same concepts:

Table 1.2: Table of NIST Names for RNGs.

SP800-90 Term    Meaning
RBG              Random Bit Generator
DRBG             Deterministic Random Bit Generator
SEI              Source of Entropy Input
ES               Entropy Source
FES              Full Entropy Source

1.3 Disambiguating RNG Types


Given the types and names above we can draw up a taxonomy of RNGs. All RNGs are
RNGs, but can be split into two top level groups of deterministic and non-deterministic
RNGs.
The deterministic RNGs may or may not be cryptographically secure. An example
of an insecure property is lacking prediction resistance, whereby an observer can look
at the output values and infer the internal state and so predict the future outputs.
Another example is lacking backtracking resistance, where an observer can compute
previous values from a sequence by looking at a set of later values. Secure random
number generators will have the prediction resistance and backtracking resistance
properties.
Nondeterministic RNGs may be only a noise source, or a system with a noise
source and algorithmic post processing to produce high quality random numbers.
Depending on the application, the quality of the output may or may not matter.
No naming system, whether just in common use or standardized, addresses all
these cases, and this has led to a lot of confusion about what a random number gener-
ator really is. A common question, “Is that a TRNG (True Random Number Generator)”
might elicit the response from me: “Do you mean a nondeterministically seeded PRNG
with well-defined computational prediction bounds or do you mean a full entropy
source comprising noise source with an extractor?” Put more clearly, this is asking
(a) Is there an entropy source producing unpredictable bits? (b) Is there an entropy
extractor turning those unpredictable bits into fully unpredictable bits, each with a
50% probability of being 1 and each bit being independent of the others? (c) Is there
a secure PRNG algorithm that prevents an observer of the output bits being able to
predict future value from the PRNG or infer past values from the PRNG?
These are the common features of a secure PRNG and so form one possible defi-
nition of a TRNG (True Random Number Generator), while the TRNG term is used very
loosely in practice.
The different RNG types are identified based on the properties of the construction
of the RNG. The essential major components of secure RNGs are the entropy source (or
noise source), the entropy extractor (or conditioner or entropy distiller), and option-
ally a PRNG (or deterministic random bit generator).
Insecure random number generators are common outside the field of cryptogra-
phy for many purposes, including simulation, system modeling, computer graphics,
and many other things. These can be uniform or be designed to follow some nonuni-
form distribution such as a Gaussian distribution. The figures of merit for these types
of generator tend be based on speed performance or closeness to some statistical
model. Figure 1.2 gives a sequence of questions that divides the various common RNG
types.
The NIST SP800-90A, B, and C specifications do not concern themselves with in-
secure RNGs, and so do not specify names for insecure RNGs, although the names
they do specify tend to be different from common use. For example, SP800-90A calls a
PRNG (Pseudo-Random Number Generator) a DRBG (Deterministic Random Bit Generator),
a TRNG (True Random Number Generator) an NRBG (Nondeterministic Random
Bit Generator), and an entropy source an SEI (Source of Entropy Input).
Generally, PRNGs do not have specific terms to separate secure PRNGs from inse-
cure PRNGs. Similarly, there is no commonly used term to distinguish a simple noise
source from a noise source with post processing to increase the per-bit entropy. In
common terminology, both might be called a TRNG, whereas NIST SP800-90B and C
do distinguish between an NRBG (Nondeterministic Random Bit Generator) and an
SEI (Source of Entropy Input). In the NIST arrangement, an SEI is a component of an
NRBG.

1.4 Nonuniform RNGs


There is a class of RNG that is designed to generate random numbers that follow a
distribution other than a uniform distribution, for example, a Gaussian distribution,
binomial distribution, gamma distribution, or one of the many other distributions
defined in mathematics.

Figure 1.2: RNG Type Flowchart.

These typically operate by starting with a uniform distribution and then applying
a transform to generate the required distribution from the uniform distribution.
We look at some nonuniform RNGs in Chapter 7 on Gaussian RNGs.
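
As a small example of the transform approach, the Box–Muller transform (covered in Chapter 7) turns two uniform variates into two normally distributed variates. The sketch below uses Python's standard random module as the uniform source; the function name box_muller is just illustrative.

import math
import random

def box_muller():
    # Two independent uniform variates; 1 - random() keeps the value in (0, 1]
    # so the log() call below is always defined.
    u1 = 1.0 - random.random()
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    z0 = r * math.cos(2.0 * math.pi * u2)
    z1 = r * math.sin(2.0 * math.pi * u2)
    return z0, z1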

1.5 Noncryptographically Secure PRNG Algorithms


This class of RNG generates uniformly distributed random numbers. However, they
are not designed to be unpredictable. It is possible for future and past values to be
predicted from a sequence of values from such an RNG. This makes them unsuitable
for cryptographic uses.
There are many noncryptographic uses of noncryptographically secure PRNGs.
For example, database hashing algorithms are used to create a randomized distribu-
tion that minimize collisions in the database, where different bits of data land in the
same place. Simulation programs often need random stimulus to drive their model.
But they require repeatability, so each run has a seed number and the seed number
can be used to cause the PRNG to produce the same output and so the same simulation
run. Nonsecure PRNG algorithms tend to be simpler and faster than cryptographically
secure PRNGs, since they do not make use of the extensive cycles of linear and non-
linear transformations needed in a cryptographically secure algorithm. Efficiency is
often a goal of the design of such algorithms.
Some examples of commonly used insecure PRNG algorithms are:
1. Linear Congruential Generators (LCGs): A simple family of generators that computes
   X_{n+1} = (a*X_n + c) mod m (a minimal sketch follows this list).
2. XorShift: A simple generator with good statistical properties based on XORs and
   shifts of its state variables.
3. Permuted Congruential Generators (PCGs): A family of generators with excellent
   statistical properties, based on introducing a XorShift-based permutation output
   stage to the LCG loop.
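
As a minimal sketch of item 1, the following LCG implements the recurrence above. The constants (a = 1664525, c = 1013904223, m = 2^32) are the commonly cited Numerical Recipes parameters, used here purely as an illustration and not as a recommendation.

class LCG(object):
    def __init__(self, seed):
        # X_{n+1} = (a*X_n + c) mod m
        self.m = 2 ** 32
        self.a = 1664525
        self.c = 1013904223
        self.x = seed % self.m

    def next_value(self):
        self.x = (self.a * self.x + self.c) % self.m
        return self.x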

A number of such algorithms are looked at in Chapter 6 on noncryptographic PRNGs.

1.6 Cryptographically Secure PRNG Algorithms


Cryptographically secure PRNGs, while behaving deterministically, are designed so
that an observer seeing a subset of the output values cannot predict any other past,
present, or future values from the output of the PRNG.
Examples of commonly used secure PRNGs are:
1. SP800-90A DRBGs: This includes the CTR-DRBG, the Hash-DRBG, and the HMAC-
DRBG. The Dual-EC-DRBG is a famous example of a back-doored RNG that was
removed from the SP800-90A specification after the back door properties were
made widely known.
2. ChaCha and its predecessor Salsa20, using shifts, additions, and xors on its
state variables; used in OpenSSL and in the OpenBSD and NetBSD Random Num-
ber service.
3. BlumBlumShub: A number-theoretic RNG with well-proven security properties,
but very inefficient compared to the other PRNGs in this list.

Generally, we think of a PRNG having an internal “state”, that is, a number of bits. The
state of those bits determines the next output value, and each time a value is output,
the internal state is changed so that the next number appears random with respect to
all previous and future values.
The mechanism for determining the output from the state and the mechanism for
updating the state can be distilled into a next-state function f_ns(s_i) and a state output
function f_out(s_i). These are arranged as shown in Figure 1.3.
Figure 1.3: CSPRNG State Diagram.

The primary security property for a CSPRNG is that an adversary with visibility of the
outputs x_i cannot infer the internal state. The adversary infers the state by inverting
f_out, thus computing s_i from x_i and earlier outputs x_{i−n}. So, it is necessary for the output
function f_out to be hard to invert.
Another property desirable for a CSPRNG is backtracking resistance. If the state s_i
is revealed, we know the adversary can predict future outputs by repeatedly applying
f_ns to s_i. We want it to be hard for the adversary to compute earlier outputs from the
state. Therefore, the state update function f_ns should be hard to invert, so that s_{i−1}
cannot be computed from the inverse of f_ns. That is, computing

    s_{i−1} = f_ns^{−1}(s_i)

is computationally hard.
Another property for a CSPRNG is forward prediction resistance, whereby even
with knowledge of some internal state s_i, future states cannot be computed. This is
not a property of all CSPRNGs and, for example, is an optional property in SP800-90A.
The means to achieve forward prediction resistance is to inject fresh entropic data into
each state update. So the next state function now has two parameters, the previous
state s_i and fresh entropy E_i:

    s_{i+1} = f_nspr(s_i, E_i)

Thus, future state values are both a function of the previous state and future
entropy data that has not been generated yet. In real-world RNGs, this is typically
achieved by running the entropy source continuously, taking the new bits that are
available from the entropy source at the time of the update and stirring them into
the current or next state of the update function, using xor, or a cryptographic mixing
function.
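
To make this structure concrete, here is a minimal sketch (illustrative only, not a vetted design) that uses SHA-256 as a stand-in for the hard-to-invert output and next-state functions, with an optional fresh-entropy argument showing the f_nspr form used for forward prediction resistance.

import hashlib

class SketchPRNG(object):
    def __init__(self, seed_bytes):
        self.state = hashlib.sha256(b"init" + seed_bytes).digest()

    def _next_state(self, fresh_entropy=b""):
        # f_ns, or f_nspr when fresh entropy is stirred into the update
        self.state = hashlib.sha256(b"update" + self.state + fresh_entropy).digest()

    def output(self, fresh_entropy=b""):
        # f_out: derive an output from the current state, then advance the state
        x = hashlib.sha256(b"out" + self.state).digest()
        self._next_state(fresh_entropy)
        return x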
Note that while it is easy to create PRNGs that have these desirable properties, by
simply using known good cryptographic functions with the noninvertibility properties
required, there are many examples where they do not have such properties, either
through poor design, or through deliberately hidden methods to invert the update and
output functions or leak the state. The now classic example of a deliberately flawed
RNG is the SP800-90 Dual-EC-DRBG, which we look at in Section 4.1.
CSPRNGs have an additional function to reseed the state of the PRNG with fresh
entropy. Typically this is always performed at the start of the PRNG being run and may
be performed periodically, or on request at later points in time.

1.6.1 Example of CSPRNG: The SP800-90A CTR DRBG

This is a quick look at the SP800-90A CTR DRBG. We look at this and other CSPRNG
algorithms in more detail in Chapter 4. Details of the SP800-90A CTR DRBG are in
Section 4.1.3.
State: The state of the algorithm consists of three variables, V (vector) a 128 bit
number, K (Key) a key, one of 128, 192, or 256 bits (depending on the key size of the AES
function used), and finally C (count), the count of the number of generated outputs
since the last reseed.
Generate Function: The output function is called generate(). The function increments
V and C, and computes the output using the AES algorithm invoked with the key input K
and vector input V.
generate():
    V = V + 1
    C = C + 1
    x = AES(K, V)
    output x

Update Function: The next state function is called update(). This computes a new
value for K and V so that backtracking resistance is achieved. The key K that was used
in updating to the new K is lost, so inverting the procedure would require searching
all the possible values for K.
update():
    K' = K xor AES(K, V+1)
    V' = V xor AES(K, V+2)
    V = V + 2
    K = K'
    V = V'

The above example assumes that the key size is the same as the block size of the
AES. If the key size was 256 bits, then the CTR algorithm would be extended in order
to get enough bits to update the key. In the listing below, the CTR algorithm is run for
three invocations of AES to get 256 bits of CTR output for a 256 bit key update and a
further 128 bits for the vector update.
update(K, V):
    K_lower' = K_lower xor AES(K, V+1)
    K_upper' = K_upper xor AES(K, V+2)
    V' = V xor AES(K, V+3)
    V = V + 3
    K = K_upper' | K_lower'
    V = V'

There are other details in the full specification that have been omitted here, such
as the personalization strings, initialization, additional entropy input, and handling
of multiple key sizes.
It is from these two algorithms that the term CTR DRBG is derived. Given K and V,
the output value and the values to XOR into K and V in the update are drawn from the
output of AES in CTR (CounTeR) mode. CTR mode involves taking an IV (initialization
vector) and incrementing it with a counter. For each increment, the value IV + n is en-
crypted with the key K. This yields a series of output values that are indistinguishable
from random. This is shown in Figure 1.4.

Figure 1.4: CTR Mode.

In the CTR DRBG, V is used as the IV (Initialization Vector) and the outputs from the
CTR mode operation are used to provide the data output and the update values for K
and V. This is shown for the 128 bit key case in Figure 1.5, which requires three invo-
cations of AES in the CTR algorithm, one each for the output data value, the key update,
and the vector update.

Figure 1.5: CTR DRBG Relationship to CTR Mode.
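
Putting the generate() and update() pseudocode together, a small runnable rendering of the 128 bit key case might look like the following. It assumes Python 3 and the pycryptodome package, the names are illustrative, and reseeding, personalization strings, and the other SP800-90A details mentioned above are omitted, so it is a sketch of the structure rather than a compliant DRBG.

from Crypto.Cipher import AES

MASK128 = (1 << 128) - 1

def aes128(key, v):
    # One raw AES invocation: encrypt the 128 bit counter value v under key.
    return AES.new(key, AES.MODE_ECB).encrypt(v.to_bytes(16, "big"))

class TinyCtrDrbg(object):
    def __init__(self, seed):
        # The 32 byte seed supplies 16 bytes for K and 16 bytes for V.
        self.K = seed[0:16]
        self.V = int.from_bytes(seed[16:32], "big")
        self.C = 0

    def generate(self):
        # x = AES(K, V+1), as in the generate() pseudocode.
        self.V = (self.V + 1) & MASK128
        self.C += 1
        return aes128(self.K, self.V)

    def update(self):
        # K' = K xor AES(K, V+1); V' = V xor AES(K, V+2). The old key is
        # discarded, which is what provides backtracking resistance.
        new_K = bytes(a ^ b for a, b in zip(self.K, aes128(self.K, (self.V + 1) & MASK128)))
        new_V = self.V ^ int.from_bytes(aes128(self.K, (self.V + 2) & MASK128), "big")
        self.K, self.V = new_K, new_V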

Chapter 4 goes into detail on the design of a number of cryptographically secure RNGs.

1.6.2 Attacking CSPRNGs

Assuming there is no low effort algorithm to compute the internal state from the output
values, the way to predict the past or future outputs of a secure PRNG is to try guessing
all the possible internal states and compare the output from those states against the
observed output. When you find a match, you have the internal state.
If you had a PRNG with 16 bits of internal state and you had three output values A,
B, and C, all you need to do to determine future values D, E, and F is to try running the
algorithm for three steps starting from each of the 2^16 (that is, 65536) possible internal
states. If the output matches A, B, and C, then you have the internal state and you
can keep going to predict future outputs. Obviously this is a trivial task for a modern
computer.
The following program implements a weak PRNG using AES-CTR mode. It is weak
because the key is only 16 bits in length. It generates 6 output values. Then the attack
algorithm executes a small loop which searches through all the 2^16 possible keys to find
a key that decrypts the first three random values to three values that increment by 1.
Those decrypted three values are the sequence of vector values V, V +1 and V +2. Once
it has found the key, it goes on to generate the next three outputs, which match the
final three values from the weak RNG, showing that it has managed to predict future
values, by inverting the output function to get K and V from the output data.

Listing 1.4: Weak PRNG Example


#!/usr/bin/env python

# A weak RNG with only 16 bit keys

from Crypto.Cipher import AES

key = 0x2984  # 16 bit key.
V = 0x0123456789abcdef0123456789abcdef  # 128 bit vector

def byteify(n):
    bytelist = list()
    for j in xrange(16):
        bytelist.append((n >> (8*j)) & 0xff)
    return (bytes(bytearray(bytelist)))

def debyteify(bytelist):
    bi = 0
    for i in xrange(16):
        bi = (bi << 8) + ord(bytelist[15-i])
        # print "debyteify %x" % ord(bytelist[15-i])
    return (bi)

def printbytes(h, b):
    st = h
    for i in xrange(16):
        st = st + "%02x" % ord(b[15-i])
    print st

cipher = AES.new(byteify(key), AES.MODE_ECB)

outputs = list()
for i in xrange(6):
    outputs.append(cipher.encrypt(byteify(V)))
    V += 1

# Now outputs[] contains 6 randomish values

for i in xrange(len(outputs)):
    print "Output %d : %032x" % (i, debyteify(outputs[i]))

# Now search for the key using first three numbers.

for i in xrange(65536):
    trialkey = byteify(i)
    cipher = AES.new(trialkey, AES.MODE_ECB)
    try1 = debyteify(cipher.decrypt(outputs[0]))
    try2 = debyteify(cipher.decrypt(outputs[1]))
    try3 = debyteify(cipher.decrypt(outputs[2]))

    if (try3 == (try2 + 1)) and (try2 == (try1 + 1)):
        print "Key %04x works" % i
        break

# Now predict the next 3 values
predict1 = cipher.encrypt(byteify(try3 + 1))
predict2 = cipher.encrypt(byteify(try3 + 2))
predict3 = cipher.encrypt(byteify(try3 + 3))

print "Prediction for outputs 3-5:"

print "%032x" % debyteify(predict1)
print "%032x" % debyteify(predict2)
print "%032x" % debyteify(predict3)

So, a necessary feature of a secure PRNG is that it has enough internal state
bits that it is not computationally feasible to try every possible value. Typically a se-
cure PRNG would have 256 bits or more internal state. In the CTR-DRBG,
the key size determines the security level. With a key size of 128 bits, it would
take 10 782 897 524 556 318 080 696 079 years for a computer to search the key
space at a rate of 1 million keys per second. With a 256 bit key it would take
366 922 989 192 195 209 469 576 219 385 149 402 531 466 222 607 677 909 725 256 622
years.

1.7 Controlled Defect RNGs


In order to test extractor algorithms or calibrate entropy estimation algorithms, it is
necessary to create random numbers with known deviations from a uniform distribu-
tion such as bias or serial correlation.
The tool, djenrandom, made available with this book is an example of a program
that implements a set of controlled defect RNGs. The four defect models supported are
1. Bias, where the probability of a bit being 1 can be controlled.
2. Correlation, where the serial correlation of the bitstream can be controlled.
3. SUMS (Step Update Metastable Source), a model which closely models a meta-
stable entropy source with feedback and can produce data with bias, serial corre-
lation, and nonstationarity all at the same time.
4. SINBIAS (Sinusoidal Bias), a model which varies the bias of generated data sinu-
soidally.

Links to the software tools, including djenrandom, are given in Appendix D.
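
To illustrate the first two defect models in the list above, the sketch below generates a biased bit stream with a controllable probability of a 1, and a serially correlated stream modeled as a simple two-state Markov chain in which each bit repeats the previous bit with probability p_repeat. These are illustrative models only, not djenrandom's exact algorithms.

import random

def biased_bits(n, bias):
    # P(bit = 1) = bias
    return [1 if random.random() < bias else 0 for _ in range(n)]

def correlated_bits(n, p_repeat):
    bits = [random.randint(0, 1)]
    for _ in range(n - 1):
        if random.random() < p_repeat:
            bits.append(bits[-1])       # repeat the previous bit
        else:
            bits.append(1 - bits[-1])   # flip it
    return bits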


Chapter 11, Software Tools looks at using djenrandom for defective data generation
in greater detail, along with looking at a number of other software tools.

1.8 Noise Source Circuits


In order that an RNG is nondeterministic, it needs some source of nondeterministic
data. This requires hardware that is capable of taking nondeterministic events from
the environment and turning them into bits. This type of circuit is commonly called
a noise source or entropy source. Noise sources are circuits rather than algorithms.
However, noise sources should be combined with a deterministic post processing al-
gorithm to improve the entropy quality. These algorithms are called entropy extractors
or conditioners. This book treats the noise source as only the noise sampling circuit,
and treats the extractor/conditioner algorithms separately.
Examples of noise source circuits include:
1. RO (Ring Oscillator)/Phase error accumulation sources. This is a widely used but
often easily attacked class of noise source. A ring oscillator is a loop of inverters
which self oscillates. When sampled periodically, the accumulated phase noise
of the loop results in random bits being sampled. It is common for RO circuits to
be implemented with multiple ROs, each with mutually prime loop lengths as a
mechanism to prevent them from running in phase with each other, although this
is rarely successful.
2. Metastable Phase Collapse Ring Oscillators. This is a class of entropy source that
entails running a number of independent ROs. These are then joined together in
one big loop. The multiple independent phases of the circuit at the point of circuit
switch will collapse to a single large loop oscillation. The collapse is metastable,
driven by noise and the resulting phase error is random with an exponential dis-
tribution resulting from the metastable resolution time. This circuit was docu-
mented in a paper from Samsung [24].
3. Feedback Stabilized Metastable Latch. This is a practical form of entropy source in
silicon chips that repeatedly forces a latch into metastability and lets it resolve to
a 1 or 0. Noise is supposed to drive the resolution to 1 or 0 and so the resulting bits
have entropy. Typical latches have bias that would lead to the same value each
time, so a stabilization loop is needed to load one side or the other of the latch
to keep it operating in a balanced metastable mode. This kind of entropy source
is seen in Intel CPUs. A conference paper by Rachael Parker describes the circuit
and derives a closed form equation for the min entropy of metastable latch based
noise sources [16].
4. Analog Modular Multiplier Loop/Infinite Noise Multiplier. This involves iteratively
amplifying a noisy source (such as from a diode or resistor) until it exceeds a par-
ticular voltage, whereupon the voltage is divided in half. Each time the circuit is
iterated, a single bit is output, based on whether the voltage is above or below
50% of the module threshold. This circuit is seen in some plug-in USB RNG de-
vices.
5. Reverse Biased Zener. This involves amplifying and sampling the noise from a re-
verse biased diode. This kind of circuit is often used in board level RNG circuits.
It is notoriously hard to ensure that these circuits work well when manufactured
in volume, but single circuits are easy to tweak to make them work.
6. Demod Error Vector. This method is used in some chips with radio demodulators.
These typically supply an error vector register that measures the distance of each
received symbol from the ideal symbol and is a function of the noise in the sender,
the receiver, and the intervening radio path. While the radio is receiving data, the
error vector from the demodulator tends to have high entropy. This kind of noise
source has been implemented in Bluetooth and 802.11 chips, where no alternate
entropy source was provided.

Chapter 2 examines entropy sources in greater detail.

1.9 TRNG Extractor/Conditioning Algorithms


An entropy extractor is an algorithm that takes in data that is typically partially en-
tropic and outputs data that is either more entropic than the input, or is close to fully
entropic. It is impossible to create a deterministic algorithm that outputs more entropy
than is input into the algorithm. Since we want the entropy per bit at the output to be
more entropic than the entropy per bit at the input, the number of bits at the output
of an extractor has to be smaller than the number of bits consumed at the input. So,
entropy_in ≥ entropy_out and len(entropy_in) > len(entropy_out).
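
As a one-line illustration of this trade, the simplest extractor covered in Chapter 3, the XOR gate, can be sketched as follows: XORing non-overlapping pairs of input bits halves the output length and, for independent input bits, reduces the bias, but it can never increase the total entropy.

def xor_pairs(bits):
    # Consume two input bits per output bit.
    return [bits[i] ^ bits[i + 1] for i in range(0, len(bits) - 1, 2)]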
The theory of entropy extractors tends to be a very mathematical area of computer
science and that is a fine topic for a different book. In this book we look primarily at
the practical implementation aspects of extractors.
Examples of entropy extractors include:
1. AES-CBC-MAC. An extractor algorithm in the current draft of SP800-90B. It is used
in Intel CPUs as the entropy extractor that conditions the seeds for the CS-PRNG
from which the RdRand instruction gets its random numbers.
2. Von Neumann and Yuval Peres whiteners. These are debiaser algorithms that,
when fed independent random input values, guarantee unbiased outputs (a small
sketch of the Von Neumann debiaser follows this list). Unfortunately, these algorithms
are widely misused in various chips, by being fed from a
serially correlated source, so the input bits are not independent.
3. BIW (Barak, Impagliazzo, Wigderson) Extractor. An example of a multiple input
extractor. These extractors tolerate inputs whose bits are not independent. However,
the separate inputs must be independent from each other. The BIW extractor takes
3 independent inputs. It is noted for its efficient implementation and is suitable
for use in resource-constrained environments.
4. 2-EXT Extractor. This is another multiple input extractor taking two independent
inputs. It is notable since it has been shown to be secure against quantum computer
attacks.
5. Many XOR Extractor. This is an ad-hoc extractor that entails simply XORing to-
gether the output of multiple separate entropy sources. This structure has been
seen in a number of mobile phone chips. As an extractor, it has poor properties
and therefore needs the noise sources to be of high quality and highly indepen-
dent from each other.
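As an illustration of the simplest of these, the following is a minimal sketch of the Von Neumann debiaser in Python; the function name and the list-of-bits representation are choices made for this example rather than anything mandated by the algorithm.

def von_neumann_debias(bits):
    # Von Neumann debiaser: consume the input in non-overlapping pairs.
    # A (0,1) pair emits 0, a (1,0) pair emits 1, and (0,0) or (1,1) pairs
    # are discarded. The output is unbiased only if the input bits are
    # independent, which is exactly the requirement noted above.
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Example: a biased but independent stream still yields unbiased output.
# von_neumann_debias([1, 1, 1, 0, 0, 1, 1, 1]) returns [1, 0]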

1.10 The Structure of Secure RNG Implementations


The components of a general secure RNG include the functional parts (entropy source,
extractor, CS-PRNG) typically along with self test features such as BIST (Built In Self
Test) and OHT (Online Health Test). See Figure 1.6.

Figure 1.6: RNG Structure.

1.10.1 Point A, the Raw Data

In this structure, the entropy source outputs unprocessed entropic data at point A,
and this data is then fed from point A into the entropy extractor.
The data at point A is sometimes named raw data, because it has not yet been
post processed by the extractor or PRNG. Thus, the data at the output C is sometimes
referred to as cooked data.
It is important that the input requirements of the extractor are met by the output
properties of the entropy source at point A. For example, an AES-CBC-MAC extractor
will typically require the input data to have a min entropy of > 0.5 bits per bit. A mul-
tiple input extractor would require each of the inputs to be independent of each other
and also have some level of min-entropy. The quality of the data at point A generally is
described in terms of min-entropy and basic statistics such as mean, serial correlation,
and stationarity.
This raw data is also fed to the online health test, which is typically a statistical
test to ensure the entropy source is correctly functioning. This may run full time or on
demand.
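As a hedged illustration of those basic statistics, the sketch below computes the mean, the lag-1 serial correlation and a simple most-common-value min-entropy estimate for a block of raw bits taken at point A; it is an informal check, not a substitute for the SP800-90B estimators.

import math

def raw_bit_stats(bits):
    # Basic statistics for a list of raw 0/1 samples taken at point A.
    n = len(bits)
    mean = sum(bits) / n

    # Lag-1 serial correlation coefficient of adjacent samples.
    num = sum((bits[i] - mean) * (bits[i + 1] - mean) for i in range(n - 1))
    den = sum((b - mean) ** 2 for b in bits)
    scc = num / den if den else 0.0

    # Most-common-value min-entropy estimate, in bits per bit.
    p_max = max(mean, 1.0 - mean)
    h_min = -math.log2(p_max)
    return mean, scc, h_min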

1.10.2 Points D and E, the Health Status Feedback

The results of the online testing would typically be made available at the RNG’s main
interface via point E, but they can (and should) also be used, via point D, to inform the
extractor whether the quality of the data is meeting its input requirements.

1.10.3 Point B, the Seed Quality Data

The output of the extractor will generally be close to full entropy if the input require-
ments of the extractor are met. In the case of single input extractors, it is possible
to show only computational prediction bounds, rather than min-entropy, when the
input is only guaranteed to have a certain level of min-entropy [7]. It has been shown
mathematically that getting to 100% full entropy is impossible, but it is possible to
get arbitrarily close to full entropy. The gap between the actual min-entropy and full
entropy in extractor theory is referred to as ε (epsilon).
The data at point B might constitute the output of the RNG, or it might provide the
seed input to the PRNG to initialize or update the PRNG’s state to be nondeterministic.

1.10.4 Point C, the PRNG Output

The PRNG typically takes in a seed from point B, either initially or periodically, to in-
ject nondeterminism into its state. It will then proceed to generate outputs. Each step
employs an output function to generate an output from the current state and also to
generate the next state from the current state.
The data at the PRNG output stage generally cannot be treated as having an
amount of min-entropy. Instead, it has a certain computational prediction bound.
This is because by observing past values from the output, there is always a brute
force algorithm that can search through all the possible values of the internal state
to find the state that would have generated the previous values. A secure PRNG algo-
rithm ensures that the amount of computation required is too high to be practically
implemented.
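To make the brute force argument concrete, the toy sketch below recovers the internal state of a deliberately tiny 16 bit generator from its observed outputs; the generator and its constants are invented purely for illustration, and the point of a secure PRNG is that the equivalent search over its real state space is computationally infeasible.

def tiny_prng(state):
    # A toy 16 bit linear congruential generator; constants are illustrative only.
    return (25173 * state + 13849) & 0xFFFF

def recover_state(observed_outputs):
    # Search all 2**16 possible states for one that reproduces the observed outputs.
    for candidate in range(1 << 16):
        state = candidate
        matched = True
        for out in observed_outputs:
            state = tiny_prng(state)
            if state != out:
                matched = False
                break
        if matched:
            return candidate  # knowing this, all future outputs can be predicted
    return None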
The current progress towards quantum computers makes it relevant to also con-
sider the computational prediction bounds against a quantum computer. Some RNGs,
such as number-theoretic RNGs like Blum Blum Shub, are completely insecure against
a quantum computer. Conventional cryptographic RNGs based on block ciphers, hashes
or HMACs tend to have their security strength reduced to the square root of
the security against a classical computer. So, the industry is in the process of doubling
the key sizes of RNGs to prepare for future quantum computer threats.

1.11 Pool Extractor Structures


The most basic way of viewing an entropy extractor is that it takes in n bits and outputs
m bits of data which are passed onto the next stage.
A more useful structure is to have the extractor include its earlier result in the next
result. For example, if you were using AES-CBC-MAC with a 128 bit block size and a
4:1 extractor ratio, the algorithm would be: given inputs

    in_a, in_b, in_c and in_d, each of length 128 bits,

the output is

    out_x = AES_CBC_MAC(key, in_a | in_b | in_c | in_d)    (1.1)

So 512 bits of input data are compressed down to 128 bits of extracted data.
If the extractor maintained a pool of 128 bits, it could use AES-CBC-MAC to mix
the input raw entropy into the pool by including the pool in the AES-CBC-MAC cal-
culation. This is simply achieved by including the previous output in the input to the
CBC MAC algorithm:

    out_x = AES_CBC_MAC(key, out_{x-1} | in_a | in_b | in_c | in_d)

So, if there were periods of low entropy from the entropy source, the pool would
maintain the complexity of the output of the extractor. Also, the extractor could keep
running while the raw entropy data is flowing in, until the PRNG needed a seed, so
the extractor ratio would be as large as possible for each reseeding.
A third enhancement available with a pool structure is that when the OHT tags
a set of input bits as not meeting the test requirement, they can still be mixed into
the pool so that any residual entropy is not lost. The AES-CBC-MAC algorithm can be
continued until a required number of healthy input samples are mixed into the pool.
This forms an active response to attacks on the entropy source that try to reduce the
entropy of the source. As the number of unhealthy samples increases, the number of
samples mixed into the pool increases.
The AES-CBC-MAC can be broken down into multiple single AES operations. The
pseudocode in Listing 1.5 shows this pool method in terms of an individual AES oper-
ation executed once per loop.

Listing 1.5: Pool Extractor Pseudocode


extractor_ratio = 4
pool = 0                      # or the previous pool value, if one exists
healthy_samples = -1
while healthy_samples < extractor_ratio:
    if healthy_samples == -1:
        # First pass: mix the previous pool value into the CBC-MAC chain
        pool = AES(key, pool)
        healthy_samples += 1
    else:
        # Mix in the next raw input sample, whether healthy or not
        pool = AES(key, pool ^ input)
        if sample_was_healthy:
            healthy_samples += 1

This will continue to extend the CBC-MAC until enough healthy samples are re-
ceived and will also mix in the previous pool value. The extensibility of CBC-MAC is
considered a weakness when used as a MAC, but in this context as an extractor, it is a
strength.
The Intel DRNG exhibits this behavior. The reseed of the AES-CTR-DRBG requires
256 bits, so it performs two parallel CBC-MACs, each generating 128 bits of seed data
to make 256 bits.
Similar mechanisms could be implemented with hashes or HMAC, both of which
are suitable extractor algorithms.
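As a concrete companion to Listing 1.5, the sketch below implements the same pool construction with a real AES primitive from the Python cryptography package. The function names, the health test callback and the way samples are delivered are assumptions made for this sketch; it is not the Intel implementation, just an illustration of the chained CBC-MAC idea.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_block(key, block):
    # Single AES-128 encryption of one 16 byte block (ECB of a single block).
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pool_extract(key, prev_pool, get_sample, sample_is_healthy, ratio=4):
    # CBC-MAC style pool extractor: chain in the previous pool value, then
    # keep absorbing raw 128 bit samples until `ratio` healthy samples have
    # been mixed in. Unhealthy samples are still absorbed but not counted.
    pool = aes_block(key, prev_pool)
    healthy = 0
    while healthy < ratio:
        sample = get_sample()              # 16 bytes of raw entropy
        pool = aes_block(key, xor_bytes(pool, sample))
        if sample_is_healthy(sample):
            healthy += 1
    return pool                            # 128 bit seed-quality output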

1.12 What Key to Use?


In Chapter 3 on entropy extraction, there is a discussion of seeded extractors and how
getting a uniform seed is a chicken-and-egg problem. How can you get a uniform seed
from an extractor when the extractor needs a uniform seed at its input and uniform
data is not available from the physical world?
So what is the right key to use for CBC-MAC extractors? The key needs to be inde-
pendent of the raw entropy coming in, but it may also be static and public.
A value like all zeroes or all ones would not be independent of the failure modes of
an entropy source, or of the output of entropy sources exhibiting high bias.
The approach taken in Intel CPUs was to encrypt the value 1 with the key 0 using
AES. This value should have no algorithmic or statistical connection to raw data com-
ing in from the entropy source. This approach was discussed and shown to be safe
in [21].
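A short sketch of that derivation is shown below, using the Python cryptography package; it simply applies the AES block function with an all-zero key to the 128 bit value 1, and the variable names are my own.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

zero_key = bytes(16)                     # AES-128 key of all zeroes
one_block = (1).to_bytes(16, "big")      # the value 1 as a 128 bit block
enc = Cipher(algorithms.AES(zero_key), modes.ECB()).encryptor()
cbc_mac_key = enc.update(one_block) + enc.finalize()
# cbc_mac_key is a fixed, public constant with no algorithmic or statistical
# connection to the raw data entering the extractor.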
The approach taken in the TLS protocol, with the HKDF extractor, is to use a key
of 0 when a random key is not available.
This same rationale could be used with other seeded constructs. But first, the
mathematical proof of the properties of the extractor should be consulted to check
that such a seed will suffice.

1.13 Multiple Input Extractors


Multiple input extractors, as the name suggests, take multiple independent inputs. It
is necessary that the inputs be independent. Consider what would happen if two inputs did not need to be
independent: a single input of dependent bits could then be split into two sets
of data and would meet the input requirements, and this splitting could be done inside the extractor.
So, the only reason to ask for multiple inputs is that those inputs are independent.
Figure 1.7 shows the way a multiple input extractor takes input from multiple in-
dependent entropy sources and outputs a single stream of random numbers that are
very close to perfectly uniform, provided that the quality of the input entropy is high
enough. Each multiple input extractor is designed for a specific number of inputs. Two
and three input extractors are shown in Figure 1.7, because Section 3.10 goes into de-
tail on two multiple input extractors, the BIW three input extractor and the 2-EXT two
input extractor.

Figure 1.7: Three Input Extractor and Two Input Extractor.


Multiple input extractors tend to use basic arithmetic or algebraic methods rather than
rely on cryptographic primitives such as block ciphers or hashes. This leads to such
extractors being small and efficient to implement compared to cryptographic extrac-
tors. However, the cost is that multiple independent entropy sources are needed. So,
the savings in implementation costs of the extractor need to be factored against the
additional implementation costs of the entropy sources.
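To give a flavour of how small such an extractor can be, the sketch below implements the arithmetic at the heart of the BIW construction, a·b + c over a finite field, using the toy field GF(2^8) with the AES reduction polynomial. A real implementation would work in a much larger field and take full-width samples from three separate sources, so this is an illustration of the operation only, not a usable extractor.

def gf256_mul(a, b):
    # Carry-less multiplication in GF(2^8), reduced by x^8 + x^4 + x^3 + x + 1.
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
    return result

def biw_style_extract(x, y, z):
    # Three-input extractor core: x*y + z over GF(2^8). The three byte-sized
    # inputs must come from three mutually independent entropy sources.
    return gf256_mul(x, y) ^ z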
2 Entropy Sources
Entropy sources are physical systems that take noise from the environment and turn
it into random bits. Usually these are electrical circuits. There are some examples of
partly mechanical entropy sources, but these are not generally useful, except as a cu-
riosity, because they tend to be very slow and unreliable. Therefore, we will focus on
electronic entropy sources in this chapter.
Ideal binary random numbers are IID (independent and identically distributed)
and have a bias of 0.5; so, 1 is as likely as 0.
However, entropy sources can never achieve this property directly. The physical
world from which noise is sampled only yields data that has some level of correlation,
bias, and nonstationarity. Entropy extractor algorithms are, therefore, used to convert
the partially entropic output of an entropy source into data that closely approximates
full entropy IID data.
In this chapter, we look at entropy sources. In the next chapter, we look at entropy
extractors, which take partially entropic data from an entropy source and convert it to
almost fully entropic data.

2.1 Ring Oscillator Entropy Sources


A common type of entropy source is the ring oscillator. This is a circuit that has an odd
number of inverters in a ring, so that when powered, it oscillates. The period of the os-
cillation varies over time as a result of noise in the circuit, and so, periodically, sam-
pling the phase of the circuit at some point on the ring should yield partially entropic
data. The longer the time between the samples, the greater the amount of phase noise
contributing to the values that are read, so the statistical properties of the data im-
prove with longer sampling intervals. A simplified circuit for a ring oscillator is shown
in Figure 2.1.

Figure 2.1: Ring Oscillator Circuit.

The inverter, represented by a triangle with a circle on the output, takes a binary value,
0 or 1, and outputs the opposite value. With 0 on the input, it will output 1. With 1 on
the input, it will output 0.


Figure 2.2: Ring Oscillator Circuit Values.

In Figure 2.2, the binary value alternates from 0 to 1 to 0 as it crosses the inverters.
However, since there is an odd number of inverters, there must be a discontinuity point
where there is an input value that equals the output value. The output of that inverter
gate, therefore, changes to the opposite value. The discontinuity, as a result, moves
on to the next inverter. This discontinuity carries on cycling around the circuit. If you
look at the waveform at any particular point, then you will see that the value oscillates
up and down, changing once for each time the discontinuity makes a trip around the
circuit.

Figure 2.3: Ring Oscillator Waveform.

The upper trace in Figure 2.3 shows what you would see if you observed the voltage of
a point in the ring oscillator circuit changing over time, using an oscilloscope.
The lower trace shows what you would get if you were to take multiple traces and
overlap them. Aligning the left-hand edge, you would see the uncertainty in the loop
time, since the timing of the traces will vary as a result of noise randomly affecting
the time it takes for a signal to pass through a gate. The second edge shows a small
amount of variation. The size of this variation usually follows a normal distribution.
We will call the standard deviation of this loop time uncertainty σ_t. The diagram is an exaggerated view; it typically
takes thousands of clock periods for a full cycle of timing uncertainty to accumulate.
The uncertainty in the next edge is the sum of the previous uncertainty and the
same uncertainty in the current cycle. As time progresses, the uncertainty of the edge
timing increases until the uncertainty is larger than a single loop period.
The variance of two normal random variates added together is the sum of the two
variances of the two variates. So, if the sampling period t_sample is equal to N average
loop periods, then

    σ_t_sample² = ∑_{i=1}^{N} σ_t²,

    σ_t_sample = √( ∑_{i=1}^{N} σ_t² ) = √N · σ_t.
So, with a ring oscillator circuit, measure the timing uncertainty. Then find which
value of N will lead to σ_t_sample being several times greater than the loop time t. You can
set the sample period t_sample = N·t, and you can expect the value of a point in the circuit
sampled every t_sample seconds will appear random.
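A worked illustration of that sizing rule is sketched below. Under the assumption that the per-loop jitter adds in quadrature, the accumulated jitter after N loops is √N · σ_t, so the smallest N that gives k loop periods of accumulated jitter is N = ⌈(k·t/σ_t)²⌉. The numbers in the example are invented for illustration.

import math

def sample_periods_needed(t_loop, sigma_t, k=4):
    # Smallest N such that sqrt(N) * sigma_t >= k * t_loop.
    return math.ceil((k * t_loop / sigma_t) ** 2)

# Invented example: a 1 ns loop period with 10 ps of jitter per loop, aiming
# for four loop periods of accumulated jitter before each sample.
N = sample_periods_needed(t_loop=1e-9, sigma_t=10e-12, k=4)
# N == 160000, so t_sample = N * t_loop = 160 microseconds in this example.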

2.1.1 Ring Oscillator Entropy Source Problems

Ring oscillator entropy sources have been popular mostly because the circuit appears
very easy to implement. However, they have been found to have some issues. The out-
put of a ring oscillator is serially correlated. As the sample period tsample increases, the
serial correlation reduces, but it never goes away completely. This has been a problem
when the von Neumann debiaser or the Yuval Perez debiaser is used as the entropy
extractor. It is a requirement of those two debiaser algorithms that the input data sam-
ples be independent of each other. However, samples from serially correlated data are
not independent and so may lead to a lower output entropy than is expected from a
Von Neumann or Yuval Perez debiaser. The Von Neumann and Yuval Perez debiaser
algorithms are described in Sections 3.4 and 3.5.
A second problem is that ring oscillators have been shown to be vulnerable to
injection attack, where a periodic signal is injected via the power supply or via an
electromagnetic coupling device. This can lead to the ring oscillator locking to the
frequency of the injected signal, and so, the loop time becomes predictable and the
entropy is lost.
For example, at the CHES 2009 Conference, the paper [12] was presented show-
ing a frequency injection attack on a chip and pin payment card, where they ren-
dered the random data predictable and were therefore able to cryptographically attack
the authentication protocol. The paper is available at http://www.iacr.org/archive/ches2009/57470316/57470316.pdf.
There are some common design mistakes with ring oscillator entropy sources.
A number of ring oscillator implementations implement multiple ring oscillators and
combine the outputs by XORing them together on the assumption that the sample out-
puts from the rings are independent. See Figure 2.4.
The problem with this is that it makes it more susceptible to injection attacks, be-
cause with multiple oscillators there tends to be multiple modes in which the oscilla-
tors will lock with each other. When sequences that are locked to each other in phase
or frequency are XORed together, they cancel and make a low entropy repeating se-
quence. The more oscillators, the more the number of opportunities for an injection
attack to work. Thus, if you are implementing a ring oscillator entropy source, then
the best number of loops to implement is one. However, if you need higher throughput,
multiple loops can be used. An appropriate way to
combine the outputs of multiple loops is to feed them independently into an entropy
extractor that is tolerant of correlation between the input bits, as shown in Figure 2.5.
Figure 2.4: Multiple Ring Oscillators Poorly Combined.

Figure 2.5: Multiple Ring Oscillators Well-Combined.

In a secure RNG, you would also need an online health test per loop. It would make
sense to have tests that both check the loops have not failed and test that the loops
are not correlated with each other, so they can detect an injection attack or loop lock-
ing failure. Chapter 9 goes into greater detail on online health testing algorithms in
general and correlation detection algorithms in particular.

2.2 Metastable Phase Collapse Ring Oscillator Entropy Sources


There is no commonly used name for this sort of entropy source; I call it the Metastable
Phase Collapse Ring Oscillator Entropy Source. I am not aware of a published version
of this type of entropy source. It represents an attractive alternative to ring oscillator
sources, using similar technology in a different configuration.
A paper was published with a related, but different, idea by Samsung engineers.
We will take a look at that design in Section 2.3.
The Metastable Phase Collapse Ring Oscillator Entropy Source involves a ring of
ring oscillators. Each ring oscillator is of a different length, so that they oscillate at
different frequencies.
Two properties of the ring sizes will help prevent locking between the rings.
First, make the size of the rings different enough that one loop cannot simply shift
its frequency a little bit to match the other.
Second, ensure the LCM (lowest common multiple) of the two loop frequencies is
large compared to the loop periods, so they will not easily find a common multiple of
their base frequencies at which they will both oscillate. Choosing frequencies that are
relatively prime might hypothetically have this property, since the LCM of two different
primes p and q is p×q. However, the frequencies are not bound to integer relationships,
so some extensive testing should be employed to ensure that the circuit cannot be
coaxed to lock its loops together using injection attacks.
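The sketch below screens a candidate set of ring lengths along those two lines, using the lengths 13, 17 and 19 that appear later in this section. Gate count is only a rough proxy for frequency, and the threshold used is arbitrary, so this is a screening aid rather than a guarantee against locking.

from math import gcd

def ring_length_check(lengths, min_ratio_gap=0.1):
    # For each pair of (odd) ring lengths, check that the relative size gap is
    # at least min_ratio_gap and report the pairwise GCD and LCM.
    report = []
    for i in range(len(lengths)):
        for j in range(i + 1, len(lengths)):
            a, b = lengths[i], lengths[j]
            gap_ok = abs(a - b) / min(a, b) >= min_ratio_gap
            g = gcd(a, b)
            lcm = a * b // g
            report.append((a, b, gap_ok, g, lcm))
    return report

# ring_length_check([13, 17, 19]) shows every pair is coprime (GCD of 1),
# with pairwise LCMs of 221, 247 and 323.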
A control signal causes the rings to switch between being several independent
rings and being one large ring.
Here, we will take a look at what happens with multiple discontinuities in a
ring oscillator in order to understand how this entropy source behaves. To simplify the
diagrams, we will shrink the inverter gates to arrows as shown in Figure 2.6. A filled
arrow represents an inverter gate that is outputting logic zero, and a hollow arrow represents
an inverter gate that is outputting logic one. The oscillator loop is shown as a chain of
these inverter gates, and the number of inverter gates is indicated in the middle. With
an odd number of inverter gates, there will be a travelling discontinuity as discussed
in Section 2.1. This is shown as the dot on the loop where two gates with the same
output value meet.

Figure 2.6: Single Ring Oscillator.

It is not possible to have two discontinuities in a ring with an odd number of gates. Try
stringing an odd number of gates together in alternating sequence with two disconti-
nuities; you will see that one of the discontinuities will cancel out.
However, you can have an odd number of discontinuities. Figure 2.7 shows a ring
of 17 gates with three discontinuities and a ring of 19 gates with five.
When two discontinuities meet they will cancel each other out. So, the odd num-
ber of discontinuities will be maintained. Over time, the discontinuities in a ring os-
cillator with multiple discontinuities will collide and cancel out until there is only a
single discontinuity left. You can consider the motion of the discontinuities as follow-
ing a random walk relative to a discontinuity travelling at exactly the average loop
frequency.

Figure 2.7: Multiple Discontinuities in a Ring Oscillator.

Each passage through a gate will be a little slower or a little faster, depend-
ing on the noise in the system. Adding these together over time amounts to a random
walk. Since the discontinuities are performing a one dimensional random walk on a loop instead of
an infinitely long line, once the combined distance covered by the random walks of two disconti-
nuities adds up to the loop size, they will collide. Thus, multiple
discontinuities on the same loop will very quickly walk into each other and cancel.
The design of the entropy source uses an odd number of ring oscillators so that
when connected in a large loop there are still an odd number of gates in the large loop.
Multiplexors configure the loops either as individual loops or as one large loop.
In Figure 2.8, a configuration of three ring oscillators connected via multiplexors
is shown.

Figure 2.8: Metastable Ring Oscillator Circuit.

With the control signal high, the loops operate independently, with the output of the
loop fed back into the input. With the control signal low, the loops are joined into
one big loop with 49 elements, as in Figure 2.9.

Figure 2.9: Large Oscillator Configuration of Metastable Ring Oscillator.

The three discontinuities that were
present in the three oscillators are now on the main ring and these will quickly cancel
to a single discontinuity. It is also possible, at the points in the ring where the multiplexors
sit, for the adjoining rings to be in phase when they attach. So, the switch of the multiplexors
from the three ring mode to the large ring mode can also introduce two extra
discontinuities into the large ring.
While oscillating as independent rings, the phase of the rings should be to some
extent independent of each other. On the switch to the single large ring, those multi-
ple discontinuities are all present on the ring. This state of having multiple disconti-
nuities in the loop is a metastable state. There is a vanishingly small probability that
the discontinuities travel around the loop at exactly the same speed and never col-
lide. In reality, over a short period of time, noise will vary the timing of the travel of
the discontinuities and the multiple phases will collapse together into one phase. The
transition from multiple phases to a single phase is metastable, and noise drives the
collapse to a stable state with a single discontinuity, making the timing of the collapse
and the resulting phase of the slow oscillation nondeterministic.
In Figure 2.10, we see the fast oscillation measured at the output of the 13 gate loop
when the control signal is high, and we see it switch to a low frequency oscillation of
the large loop when the control signal is low. The state of X is sampled by the output
flip-flop on the rising edge of the control signal, which comes at the end of the slow
oscillation period, by which time the metastable state of the loop that started with at
least three discontinuities has had time to resolve to a state with one discontinuity.

Figure 2.10: Signal Trace of Metastable Ring Oscillator.
So, the circuit operates with a low frequency square wave on the control signal,
switching the circuit between the two modes. The phase of the circuit at a chosen point
is captured a short time after the switch to the large ring mode. This is the random bit
and so the circuit generates 1 bit for each cycle of the control signal.
An attraction of ring oscillator RNGs is that they can be built from basic logic com-
ponents and are, therefore, easily portable between different silicon processes. The at-
tractions of metastable entropy sources include their robustness from attack, that they
are easily mathematically modeled and exhibit a high measured min entropy and high
rate of entropy generation.
The Metastable Phase Collapse Ring Oscillator Entropy Source combines the ben-
efits of the ring oscillator with some of the benefits of a metastable source, albeit with-
out the high speed of single latch metastable sources.

2.3 Fast Digital TRNG Based on Metastable Ring Oscillator


“Fast Digital TRNG Based on Metastable Ring Oscillator” is the title of the pa-
per [24], available at https://www.iacr.org/archive/ches2008/51540162/51540162.pdf.
This paper describes a similar structure to the multiple ring oscillator structure
in Section 2.2. However, in place of the ring oscillator loops, there is a single inverter
gate that is switched between being in self-feedback mode and being in a loop of all the
inverters. This is directly equivalent to the phase collapse source in Section 2.2 with
all loops having only 1 inverter, as shown in Figure 2.11.

Figure 2.11: Fast Digital TRNG Based on Metastable Ring Oscillator.

In a silicon implementation, an inverter gate connected directly back on itself will not
behave as a ring oscillator, since the loop time is too short for the signals to make full
transitions between low and high logic states. Instead, the voltage rests somewhere in
the middle between the high voltage and low voltage as shown in Figure 2.12.
This is a stable state while the feedback loop is connected, but when the loop is
broken and the inverters are all connected in a big loop, the state with the voltage in
the middle of the range is metastable, so the voltages at each node move to a logic 1
or 0, driven by noise; this ultimately resolves to a single loop with a single travelling
discontinuity. The resulting timing diagram is as shown in Figure 2.13.

Figure 2.12: Single Inverter Feedback.

Figure 2.13: Signal Trace of Metastable Ring Oscillator With Single Inverter Feedback.
There is a problem with connecting an inverter directly back on itself in a fast
silicon CMOS process, as in Figure 2.12. When the output voltage is in the middle of
the range, there is what is called a crowbar current passing from the power supply
input to the 0 V connection. The P transistors connected to the power supply and the
N transistors connected to the 0 V line will both be partially switched on. In a normal
configuration, either one or the other transistor is switched off, so current does not
flow from the power supply to the 0 V line. This crowbar current will consume excess
power and will limit the device lifetime. Therefore, it is commonly forbidden in silicon
design rules to connect an inverter in that manner. However, there are more complex
circuit topologies, where additional transistors limit the crowbar current. Therefore,
such a circuit would need some custom circuit design, rather than using standard logic
inverter gates.

2.4 Feedback Stabilized Metastable Latch Entropy Source


The Feedback Stabilized Metastable Latch Entropy Source is possibly the most robust
and reliable form of entropy source for implementation on a silicon chip. Its favorable
properties include:
– High performance.
– Low power consumption.
– Easily modeled min-entropy.
– Very reliable (e. g., zero known failures in over 400 million instances in Intel chips
delivered).
– Robust against a number of attack strategies.
– Relatively easy to port between silicon processes.

These are the reasons that we chose to use this sort of entropy source in mass produc-
tion after it was first developed.
The basic idea is that if you build two inverters into a stable back to back config-
uration, as in Figure 2.14, the two nodes, A and B, will resolve either to a 0,1 stable
state or a 1,0 stable state when powered on. The choice of 0,1 or 1,0 is driven by noise,
because the two gates are identical. Once the noise kicks the node voltages a little
bit in one direction, the logic gates will operate as open loop amplifiers and drive the
voltages to the state toward which the noise pushed it.

Figure 2.14: Back-to-Back Inverters.

In order to cycle the generator quickly, switches, implemented with transistors, can
be put on nodes A and B or in the power supplies of the gates, to force the gates to one
of the two unstable states, 0,0 or 1,1.
If you build such a circuit, you will find that it tends to always output the same
binary value. This is because when you build a pair of gates like this in silicon, one
usually turns out to be stronger than the other. So, when resolving from a 1,1 or 0,0
state to a 0,1 or 1,0 state, one gate always wins and, therefore, overcomes the ability of
noise to drive the resolution.
This can be modeled by dropping a ball on a hill. See Figures 2.15 and 2.16.
The ball will either fall to the left or the right and the resulting output, 0 or 1, will
depend on which way it rolls.
The hill represents the transfer function of the metastable circuit. The ball can
land on it at the top (equivalent to the 1,1 state) and stay there indefinitely. But the
probability of it remaining on the top reduces exponentially with increasing time, and
very quickly noise will drive the ball to fall one way or the other.
The position of the hill is fixed and it represents the relative strengths of the logic
inverters. If one gate is stronger the hill will be further to the left. If the other is stronger,
the hill will be further to the right. Over a large population of devices, the position of
the hill will take on a Gaussian distribution, but in any single device the position will
remain in the same spot.
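A small simulation of the ball-on-hill picture is sketched below. It assumes, purely for illustration, that each resolution is decided by a Gaussian noise sample added to a fixed device offset; the offset plays the role of the hill position and the sign of the sum decides which way the ball falls.

import math
import random

def latch_bias(offset, noise_sigma, trials=100000, seed=1):
    # Estimate P(output = 1) for a latch whose device mismatch shifts the
    # hill by `offset`, with Gaussian noise of standard deviation `noise_sigma`.
    rng = random.Random(seed)
    ones = sum(1 for _ in range(trials)
               if rng.gauss(0.0, noise_sigma) + offset > 0.0)
    return ones / trials

def min_entropy_per_bit(p_one):
    return -math.log2(max(p_one, 1.0 - p_one))

# A mismatch of only half the noise sigma already gives a noticeable bias:
# latch_bias(offset=0.5, noise_sigma=1.0) is about 0.69, for roughly 0.53
# bits of min-entropy per bit, which is why a stabilization loop is needed
# to keep the latch balanced.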
Random documents with unrelated
content Scribd suggests to you:
The town itself is—I feel assured—the kind of town that Jack
reached when he climbed to the top of the Beanstalk, for the
entrance to Roquebrune is precisely the sort of entrance one would
expect a beanstalk to lead to. In one kitchen full of brown shadows,
in a side street near the Rue Pié, is an ancient cupboard in which,
almost without question, Old Mother Hubbard kept that hypothetical
bone which caused the poor dog such unnecessary distress of mind;
while in a wicker cage in the window of a child’s bedroom was the
Blue Bird, singing as only that bird can sing.
As there are still wolves in the woods about Roquebrune and as
red hoods are still fashionable in the Place des Frères it is practically
certain that Little Red Riding Hood lived here since it is difficult to
imagine a town that would have suited her better. As for Jack the
Giant Killer it is beyond dispute that he came to Roquebrune, for the
very castle he approached is still standing, the very gate is there
from which he hurled defiance to the giant as well as the very stair
he ascended. Moreover there is a room or hall in the castle—or at
least the remains of it—which obviously no one but a giant could
have occupied.
As time goes on archæologists will certainly prove, after due
research, that Roquebrune is the City of Peter Pan. There is no town
he would love so well; none so adapted to his particular tastes and
habits, nor so convenient for the display of those domestic virtues
which Wendy possessed. No one should grow up in this queer city,
just as no place in a nursery tale should grow old.
ROQUEBRUNE: THE EAST GATE.

ROQUEBRUNE: THE PLACE DES FRÈRES.


Peter Pan is not adapted to the cold, drear climate of England.
He stands, as a figure in bronze, in Kensington Gardens with
perhaps snow on his curly head or with rain dripping from the edge
of his scanty shirt. He should be always in the sun, within sight of a
sea which is ever blue and among hills which are deep in green. He
could stride down a street in Roquebrune clad—as the sculptor
shows him—only in his shirt without exciting more than a pleasant
nod, but in the Bayswater Road he would attract attention. He is out
of place in a London park in a waste of tired grass dotted with iron
chairs which are let out at a penny apiece. Those delightful little
people and those inquisitive animals who are peeping out of the
crevices in the bronze rock upon which he stands would flourish in
this sunny hill town, for there are rocks in the very streets among
which they could make their homes.
Then again Captain Hook would enjoy Roquebrune. It is so full of
really horrible places and there are so many half-hidden windows out
of which he could scream to the terror of honest folk. The pirates
too would be more comfortable in this irregular city, for it is near the
sea and close to that kind of cave without which no pirate is ever
quite at ease. Moreover the Serpentine affords but limited scope to
those whose hearts are really devoted to the pursuit of piracy and
buccaneering.
So far I do not happen to have met with a pirate of Captain
Hook’s type within the walls of Roquebrune; but, late one afternoon
when the place was lonely I saw a bent man plodding up in the
shadows of the Rue Mongollet. He was a sinewy creature with
brown, hairy legs. I could not see his face because he bore on his
shoulders a large and flabby burden, but I am convinced that he was
Sindbad the Sailor, toiling up from the beach and carrying on his
back the Old Man of the Sea.
XXXIII
THE LEGEND OF ROQUEBRUNE

T
HE position of Roquebrune high up on the hillside appears—as
has already been stated—to be precarious. It seems as if the
little city were sliding down towards the sea and would,
indeed, make that descent if it were not for an inconsiderable ledge
that stands in its way. It can scarcely be a matter of surprise,
therefore, that there is a legend to the effect that Roquebrune once
stood much higher up the hill, that the side of the mountain broke
away, laying bare the cliff and carrying the town down with it to its
present site, where the opportune ledge stayed its further
movement.
Like other legendary landslips this convulsion of nature is said to
have taken place at night and to have been conducted with such
delicacy and precision that the inhabitants were unaware of the
“move.” They were not even awakened from sleep: no stool was
overturned: no door swung open: the mug of wine left overnight by
the drowsy reveller stood unspilled on the table: no neurotic dog
burst into barking, nor did a cock crow, as is the custom of that bird
when untoward events are in progress. Next morning the early riser,
strolling into the street with a yawn, found that his native town had
made quite a journey downhill towards the sea and had merely left
behind it a wide scar in the earth which would make a most
convenient site for a garden. Unhappily landslips are no longer
carried out with this considerate decorum, so the gratitude of
Roquebrune should endure for ever.
This is one legend; but there is another which is a little more
stirring and which has besides a certain botanical interest. At a
period which would be more clearly defined as “once upon a time”
the folk of Roquebrune were startled by a sudden horrible rumbling
in the ground beneath their feet, followed by a fearful and sickly
tremor which spread through the astonished town.
Everybody, clad or unclad, young or old, rushed into the street
screaming, “An earthquake!” It was an earthquake; because every
house in the place was trembling like a man with ague, but it was
more than an earthquake for the awful fact became evident that
Roquebrune was beginning to glide towards the sea.
People tore down the streets to the open square, to the Place
des Frères, which stands on the seaward edge of the town. The
stampede was hideous, for the street was unsteady and uneven. The
very road—the hard, cobbled road—was thrown into moving waves,
such as pass along a shaken strip of carpet. To walk was impossible.
Some fell headlong down the street; others crawled down on all
fours or slid down in the sitting position; but the majority rolled
down, either one by one or in clumps, all clinging together.
The noise was fearful. It was a din made up of the cracking of
splintered rock, the falling of chimneys, the rattle of windows and
doors, the banging to and fro of loose furniture, the crashing of the
church bells, mingled with the shouts of men, the prayers of women
and the screams of children. A man thrown downstairs and clinging
to the heaving floor could hear beneath him the grinding of the
foundations of his house against the rock as the building slid on.
The houses rocked from side to side like a labouring ship. As a
street heeled over one way the crockery and pots and pans would
pour out of the doors like water and rattle down the streets with the
slithering knot of prostrate people.
Clouds of dust filled the air, together with fumes of sulphur from
the riven cliff. Worst of all was an avalanche of boulders which
dropped upon the town like bombs in an air raid.
The people who clung to the crumbling parapet of the Place des
Frères saw most; for they were in a position which would correspond
to the front seat of a vehicle. They could feel and see the town
(castle, church and all) skidding downhill like some awful machine,
out of control and with every shrieking and howling brake jammed
on.
They could see the precipice ahead over which they must soon
tumble. Probably they did not notice that at the very edge of the
cliff, standing quite alone, was a little bush of broom covered with
yellow flowers.
The town slid on; but when the foremost wall reached the bush
the bush did not budge. It might have been a boss of brass. It
stopped the town as a stone may stop a wagon. The avalanche of
rocks ceased and, in a moment, all was peace.
The inhabitants disentangled themselves, stood up, looked for
their hats, dusted their clothes and walked back, with unwonted
steadiness, to their respective homes, grumbling, no doubt, at the
carelessness of the Town Council.
They showed some lack of gratitude for I notice that a bush of
broom has no place on the coat of arms of Roquebrune.
XXXIV
SOME MEMORIES OF ROQUEBRUNE

R
OQUEBRUNE is very old. It can claim a lineage so ancient that
the first stirrings of human life among the rocks on which it
stands would appear to the historian as a mere speck in the
dark hollow of the unknown. Roquebrune has been a town since
men left caves and forests and began to live in dwellings made by
hands. It can boast that for long years it was—with Monaco and Eze
—one of the three chief sea towns along this range of coast. Its
history differs in detail only from the history of any old settlement
within sight of the northern waters of the Mediterranean.
The Pageant of Roquebrune unfolds itself to the imagination as a
picturesque march of men with a broken hillside as a background
and a stone stair as a processional way. Foremost in the column that
moves across the stage would come the vague figure of the native
searching for something to eat; then the shrewd Phœnician would
pass searching for something to barter and then the staid soldierly
Roman seeking for whatever would advance the glory of his imperial
city. They all in turn had lived in Roquebrune.
ROQUEBRUNE, SHOWING THE CASTLE.

As the Pageant progressed there would pass by the hectoring


Lombard, the swarthy Moor, a restless band of robber barons and
pirate chiefs, a medley of mediæval men-at-arms and a cluster of
lords and ladies with their suites. They all in turn had lived in
Roquebrune. Finally there would mount the stair the shopkeeper and
the artisan of to-day, who would reach the foot of Roquebrune in a
tramcar.
This Pageant of Roquebrune would impress the mind with the
great antiquity of man, with his ceaseless evolution through the ages
with an ever-repeated change in face, in speech, in bearing and in
garb. Yet look! Above the housetops of the present town a company
of swifts is whirling with a shrill whistle like that of a sword swishing
through the air. They, at least, have remained unchanged.
They hovered over the town before the Romans came. They have
seen the Saracens, the troopers of Savoy, the Turkish bandits, the
soldiers of Napoleon. Age after age, it would seem, they have been
the same, the same happy birds, the same circle of wings, the same
song in the air.
On the rock too are bushes of rosemary—“Rosemary for
remembrance.” The little shrub with its blue flower has also seen no
change. The caveman knew it when he first wandered over the hill
with the curiosity of a child. The centurion picked a bunch of it to
put in his helmet. The pirate of six hundred years ago slashed at it
with his cutlass as he passed along and the maiden of to-day
presses it shyly upon her parting lover.
In the Pageant of Roquebrune man is, indeed, the new-comer,
the upstart, the being of to-day, the creature that changes. The
swifts, the rosemary and the hillside belong to old Roquebrune.
The following are certain landmarks in the tale of the town.[47] It
seems to have belonged at first to the Counts of Ventimiglia, about
in the same way that a wallet picked up by the roadside would
belong to the finder. In 477 these Counts sold it to a Genoese family
of the name of Vento. In 1189 the town is spoken of as Genoese and
as being in the holding of the Lascaris. It was indeed for long a
stronghold of this house. About 1353 Carlo Grimaldi of Monaco
purchased Roquebrune from Guglielmo Lascaris, Count of
Ventimiglia, for 6,000 golden florins. The union of Monaco,
Roquebrune and Mentone thus accomplished lasted for 500 years
with unimportant intervals during which the union was for a moment
severed or reduced to a thread. From 1524 to 1641 the little town
was under the protection of Spain.
In 1848 Roquebrune, supported by Mentone, rebelled against the
Grimaldi, after suffering oppression at their hands for thirty-three
years, and declared itself a free town or, rather, a little republic. It so
remained until 1860 when it was united with France at the time that
Nice was ceded to that country. An indemnity of 4,000,000 francs
was paid to the Prince of Monaco in compensation for such of his
dominions as changed hands in that year.[48]
Roquebrune, of course, did not escape the disorders which befell
other towns in its vicinity. Its position rendered it weak, exposed it to
danger and made it difficult to defend. It was sacked on occasion,
notably by the Turks about 1543 after they had dealt with Eze in the
manner already described (page 127). It met with its most serious
sorrow in 1560 when it was assaulted, set on fire and gravely
damaged.
At this date the history of Roquebrune ended or at least changed
from that of a fortified place to that of a somewhat humble hill town.
So it sank, like Eze, into obscurity. The ruins that remain date from
this period and it is upon the wreckage of that year that the present
town is founded. The castle would appear to have been restored, for
the last time, in 1528 when the work was directed by Augustin
Grimaldi of Monaco and bishop of Grasse.
By the manner in which Roquebrune bore the stress of years and
faced the troubles of life the little town differed curiously from her
two neighbours of Monaco and Eze. Monaco and Eze were distinctly
masculine in character. They were men-towns. They were, by natural
endowment, very strong. They boasted of their strength and took
advantage of it. They fought everybody and every thing. They
seemed to encourage assault and indeed to provoke it. If hit they hit
back again. Their masculinity got them into frequent trouble.
Moreover they loved the sea and were masters of it.
Now Roquebrune was feminine. She was a woman-town. She
was constitutionally weak. She was little able to defend herself.
When hit she did not hit back again, because she was not strong
enough. She was bullied and was powerless to resent it. She was
afraid of the sea, as many women are, and cared not to venture on
it.
She showed her feminine disposition in more ways than one.
Roquebrune had been under the harsh tyranny of Monaco for a
number of years, but she endured her ill treatment in silence. She
bent her back to the blow. She crouched on the ground, passive and
apparently cowed. Women will endure oppression patiently and
without murmuring for a very long time. But a moment comes when
they revolt, and it is noteworthy that they revolt generally with
success, for the issue depends not only upon a masterly patience,
but upon the choice of the proper time to end it. A town of the type
of Eze would have had neither the patience to wait nor the instinct
to select the moment for an uprising. Eze, after a year or so of
hardship, would have flown at the throat of Monaco and would
probably have been annihilated in the venture.
Roquebrune waited a great deal more than a year or so. She
waited and endured for thirty-three years and when instinct told her
that the right time had come she turned upon the enemy, but not
with a battleaxe in her hand. She quietly placed herself under the
protection of Italy and when she had secured that support she
boldly declared herself a free city and a free city she remained until
she was received into the open arms of France.
An episode that happened in 1184 will, perhaps, still better
illustrate the feminine character of Roquebrune. In that year the
town was besieged by the Ventimiglians. The reason for the assault
is not explained by the historian. It is probable that mere want of
something to do led to this act of wickedness. One can imagine the
Count of Ventimiglia bored to the verge of melancholia by idleness
and can conceive him as becoming tiresome and unmanageable.
One morning, perhaps, a courtier would address his yawning lord
with the remark, “What! nothing to do, sir! Why not go and sack
Roquebrune?” To which the count, quite cheered, would reply, “An
excellent idea. Send for the captain.”
ROQUEBRUNE: RUE DE LA FONTAINE.
View of Castle.

Anyhow, whatever the reason, the count and his men, all in good
spirits, appeared before the walls of the town and prepared for an
assault. Now the state of affairs was as follows. Roquebrune, owing
to its position, could not withstand a siege. Its fall was inevitable
and merely a question of time. The governor would, however, be
compelled to defend the town to the very last. He would man the
walls and barricade the gates and, calling his company together in
the Place des Frères would remind them of their duty, would tell
them, with uplifted sword, that Roquebrune must be defended so
long as a wall remained; that the enemy must not enter the town
except over their dead bodies and that, in the defence of their
homes, they must be prepared to die like heroes.
Now things seemed rather different to the governor’s wife. She
was a shrewd and practical woman not given to heroics. She knew
that Roquebrune could not withstand a siege and must assuredly be
taken. She probably heard the stirring address in the square and did
not at all like her husband’s talk about dying to a man and about
people walking over dead bodies and especially over his body. She
knew that the more determined the resistance the more terrible
would be the revenge when the town was taken. She did not like
people being killed, especially her nice people of Roquebrune.
Besides, as she paced to and fro, a couple of children were tugging
at her dress and asking her why she would not take them out on the
hill-side to play as she did every morning.
So when the night came she put a cloak over her head, made her
way out of the town, found the enemy’s camp and told the count
how—by certain arrangements she had made—he could enter the
town without the loss of a man.
Before the day dawned the bewildered inhabitants, who had
been up all night fussing and hiding away their things, found that
the Ventimiglians were in occupation of the town; for, as the
historian says, “the besiegers entered the town without striking a
blow.”
Thus ended the siege of Roquebrune. It ended in a way that was
probably satisfactory to both parties and, indeed, to everyone but
the governor who had, without question, a great deal to say to his
lady on the subject of minding her own business.
As she patted the head of her smallest child and glanced at the
breakfast table she, no doubt, replied that she had minded her own
business.

[47] As to the name “Cabbé Roquebrune,” Dr. Müller


says that cabbé means a little cape (the Cap
Martin).
[48] Durandy, “Mon pays, etc., de la Riviera,” 1918. Dr.
Müller, “Mentone,” 1910. Bosio, “La Province des
Alpes Maritimes,” 1902.
THE ROMAN MILESTONES “603”.
A PIECE OF THE OLD ROMAN ROAD.
XXXV
GALLOWS HILL

T
HE hills that overshadow the coast road between Cap d’Ail and
Roquebrune are perhaps as diligently traversed by the winter
visitor as any along the Riviera, because in this area level roads
are rare and those who would walk far afield must of necessity climb
up hill.
The hill-side is of interest on account of the number of pre-
historic walled camps which are to be found on its slopes. These
camps form a series of strongholds which extends from Cap d’Ail to
Roquebrune. There are some seven of these forts within this range.
The one furthest to the west is Le Castellar de la Brasca in the St.
Laurent valley on the Nice side of Cap d’Ail. Then come L’Abeglio just
above the Cap d’Ail church, the Bautucan on the site of the old
signal station above the Mid-Corniche, the Castellaretto over the
Boulevard de l’Observatoire, Le Cros near the mule-path to La Turbie
and lastly Mont des Mules and Le Ricard near Roquebrune.
Of these the camp most easily viewed—but by no means the
most easy to visit—is that of the Mont des Mules, on the way up to
La Turbie. This is a bare hill of rough rocks upon the eastern
eminence of which is a camp surrounded by a very massive wall
built up of huge unchiselled stones. It is fitly called a “camp of the
giants,” for no weaklings ever handled such masses of rock as these.
The Romans who first penetrated into the country must have viewed
these military works with amazement, for competent writers affirm
that they date from about 2,000 years before the birth of Christ.
Along this hill-side also are traces of the old Roman road,
fragments which have been but little disturbed and which, perhaps,
are still paved with the very stones over which have marched the
legions from the Imperial City. To the east of La Turbie and just
below La Grande Corniche are two Roman milestones, side by side,
in excellent preservation. There are two, because they have been
placed in position by two different surveyors.
They stand by the ancient way and show clearly enough the
mileage—603. The next milestone (604) stood on the Aurelian Way
just outside La Turbie, at the point where the road is crossed by the
railway, but only the base of it remains. Between it and the previous
milestone is a Roman wayside fountain under a rounded arch. It is
still used as a water supply by the cottagers and the conduit that
leads to it can be traced for some distance up the hill.
The first Roman milestone to the west of La Turbie (No. 605) is
on the side of the Roman road as it turns down towards Laghet.[49]
This milestone is the finest in the district and is remarkably well
preserved. Those who comment on the closeness of these milliaires
must remember that the Roman mile was 142 yards shorter than the
English.

THE ROMAN FOUNTAIN NEAR LA TURBIE.


Above the Mont des Mules is Mont Justicier. It is a hill so bleak
and so desolate that it is little more than a wind-swept pile of
stones. It has been used for centuries as a quarry and much of the
material employed in the building of the Roman trophy at La Turbie
came from its barren sides. Its dreariness is rendered more dismal
by its history and by the memories that cloud its past. These
memories do not recall a busy throng of quarrymen who roared out
chanties as they worked at their cranes and whose chatter could be
heard above the thud of the pick and the clink of the chisel. They
recall the time when this dread mound was the Hill of Death and a
terror in the land.
On the summit of Mont Justicier is a tall, solitary column. It
appears, at a distance, to be a shaft of marble; but it is made up of
small pieces of white stone cemented together. It is a large column
nearly three feet in diameter and some fifteen feet in height. Near it
is the base of a second column of identical proportions to the first.
The distance between the two pillars is twelve feet and they stand
on a platform which faces southwards across the sea. These
columns were the posts of a gigantic gallows. Their summits were
connected by a cross beam and from that beam at least six ropes
could dangle. This is why the mound is named Mont Justicier, or, as
it would be called in England, Gallows Hill.
The Mount became a place of execution in the Middle Ages and
towards the end of the seventeenth century there would never be a
time when bodies could not be seen swinging from the beam of the
great gallows, since it was here that the brigands known as the
Barbets were hanged.
The term “Barbet” has a somewhat curious history. It was
originally a nickname given by the Catholics to the Protestant
Vaudois and later to the Protestants of the Cevennes and elsewhere.
The name had origin in the circumstance that the Vaudois called
their ministers “barbes” or “uncles,” in somewhat the same way that
the Catholics call their priests “fathers.”
The term was later applied to Protestant heretics generally and
notably to the Albigensians who held to the mountains of Piedmont
and Dauphiné. They refused baptism, the Mass, the adoration of the
Cross, the traffic in indulgences. “What was originally a logical revolt
of pure reason against dogmatic authority soon took unfortunately
varying forms, and then reached unpardonable extremes.”[50] These
men were outlawed, were hunted down and massacred and treated
as rogues and vagabonds of a pernicious type. For their ill name
they were themselves not a little to blame. They kept to the
mountains from which great efforts were made to dislodge them
about the end of the seventeenth century.
The term Barbets was subsequently given to the inhabitants of
the valleys of the Alps who lived by plunder and contraband and
finally to any brigands or robbers who had their lairs among the
mountains. “In the year 1792,” writes Rosio,[51] “irregular bands were
formed, under the name of Barbets, which were trained and
commanded by military officers devoted to Sardinia. These bands of
men harassed the French army, pillaged the camps and held up
convoys. When the House of Savoy lost its hold on the Continent the
Barbets divided into smaller companies and gave themselves up to
open brigandage. Their habitat was in the mountains of Levens, of
L’Escarene, Eze and La Turbie. Near Levens the unfortunates who fell
into their hands were hurled into the Vesubie from a rock 300
metres high which is still called Le Saut des Français.”
GALLOWS HILL.

MONT JUSTICIER: THE TWO PILLARS OF THE GALLOWS.

At the foot of Mont Justicier, near to the gallows and by the side
of the actual Roman road, is the little chapel of St. Roch. It is a very
ancient chapel and its years weigh heavily upon it, for it has nearly
come to the end of its days. It is built of rough stones beneath a
coating of plaster and has a cove roof covered with red tiles. The
base of the altar still stands, traces of frescoes can be seen on the
walls and on one side of the altar is an ambry or small, square wall-
press. It was in this sorrowful little chapel that criminals about to be
executed made confession and received the last offices of the
Church.
A sadder place than this in which to die could hardly be realised.
The land around is so harsh, the hill so heartless, the spot so lonely.
And yet many troubled souls have here bid farewell to life and have
started hence on their flight into the unknown. Before the eyes of
the dying men would stretch the everlasting sea. On the West—
where the day comes to an end—the world is shut out by the vast
bastion of the Tête de Chien; but on the East, as far as the eye can
reach, all is open and welcoming and full of pity. It is to the East that
the closing eyes would turn, to the East where the dawn would
break and where would glow, in kindly tints of rose and gold, the
promise of another day.
There is one lonely tree on this Hill of Death—a shivering pine;
while, as if to show the kindliness of little things, some daisies and a
bush of wild thyme have taken up their place at the foot of the
gallows.

[49] The ancient road lies above and to the west of the
modern road to the convent.
[50] “Old Provence,” by T. A. Cook, Vol. 2, p. 169.
[51] “Les Alpes Maritimes,” 1902.
THE CHAPEL OF ST. ROCH.
XXXVI
MENTONE

MENTONE is a popular and quite modern resort on the Riviera
much frequented by the English on account of its admirable
climate. Placed on the edge of the Italian frontier it is the
last Mediterranean town in France. It lies between the sea and a
semicircle of green hills upon a wide flat which is traversed by four
rough torrents. It is, on the whole, a pleasant looking place although
it is not so brilliant in colour as the posters in railway stations would
make it. It is seen at its best from a distance, for then its many dull
streets, its prosaic boulevards and its tramlines are hidden by bright
villas and luxuriant gardens, by ruddy roofs and comfortable trees.
Standing up in its midst is the old town which gives to it a faint
suggestion of some antiquity.
This old town, together with the port, divides Mentone into two
parts—the West and the East Bays. The inhabitants also are divided
into two sections—the Westbayers and the Eastbayers, and these
two can never agree as to which side of the town is the more
agreeable. They have fought over this question ever since houses
have appeared in the two disputed districts and they are fighting on
the matter still. The Westbayer wonders that the residents on the
East can find any delight in living, while the Eastbayer is surprised
that his acquaintance in the other bay is still unnumbered with the
dead. I had formed the opinion that the Western Bay was the more
pleasant and the more healthy but Augustus Hare crushes me to the
ground for he writes, “English doctors—seldom acquainted with the
place—are apt to recommend the Western Bay as more bracing; but
it is exposed to mistral and dust, and its shabby suburbs have none
of the beauty of the Eastern Bay.” So I stand corrected, but hold to
my opinion still.
Hare is a little hard on Mentone by reason of its being so
painfully modern. “Up to 1860,” he says, “it was a picturesque
fishing town, with a few scattered villas let to strangers in the
neighbouring olive groves, and all its surroundings were most
beautiful and attractive; now much of its two lovely bays is filled
with hideous and stuccoed villas in the worst taste. The curious old
walls are destroyed, and pretentious paved promenades have taken
the place of the beautiful walks under tamarisk groves by the sea-
shore. Artistically, Mentone is vulgarised and ruined, but its dry,
sunny climate is delicious, its flowers exquisite and its excursions—
for good walkers—are inexhaustible and full of interest.”[52]
There can be few who will not admit that the modern town of
Mentone is commonplace and rather characterless, but, at the same
time, it must be insisted that a large proportion of the Mentone villas
are—from every point of view—charming and free from the charge
of being vulgar.
Some indeed, with their glorious gardens, are serenely beautiful.
With one observation by Mr. Hare every visitor will agree—that in
which he speaks of the country with which Mentone is surrounded.
It is magnificent and so full of interest and variety that it can claim, I
think, to have no parallel in any part of the French Riviera.
MENTONE: THE OLD TOWN.

Mentone is a quiet place that appears to take its pleasure
demurely, if not sadly. It is marked too by a respectability which is
commendable, but at the same time almost awe-inspiring. Perhaps
its nearness to Monte Carlo makes this characteristic more
prominent. If Monte Carlo be a town of scarlet silks, short skirts and
high-heeled shoes Mentone is a town of alpaca and cotton gloves
and of skirts so long that they almost hide the elastic-side boots.
There is a class of English lady—elderly, dour and unattached—
that is comprised under the not unkindly term of “aunt.” They are
propriety personified. They are spoken of as “worthy.” Although not
personally attractive they are eminent by reason of their intimate
knowledge of the economics of life abroad. To them those human
mysteries, the keeper of the pension, the petty trader and the
laundress are as an open book. They fill the frivolous bachelor with
reverential alarm, but their acquaintance with the rate of exchange,
the price of butter and the cheap shop is supreme in its intricacy.
These “aunts” are to be found in larger numbers in Mentone than in
any other resort of the English in France.