PAPER
Herder et al.: Physical Unclonable Functions and Applications: A Tutorial

ABSTRACT | This paper describes the use of physical unclonable functions (PUFs) in low-cost authentication and key generation applications. First, it motivates the use of PUFs versus conventional secure nonvolatile memories and defines the two primary PUF types: "strong PUFs" and "weak PUFs." It describes strong PUF implementations and their use for low-cost authentication. After this description, the paper covers both attacks and protocols to address errors. Next, the paper covers weak PUF implementations and their use in key generation applications. It covers error-correction schemes such as pattern matching and index-based coding. Finally, this paper reviews several emerging concepts in PUF technologies such as public model PUFs and new PUF implementation technologies.

KEYWORDS | Arbiter; index-based coding; pattern matching; physical unclonable function (PUF); public model PUFs; ring oscillator; SRAM; unclonable

Manuscript received September 3, 2013; accepted April 8, 2014. Date of publication May 30, 2014; date of current version July 18, 2014.
C. Herder and S. Devadas are with the Computer Science and Artificial Intelligence Laboratory (CSAIL), Department of Electrical Engineering and Computer Science (EECS), Massachusetts Institute of Technology, Cambridge, MA 02139 USA (e-mail: [email protected]).
M.-D. Yu is with Verayo, Inc., San Jose, CA 95129 USA and also with the Computer Security and Industrial Cryptography (COSIC) research group, KU Leuven, Leuven-Heverlee B-3001, Belgium.
F. Koushanfar is with the Adaptive Computing and Embedded Systems Lab (ACES), Department of Electrical and Computer Engineering (ECE), Rice University, Houston, TX 77005 USA.
Digital Object Identifier: 10.1109/JPROC.2014.2320516
0018-9219 © 2014 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Proceedings of the IEEE | Vol. 102, No. 8, August 2014

I. INTRODUCTION

Mobile and embedded devices are becoming ubiquitous, interconnected platforms for everyday tasks. Many such tasks require the mobile device to securely authenticate and be authenticated by another party and/or securely handle private information. Indeed, smartphones have become a unified platform capable of conducting financial transactions, storing a user's secure information, acting as an authentication token for the user, and performing many other secure applications. The development of powerful mobile computing hardware has provided the software flexibility to enable convenient mobile data processing. However, comparable mobile hardware security has been slower to develop. Due to the inherent mobility of such devices, the threat model must include use cases where the device operates in an untrusted environment and the adversary has a degree of physical access to the system.

The current best practice for providing such a secure memory or authentication source in such a mobile system is to place a secret key in a nonvolatile electrically erasable programmable read-only memory (EEPROM) or battery-backed static random-access memory (SRAM) and use hardware cryptographic operations such as digital signatures or encryption. This approach is expensive both in terms of design area and power consumption. In addition, such nonvolatile memory is often vulnerable to invasive attack mechanisms. Protection against such attacks requires the use of active tamper detection/prevention circuitry which must be continually powered.

Physical unclonable functions (PUFs) are a promising innovative primitive that are used for authentication and secret key storage without the requirement of secure EEPROMs and other expensive hardware described above [7], [34]. This is possible, because instead of storing secrets in digital memory, PUFs derive a secret from the physical characteristics of the integrated circuit (IC). For example, this paper will discuss a PUF that uses the innate manufacturing variability of gate delay as a physical characteristic from which one can derive a secret. This approach is
advantageous over standard secure digital storage for several reasons.

• PUF hardware uses simple digital circuits that are easy to fabricate and consume less power and area than EEPROM/RAM solutions with antitamper circuitry. In addition, simple PUF applications do not require expensive cryptographic hardware such as the secure hash algorithm (SHA) or a public/private key encryption algorithm.
• Since the "secret" is derived from physical characteristics of the IC, the chip must be powered on for the secret to reside in digital memory. Any physical attack attempting to extract digital information from the chip, therefore, must do so while the chip is powered on.
• Invasive attacks are more difficult to execute without modifying the physical characteristics from which the secret is derived. Therefore, continually powered active antitamper mechanisms are not required to secure the PUF [4].
• Nonvolatile memory is more expensive to manufacture. EEPROMs require additional mask layers, and battery-backed RAMs require an external always-on power source.

A PUF is based on the idea that even though the mask and manufacturing process is the same among different ICs, each IC is actually slightly different due to normal manufacturing variability. PUFs leverage this variability to derive "secret" information that is unique to the chip (a silicon "biometric"). In addition, due to the manufacturing variability that defines the secret, one cannot manufacture two identical chips, even with full knowledge of the chip's design. PUF architectures exploit manufacturing variability in multiple ways. In addition to gate delay, architectures also use the power-on state of SRAM, threshold voltages, and many other physical characteristics to derive the secret.

This paper discusses the most popular PUF architectures. After defining a conceptual model for a PUF, this paper defines protocols to address two primary applications: strong authentication and cryptographic key generation. It then provides a case study on the most popular approach to each of these respective applications. It focuses on the experimental results from extensively tested realizations of actual PUF architectures that currently implement these protocols in real-world applications. Finally, this paper provides a perspective on future research relating to PUFs by briefly discussing current open problems as well as the latest proposed solutions.

A. Previous Work: Unique Objects

Although many of the architectures that integrate PUFs into existing IC technology are new, it should be noted that the concepts of unclonability and uniqueness have been used extensively in the past for other applications [13]. For example, "unique objects" are well defined as objects with a unique set of properties (a "fingerprint") based on the unique disorder of the object [34]. This fingerprint should be stable over time and robust to other environmental conditions and to readout. Further, it must be "unclonable" in the sense that the cost to engineer and manufacture another object with the same fingerprint must be prohibitively expensive or impractical using known manufacturing techniques (including by the original manufacturer).

One example of early usage of unique objects for security was proposed for the identification of nuclear weapons during the Cold War [9]. One would spray a thin coating of randomly distributed light-reflecting particles onto the surface of the nuclear weapon. Since these particles are randomly distributed, the resulting interference pattern after being illuminated from various angles is unique and difficult to reproduce.

Therefore, immediately after being applied, each interference pattern would be measured as a "signature" and stored in a secure database. A weapon could then be identified at any later time by re-illuminating the surface and comparing the interference pattern against the measured interference pattern. At the time, it was presumed to be infeasible to reproduce such an interference pattern even if an adversary knew the illumination angle(s) and the resulting pattern.

II. TYPES OF PUFs

The two primary applications of PUFs are for: 1) low-cost authentication; and 2) secure key generation. These two applications have resulted from the fact that PUFs designed during the past decade have mostly fallen into two broad categories. These categories are described as "strong PUFs" and "weak PUFs." Strong PUFs are typically used for authentication, while weak PUFs are used for key storage.

Each PUF can be modeled as a black-box challenge–response system. In other words, a PUF is passed an input challenge c, and returns a response r = f(c), where f(·) describes the input/output relations of the PUF. The black-box model is appropriate here, because the internal parameters of f(·) are hidden from the user since they represent the internal manufacturing variability that the PUF uses to generate a unique challenge–response set. Such parameters would include the variability of a circuit's internal gate delay as described in the Introduction. PUF security relies on the difficulty of measurement or estimation of these parameters as well as the difficulty of manufacturing two chips with the same set of parameters.

The fundamental difference between weak and strong PUFs is the domain of f(·), or informally, the number of unique challenges c that the PUF can process. A weak PUF can only support a small number of challenges (in some cases only a single challenge). A strong PUF can support a large enough number of challenges such that complete determination/measurement of all challenge–response pairs (CRPs) within a limited timeframe is not feasible.
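To make the black-box model concrete, the toy simulation below (an illustration only, not any published PUF circuit) hides a randomly drawn parameter vector behind the r = f(c) interface. The strong PUF accepts any of 2^n challenges and maps each to a response bit through the additive-delay feature transform commonly used to analyze arbiter-style PUFs; the weak PUF exposes only a single fixed reading. All class and parameter names here are hypothetical.

```python
import random

class ToyStrongPUF:
    """Black-box model r = f(c): the parameters of f are hidden."""

    def __init__(self, n_stages=128, seed=None):
        rng = random.Random(seed)
        # Hidden parameters: stand-ins for per-stage delay variability,
        # fixed once at "manufacture" and never exposed by the interface.
        self._weights = [rng.gauss(0.0, 1.0) for _ in range(n_stages)]
        self.n_stages = n_stages

    def response(self, challenge):
        """Map an n-bit challenge to a single response bit."""
        assert len(challenge) == self.n_stages
        # Feature transform: phi[i] = prod_{j >= i} (1 - 2*c[j]).
        phi, prod = [0.0] * self.n_stages, 1.0
        for i in range(self.n_stages - 1, -1, -1):
            prod *= 1 - 2 * challenge[i]
            phi[i] = prod
        delta = sum(w * p for w, p in zip(self._weights, phi))
        return 1 if delta > 0 else 0

class ToyWeakPUF:
    """A weak PUF: the challenge space collapses to a single reading."""

    def __init__(self, n_bits=128, seed=None):
        rng = random.Random(seed)
        # Stand-in for, e.g., an SRAM power-up state fixed by mismatch.
        self._state = [rng.randint(0, 1) for _ in range(n_bits)]

    def response(self):
        return list(self._state)

strong = ToyStrongPUF(n_stages=64, seed=7)
challenge = [random.Random(1).randint(0, 1) for _ in range(64)]
assert strong.response(challenge) == strong.response(challenge)  # f is fixed
```

Enumerating the weak PUF's entire CRP "table" takes one reading, whereas exhaustively reading out the strong PUF's 2^64 challenges is infeasible, which is exactly the distinction drawn above.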
the "digital fingerprint" measured by the PUF. If the parameters vary too much, the digital key (for the weak PUF) or response (for the strong PUF) will change, and the crypto operation will fail.

The first mechanism to mitigate such effects is to use differential design techniques to cancel out first-order environmental dependencies. Using the gate delay example, typical PUFs using this effect will not measure a single gate's delay, but rather the difference between two identically designed, but distinct gates on a die. In this way, any environmental factor should affect each gate equally.

Although differential design methodologies do improve reliability, noise is still a factor in PUF design. Even in optimal environmental conditions, noise will result in one or several of the output bits of the PUF being incorrect for any given challenge. Therefore, modern PUF designs employ multiple error-correction techniques to correct these bits, improving reliability. However, many of these error-correcting techniques have been shown to leak bits of the secret key, since they require the computation and public storage of syndrome bits. As such, an excess number of PUF bits are generated and then downmixed to produce a full entropy key.

In addition to standard error-correction techniques, PUFs also use soft-decision coding. This coding technique takes advantage of the reliability information of a given response bit to improve error-correction performance. This reliability information can be obtained from repeated PUF response readings in the case of SRAM PUFs, or the magnitude of frequency difference values in the case of ring-oscillator PUFs. Both of these error-correcting techniques will be discussed further in the context of the weak and strong PUF examples to be presented in Sections IV-E, VI-B, VI-C1, and VI-D1.

III. EXAMPLE STRONG PUF ARCHITECTURES

A. Optical PUF

One of the first implementations of a strong PUF was constructed by Pappu et al. in 2001 [30]. The paper terms the device a "physical one-way function," but the functionality is identical to that of a strong PUF. Pappu et al. describe a device with three primary components: 1) a laser directed along the Z-axis that can be moved in the XY-plane and whose polarization can be modified; 2) a stationary scattering medium that sits along the path of the laser beam; and 3) an imaging device that records the output "speckle" pattern of laser light exiting the scattering medium.

In this device, the input challenge is a laser XY location and polarization, and the response is the associated speckle pattern. The speckle pattern is strongly dependent on the input location/polarization because multiple scattering events occur inside the scattering medium. In the implementation by Pappu et al., the scattering medium consisted of a large number of randomly positioned 100-μm silica spheres suspended in a hardened epoxy. Each sphere acts as a small lens, refracting individual rays of light as they move through the scattering block. The overall size of the scattering block was on the order of 1-mm thickness. Therefore, even a relatively simple optical path must encounter ∼10 spheres as it travels through the scattering block.

All of these paths then are focused into an image on the detector. It is intuitively true that each of these paths will be very sensitive to input coordinates. Studies on speckle patterns produced by reflection/transmission by rough surfaces have found this to be true both experimentally and mathematically [2]. In addition, the speckle pattern is also sensitive to the internal structure of the scattering block. Therefore, it is difficult to fabricate two blocks with identical speckle patterns. Finally, due to the complex nature of the physical interactions, it is difficult to model the internal dynamics of the scattering medium. It is also difficult to use the output speckle to determine properties of the scattering block (such as the locations of the silica spheres).

These assumptions, while not strictly based on known computationally difficult problems, can be trusted to be difficult due to the fact that ray-tracing electromagnetic simulation is a well-studied field with established theoretical models and best practices. One can make the statement that if an adversary were able to break the above optical PUF by efficiently reproducing the physical device, modeling the entire scattering block, or discovering the sphere locations via observation of the speckle pattern, this would represent a major advancement in the field of ray-based models of electromagnetic simulation. It is for this reason that Pappu et al. described this optical PUF as a "physical one-way function."

B. Arbiter PUF

Fig. 1. Arbiter PUF circuit. The circuit creates two delay paths with the same layout length for each input X, and produces an output Y based on which path is faster.

Although the capabilities of the above optical PUF are significant, and they represented a significant step forward in the understanding and construction of PUFs, the practical applications are limited due to the macroscopic optical nature. This limitation stemmed from two properties.

First, the actual unclonable object (the scattering block) was separate from the measurement apparatus (the imaging device). As a result, the trust gained from authenticating an optical PUF is more limited. In a practical use case, the objective of authenticating the PUF is typically to authenticate the associated processor to which it is connected. However, since the optical PUF is separated from the digital measurement circuitry, an optical PUF as described by Pappu et al. designed to authenticate processor A can easily be detached from processor A and connected to processor B. Processor B could then authenticate itself as processor A. It is more desirable for the digital measurement apparatus to be integrated in with the
PUF such that the PUF is not separable from the device it is used to authenticate.

Second, since both key generation and authentication applications use integrated electronics, a more practical PUF would have the same properties as the optical PUF and simultaneously be integrated directly with a conventional complementary metal–oxide–semiconductor (CMOS) process. This integration would be such that the IC could not be separated from the PUF.

Silicon implementations of strong PUFs were described in the paper by Gassend et al. beginning in 2002 using manufacturing variability in gate delay as the source of unclonable randomness [7]. In one implementation, a race condition is established in a symmetric circuit. This is shown in Fig. 1. An input edge is split to two multiplexors (muxes). Depending on the input challenge bits (X[0]–X[127]), this path will vary. Although the layout is identical (propagation time should be the same for each edge no matter what challenge bits are chosen), manufacturing variability in the gate delay of each mux will result in one edge arriving at the latch first, and the latch acts as the "arbiter." The output will, therefore, depend on the challenge bits.

In Fig. 1, there are 128 challenge bits and one response bit. Of course, one typically operates multiple identical circuits in parallel to achieve 128 response bits. In this way, the arbiter PUF can be scaled to an almost arbitrary number of CRPs.

The security of the arbiter PUF, like the optical PUF before it, is based on assumptions regarding manufacturing capabilities and ultimately metrology of the individual gate delays. Because the design is symmetric, the design does not contain any "secret" information. An adversarial manufacturer that has the PUF design cannot manufacture a duplicate PUF, because the behavior of the PUF is defined by the inherent variability in the manufacturing process. Even the original manufacturer of the PUF could not produce two identical PUFs, since this would require a significant improvement in manufacturing control.

The second security assumption is that the individual gate delays are difficult to measure directly. It assumes that an invasive attacker would have difficulty in extracting the individual delays even with physical access. This assumption is based on the hypothesis that an invasive attacker would destroy the gate delay properties using his/her measurement techniques.

The last security assumption is that given a set of CRPs from an arbiter PUF, an adversary could not calculate the internal delays of the gates. For the architecture described above, this is actually not the case. Each delay is independent from all other delays, and the delays add linearly. As a result, one can use standard linear system analysis to intelligently gather data about the gate delays from the response bits. In fact, it can be shown that this system breaks after only a small number of challenges [17]. This problem can be resolved by several approaches proposed by Gassend et al. and described in Section IV-D.

Finally, in both optical and arbiter PUF architectures, it should be noted that environmental factors play a significant role. For the optical PUF, calibration of the input location is a concern. In the case of the arbiter PUF, one can easily recognize that environmental variations such as temperature, supply voltage, aging, and even random noise will affect the delay of each edge through the arbiter PUF. In addition, if the delays are close enough, the latch's setup time will be violated, potentially resulting in an unpredictable output. As a result, the response bits may not be stable. In this case, error-correcting techniques are used to increase the stability of the PUF while maintaining its security. Techniques for accomplishing this will be covered in Section IV-E. Although key generation has zero error tolerance, PUF authentication usually incorporates an allowable error threshold, thereby decreasing the stability requirement, and often obviating the need for error correction.

IV. LOW-COST AUTHENTICATION: STRONG PUFS

The strong PUF architectures described above are typically associated with the application of low-cost authentication.
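Because the stage delays add linearly, the "standard linear system analysis" mentioned above can be reproduced in a few lines. The sketch below is a simplified, noise-free simulation (not the attack implementation from [17]): a basic perceptron trained on a few hundred simulated CRPs learns a model that predicts responses to challenges it has never seen.

```python
import random

def parity_features(challenge):
    # phi[i] = prod_{j >= i} (1 - 2*c[j]): the feature map under which
    # the arbiter PUF's stage delays combine linearly.
    n = len(challenge)
    phi, prod = [0.0] * n, 1.0
    for i in range(n - 1, -1, -1):
        prod *= 1 - 2 * challenge[i]
        phi[i] = prod
    return phi

def puf_response(weights, challenge):
    delta = sum(w * p for w, p in zip(weights, parity_features(challenge)))
    return 1 if delta > 0 else 0

def learn_model(crps, n, epochs=50):
    # Perceptron training; a noise-free linear model is separable,
    # so a simple mistake-driven update suffices.
    w = [0.0] * n
    for _ in range(epochs):
        for challenge, r in crps:
            phi = parity_features(challenge)
            pred = 1 if sum(wi * p for wi, p in zip(w, phi)) > 0 else 0
            if pred != r:
                sign = 1.0 if r == 1 else -1.0
                w = [wi + sign * p for wi, p in zip(w, phi)]
    return w

rng = random.Random(42)
n = 32
secret_delays = [rng.gauss(0.0, 1.0) for _ in range(n)]  # hidden in real silicon

# The adversary only observes challenge-response pairs...
train = []
for _ in range(500):
    c = [rng.randint(0, 1) for _ in range(n)]
    train.append((c, puf_response(secret_delays, c)))
model = learn_model(train, n)

# ...yet the learned model predicts responses to unseen challenges.
tests = [[rng.randint(0, 1) for _ in range(n)] for _ in range(1000)]
hits = sum(puf_response(model, c) == puf_response(secret_delays, c) for c in tests)
print(f"model agrees with the PUF on {hits}/1000 unseen challenges")
```

Nonlinear compositions such as the XOR arbiter and feedforward structures discussed in Section IV-B were introduced precisely to defeat this kind of linear modeling.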
A. Authentication Protocol

As described previously, the strong PUF receives a challenge and generates a response. However, the requirements of a strong PUF state that an adversary provided with a polynomial number of CRPs should not be able to predict the response to a new challenge.

Although this is a desirable property, it also presents a usage problem. Since the PUF acts as a "black box," even the authentication server only has access to previously observed CRPs and, therefore, also cannot predict the response to a new challenge.

Therefore, the protocol for using PUFs is significantly different than most public/private key cryptographic systems. Consider a server authenticating a client.
1) PUF is manufactured.
2) Server obtains access to PUF and generates a table of CRPs. These pairs are stored in internal secret storage.
3) PUF is given to the client.
4) The client submits a request to the server to authenticate.
5) Server picks a known CRP and submits the challenge to the client.
6) The client runs the challenge on the PUF, returns the response to the server.
7) Server checks to see that the response is correct and marks the CRP as used.

Because the server cannot predict the PUF behavior, it must internally store CRPs to be used later. Each CRP must be used only once. Therefore, the server must either store enough CRPs so that it will not run out, or it must periodically "recharge" the table by establishing secure communication with an authenticated client and requesting responses to new challenges. To address the CRP table scalability problem, newer protocols based on storage of a compact model for PUF have emerged. A brief discussion of these protocols is included in Section VII-A.

Note that each client PUF will have unique CRPs, and therefore can be individually authenticated. In addition, the server must store tables of CRPs for each of the clients to be authenticated.

B. Arbiter PUF Topologies

Fig. 2. Four individual arbiter PUF circuits with nonlinearities introduced via XOR'ing their outputs.

The initial implementation of silicon PUFs had known security issues due to the fact that the delays were linearly added to produce the resultant response bit [8]. As a result, they could be learned with relative ease. This issue naturally led to the introduction of other "nonlinear" effects to make such modeling attacks more difficult. These efforts included XOR arbiter PUFs, lightweight secure PUFs, and feedforward arbiter PUFs [8], [16], [17], [22], [39].

In an XOR arbiter, multiple arbiter PUF outputs are XOR'ed to form a single response bit. This is shown in Fig. 2. These structures have shown greater resilience against machine learning attacks [24], [35]. Recent studies have demonstrated the vulnerability of the XOR arbiters to a combination of machine learning and side-channel attacks [19], [36]. Developing methods to suppress the side channels could help in alleviating this vulnerability.

C. Arbiter PUF Implementation

The arbiter PUF was implemented and studied by Devadas et al. as a part of a radio-frequency identification (RFID) IC fabricated in 0.18-μm technology [4]. In this implementation, a single arbiter PUF is implemented on-chip. This primitive has an input challenge of 64 bits and a single output bit. To construct a k-bit response, a linear feedback shift register (LFSR) is used to generate a pseudorandom sequence based on the input challenge. The PUF is then evaluated k times using k different bit vectors from this larger pseudorandom sequence. Finally, to prevent learning attacks on the PUF output bits, an additional scrambling routine is performed.

In this implementation, area and power consumption represented a major design constraint. Therefore, the above PUF implementation with only a single arbiter is used. As a result, the majority of the silicon area is consumed by standard RFID components (RFID front-end, one-time programmable memory, digital logic). The PUF and associated LFSR have been implemented in less than 0.02 mm² using 0.18-μm fabrication technology. In addition, the PUF only consumes dynamic power during evaluation, and the power consumption was shown to be small with respect to the power stored on the RFID chip.

In order to understand the PUF's utility as an identification and authentication source, intra-PUF and inter-PUF variation are defined as follows [7].
B. SRAM PUF

Both the arbiter PUF and the ring-oscillator PUF ultimately depend on variations in the propagation delay of gates. However, this is not the only physical property on which a PUF can be built. A popular weak PUF structure exploits the positive feedback loop in a SRAM or SRAM-like structure shown in Fig. 5. A SRAM cell has two stable states (used to store a 1 or a 0), and positive feedback to force the cell into one of these two states and, once it is there, prevent the cell from transitioning out of this state accidentally.

A write operation forces the SRAM cell to transition toward one of the two states. However, if the device powers up and no write operation has occurred, the SRAM cell exists in a metastable state where theoretically, the feedback pushing the cell toward the "1" state equals the feedback pushing the cell toward the "0" state, thereby keeping the cell in this metastable state indefinitely. In actual implementations, however, one feedback loop is always slightly stronger than the other due to small transistor threshold mismatches resulting from process variation. Natural thermal and shot noise trigger the positive feedback loop, and the cell relaxes into either the "1" or "0" state depending on this process variation.

Note that since the final state depends on the difference between two feedback loops, the measurement is differential. Therefore, common mode noise such as die temperature, power supply fluctuations, and common mode process variations should not strongly impact the transition.

Although this idea was patented in 2002, the first experimental implementation was performed in 2007, where a custom SRAM array based on 0.13-μm technology was

VI. CRYPTOGRAPHIC KEY GENERATION: WEAK PUFS

Due to their limited challenge–response space, weak PUF architectures are typically used for cryptographic key generation. In this case, a weak PUF will replace a secure nonvolatile memory that would have stored the cryptographic key. Once the key is derived from the weak PUF, it is stored in secure volatile memory during the device's operation. This key can then be used for authentication, encryption, and other cryptographic protocols. Due to the fact that one or very few keys can be generated by the PUF, the security of this key during operation is of paramount importance. If the secure key is revealed, any device can emulate the weak PUF.

A. Key Generation Protocol

Because weak PUFs like the ones discussed above have effectively fixed "challenge bits," the key generation protocol is fairly simple. In the case of the SRAM PUF, one simply powers on the SRAM and observes the memory state. Similarly for the ring-oscillator PUF, one simply pairwise compares each of the oscillators in order to measure the correct ordering of oscillation frequency.

In both of these cases, the complexity lies in the limitations of physical implementations that result in both statistical and systematic noise that must be corrected/mitigated. The actual approach used to address these issues differs for SRAM and ring-oscillator PUFs because the underlying physical implementation is different.

Ultimately, a stable set of unique bits is extracted from the weak PUF. These bits can then be used in any of a number of cryptographic protocols. Note that weak PUFs
can be used for authentication (similar to strong PUFs) even though they do not have a large number of CRPs. By supplementing the weak PUF with a hardware HMAC/AES implementation, one can achieve authentication capability at the cost of the additional power and area required by the cryptographic hardware primitives that embody the HMAC/AES protocol.

B. Arbiter PUF With Pattern Matching Error Correction

Although weak PUFs are typically used for secure key generation due to their limited challenge–response space size, protocols allowing for strong PUFs to be used in this capacity have also been developed. A key challenge in adapting strong PUFs to key generation is in correcting errors in PUF response bits. To this end, Paral and Devadas have proposed the use of a "pattern matching" technique to correct for errors in strong PUFs for use as a key generation mechanism [29].

A full description of this work is beyond the scope of this paper. In a nutshell, this approach reverses the traditional challenge–response format of a PUF. In this case, a secret offset I is chosen (the key is derived from I). A W-bit portion of the response at offset I is published publicly. To recover the key, a strong PUF iterates through a deterministic set of challenges (which may depend on previous secret data that have been measured). The PUF response to this challenge contains the W pattern bits in the output. The PUF uses an error-tolerant comparison circuit (up to T bits of error) to identify the offset I of the W-bit block in the PUF response bits. The secret key is then derived from I. This process can be repeated several times to obtain larger sets of secret bits.

The study identified that, with the correctly chosen parameters (PUF output size: 1024, W: 256, T: 80), the error occurrence could be decreased such that the PUF always succeeded in regenerating the key on the first try across environmental conditions.

C. SRAM PUF Implementation

As previously mentioned, the SRAM PUF leverages the threshold voltage mismatch of transistors in a SRAM cell due to manufacturing mismatch. This mismatch results in a repeatable tendency to settle into a "1" or "0" state when the SRAM cell is powered on with no writes occurring. Several studies have constructed SRAM PUFs and analyzed their properties.

One of the first implementations of such a chip identification system was tested by Su et al. with RFID applications [37]. In this study, a custom SRAM cell was constructed to minimize potential systematic mismatch between the two transistors. Such a skew would result in a given SRAM cell being more likely to favor a "1" than a "0" or vice versa, even with random process variation. To prevent such systematic skew, they used analog layout techniques to construct a "symmetric" and "common centroid" layout of the SRAM cell.

The study demonstrated that the SRAM PUF behaved as desired. After fabrication, an equal number of SRAM cells tended toward "1" and "0" to within experimental error for both layouts. The study identified that cell positioning within the SRAM, SRAM positioning on the wafer, and subsequent wafers were all decorrelated with the SRAM cell's tendency toward "1" or "0."

A challenge arose with the recognition that roughly 4% of the SRAM cells did not have enough mismatch to strongly favor "1" or "0." These cells probabilistically settled into "1" or "0" at random due to the contributions of thermal and shot noise. The number of these unstable bits increased at temperature/voltage corners and as the chip aged.

The study by Holcomb et al. tested the functionality of SRAM PUFs on off-the-shelf RAM and processor products such as the MSP430 and Intel's WISP RFID device [11]. In this application, the SRAM cell was a part of another on-chip SRAM that was actively used for program/data and not custom fabricated in any way to enhance stability or skew performance.

In this way, an end user can use an existing off-the-shelf component with no silicon modification and, using software alone, implement a weak PUF for cryptographic or identification purposes.

Although off-the-shelf SRAM cells are not optimized for usage as a PUF, Holcomb et al. did observe a bit stability of ∼5% across temperatures from 0 °C to 50 °C. This stability is roughly the same as the stability measured by Su et al., indicating that the custom fabrication did not help significantly in this regard.

However, the off-the-shelf SRAM cells were observed to have a significant bias toward the "1" state. This changes the entropy and unique identification analysis. In this study, 512 B of SRAM were used as a fingerprint. Due to the systematic skew of the SRAM cells, the min-entropy of this block was roughly 200 bits (plus/minus 10 bits depending on temperature). This 512-B block was then passed through a universal hash to extract 128 bits of output data.

Finally, because the SRAM is being used as a memory element for the processor, it is always powered, even if the PUF section of the SRAM is never written during normal operation. This continual powering of the SRAM in a "1" or "0" state results in negative bias temperature instability (NBTI). This is a type of "burn-in" for deep submicrometer metal–oxide–semiconductor field-effect transistor (MOSFET) technology, where the threshold voltage of a transistor increases over time due to the applied stress conditions of high temperature and a constant vertical electric field across the gate terminal while the transistor is "on."

Therefore, if a SRAM cell is powered on and set to the "0" state for a long time (10 days), then on subsequent
power-on sequences, the cell is more likely to skew toward area. Finally, the ability for a SRAM cell to maintain its
the ‘‘1’’ state. This predictable behavior stands in contrast state depends on the supply voltage. If, during the turn-on
to the behavior of temperature variations, which can either process, the supply voltage is held for some time at a low
skew the cell toward ‘‘0’’ or ‘‘1’’ as the temperature (100 mV) voltage, the thermal noise will induce a tran-
fluctuates. sition into the cell’s favored state, resulting in higher
stability. However, if the voltage turn-on is fast, then cells
1) SRAM PUF Error Correction: Because these papers become less stable. An attacker with access to the power
targeted the application of die identification, rather than channel could potentially control the stability of some of
key generation, this problem could be sidestepped by a the SRAM PUF bits through this mechanism [12].
statistical analysis of the probability that two independent Recently, it has also been identified by Helfmeier et al.
dies would have IDs that were close enough to be misiden- that the SRAM power-on state can be observed via near-
tified as a result of this noise. Unfortunately, the nature of infrared imaging of the SRAM during the turn-on tran-
cryptographic operations is such that not even a single bit sient. Once the SRAM ‘‘fingerprint’’ has been measured (the
can be incorrect. This will require a different approach to PUF response bits have been stolen), one can use focused ion
error correction. beam (FIB) techniques to modify a second IC to have a
Maes et al. describe a low-overhead approach to imple- matching fingerprint as the first by cutting traces and/or
menting a soft-decision helper algorithm [18]. They de- demolishing transistors in the SRAM cell [10].
scribe a method wherein one collects confidence data in Finally, one notes that SRAM data are not erased im-
each bit by taking between 10 and 100 measurements of mediately on power down. The data remain ‘‘stored’’ in the
the SRAM PUF with N output bits prior to provisioning. SRAM cell for a certain short time after the cell is powered
This yields an output estimate vector X 2 f0; 1gN , and a down due to an effect called ‘‘data remanence.’’ Oren et al.
vector of error probabilities Pe , where the ith element of have demonstrated that this effect can be used to inject
this vector corresponds to the probability that a measure- faults into the SRAM PUF. In doing so, one can nonin-
ment of Xi will be erroneous. Going forward, X serves as vasively learn the SRAM PUF output bits indirectly [28].
the ‘‘fuzzy secret,’’ and Pe is public information. It has been
proven by Maes et al. that revealing Pe does not leak any D. Ring-Oscillator PUF Implementation
min-entropy of the response X. This work goes on to de- Yu and Devadas designed a delay-based weak PUF
scribe an implementation where the above soft-decision based on the ring-oscillator architecture, and proposed the
helper algorithm is combined with Reed–Muller codes and first PUF key generation architecture that does not require
a universal hash function to distill the PUF output bits to a traditional error correction [40]–[42]. The proposed
full-entropy reproducible secret key. This implementation index-based syndrome coding method is a departure
takes 1536 SRAM PUF response bits (78% min-entropy from prior error-correction schemes based on code-offset
with an average bit-error probability of 15%), and distills syndrome [5], where the syndrome format enables soft-
these data down to a 128-bit full-entropy key with a failure decision functionality without the complexities associated
rate of 106 . This approach demonstrates the feasibility with an explicit traditional soft-decision error-correction
of using SRAM PUFs as cryptographic key sources in spite decoder, which in general has a higher complexity than an
of the errors inherent in SRAM PUF output bits. equivalent hard-decision error-correction decoder.
In this architecture, several oscillator PUF banks are
2) Attacks on SRAM PUFs: Because the SRAM PUF instantiated, with each oscillator bank comprising 2k ring
provides a secure key (as opposed to providing challenge– oscillators. A k-bit challenge is applied to each bank, to
response functionality like the strong PUF), it relies on determine which oscillators correspond to the top delays,
other conventional security primitives to keep that key and which oscillators correspond to the bottom delays. The
protected while the chip is powered. As a result, any side top and bottom rows are summed to produce x and y,
channel or other vulnerabilities associated with the cryp- respectively. These values are used to produce a single bit
tographic hardware pose a threat to the secret key out- PUF output and associated ‘‘soft-decision’’ information
putted by the SRAM PUF. In addition, since this key is corresponding to a PUF challenge. Specifically, the output
kept secret, the modeling attacks used against strong PUFs bit is the sign of x–y. The ‘‘confidence’’ (discussed more in
cannot be used, since no input/output relations of the PUF Section VI-D1) is derived from the magnitude of x–y.
should ever be revealed. Fig. 6 shows a simplified diagram for illustrative pur-
However, there are other ways identified in the liter- poses. More complex ‘‘recombination’’ functions using
ature to attack a SRAM PUF more directly. Many of these xors or amplitude modulation based on additional chal-
depend on the level of access that one has to the SRAM. If lenge bits were used in actual implementation.
one can insert a ‘‘write’’ command, then one could leverage Each of the oscillators is configured with ‘‘challenge
the NBTI to deliberately force individual bits toward ‘‘1.’’ If bits.’’ For the purpose of cryptographic key generation,
one could modify the temperature, one could potentially these bits are fixed (see the ‘‘fixed challenge’’ in Fig. 6) in
cause the PUF to fail by running the PUF outside its design order to reproduce the same key each time.
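The per-bank bit and confidence computation described above (output bit from the sign of x − y, soft-decision confidence from its magnitude) can be sketched as follows. This is a minimal illustrative model, not the hardware design itself: the oscillator counts below are hypothetical inputs standing in for the counter values that the actual circuit accumulates for the challenge-selected top and bottom rows.

```python
def oscillator_bank_bit(top_counts, bottom_counts):
    """One PUF output bit plus its soft-decision confidence from one
    ring-oscillator bank: bit = sign(x - y), confidence = |x - y|."""
    x = sum(top_counts)     # summed counts of the top-row oscillators
    y = sum(bottom_counts)  # summed counts of the bottom-row oscillators
    bit = 1 if x >= y else 0
    confidence = abs(x - y)
    return bit, confidence

# Hypothetical counts for a bank whose top row oscillates slightly faster:
bit, conf = oscillator_bank_bit([1003, 998, 1001], [1000, 997, 999])
# bit == 1, conf == 6
```

A bit with a large confidence value is far from the decision threshold and thus unlikely to flip across environmental conditions; publishing the indices of such high-confidence bits is what gives the index-based syndrome format its soft-decision character without an explicit soft-decision decoder.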
Markettos and Moore demonstrated a frequency injection attack on ring-oscillator-based true random number generators [26]. Although an attack on the ring-oscillator PUF would have to take into account its differential nature, the same principle can be used. By locking the frequency, an attacker can drive the frequency of a given PUF to a desired value without invasive measures.

In addition, it was shown in 2011 that the electromagnetic radiation from the ring-oscillator PUF could also be used to steal the output bits [27]. This attack can be defeated by running several oscillators in parallel, which has been done in many studies on the ring-oscillator PUF, some of which predate the identification of the attack [40]–[42].

VII. EMERGING PUF CONCEPTS

Although existing PUF technology has been successful in addressing applications in low-cost authentication and secure key generation, PUF technology still has significant untapped potential. New PUF architectures and applications are continually being developed. A full review of each of these paths is outside the scope of this paper, but a few of the popular emerging trends will be covered. For more comprehensive coverage, we refer interested readers to a recent article on the topic [32].

A. Model-based PUFs

When considering the application of low-cost authentication, one of the primary drawbacks of strong PUF architectures is the establishment of the secret challenge–response table. Not only does this require the server to securely communicate with the PUF prior to any authentication rounds in a ‘‘secure bootstrapping’’ phase, but also once a CRP is used, it must be discarded and never used again. Therefore, the server must collect a large number of CRPs at manufacture and store them secretly. For large applications with thousands of PUF clients, this corresponds to a large amount of required secret storage.

To mitigate these challenges, researchers have recognized that if a PUF could have an associated ‘‘secret model’’ that emulates the PUF challenge–response behavior, then the secure storage requirements could be alleviated [3], [21], [24]. The secret model in the case of an arbiter PUF would be the delays of the individual stages. Newer delay-based PUF constructions have even used this compact model to link software-based attestation to intrinsic device characteristics [14]. This linking enables secure timed (and even remote) attestation.

Such a ‘‘secret model’’ PUF still requires both the ‘‘secure bootstrapping’’ phase as well as the secure storage, as the PUF and authenticating server must ‘‘agree’’ on a secret PUF model that describes the PUF behavior. This model must be kept secret, as it exactly describes the PUF behavior and can be used to spoof an authentication sequence. However, a server may now choose any random challenge and independently compute the correct PUF response. Further, an encrypted model can be stored on the PUF device, and a reader that knows the encryption key can authenticate the device in an offline fashion.

B. Timed Authentication and Public Models

Although the secret model PUF architecture described above mitigates the secure storage requirement for PUF usage in authentication applications, it still requires secure bootstrapping and secure storage of the secret model.

Both of these requirements are alleviated by a new type of PUF described as timed authentication PUFs, public model PUFs (PPUFs), or SIMulation Possible but Laborious (SIMPL) systems [1], [20], [24], [33]. This paper will refer to this concept as a PPUF. An FPGA implementation of the PPUF was proposed alongside the concept of an FPGA erasable PUF [20], [21]. A full characterization and compaction of the physical delays of the FPGA components is performed.

A PPUF has a model that emulates the challenge–response behavior of the PPUF hardware. This model is public, known to everyone. The key difference between the PPUF model and the PPUF hardware is that the PPUF hardware computes the response in a measurably faster time. Therefore, the authentication scheme works as follows (where a server is authenticating a client):
1) the server obtains the desired PPUF model from trusted third-party storage;
2) the server generates a challenge and computes the response using the PPUF model;
3) the server sends the challenge to the client and begins a timer;
4) the client uses its PPUF hardware to compute a response and sends it back to the server;
5) the server measures the client response time T;
6) the server accepts if T < T0 and the client’s response is correct.

In the above scheme, first note that the PPUF model is stored publicly. Although it may be publicly read, the PPUF model storage must be resistant against tampering or rewriting, as the server must be able to trust that a given PPUF model is associated with a certain PPUF hardware owner. This can be done using the traditional public key infrastructure (PKI) or other similar roots of trust.

In addition, the server must be able to establish some value T0 as described in the above scheme. This time is 1) long enough to allow the PPUF hardware to compute the response and allow for roundtrip network latency; and simultaneously, 2) short enough that no model could emulate the PPUF hardware and correctly produce a response in that time. This establishment of T0 is the fundamental challenge of designing a PPUF capable of enacting the above authentication scheme.

It is clear that such a PPUF system would have widespread application. The key recognition demonstrating the power of a PPUF is that the PPUF hardware contains no secrets. Counterintuitively, the device is still capable of securely authenticating itself to any server. The server also contains no secret information. Simply put, there is no secret information anywhere in the protocol. The authentication capability derives solely from the computational difference between the hardware and the model, and the unclonability of the hardware.

With this in mind, applications in embedded security are immediately obvious. A modern embedded electronic device being authenticated by a server currently uses secure nonvolatile memory (NVM) to store secret bits. Even strong and weak PUFs can be considered ‘‘storage’’ devices using manufacturing variation in variables such as dopant concentration to store secret bits. In all modern electronics, this secure storage acts as the ‘‘root of trust’’ on which all authentication and cryptographic mechanisms are based. If the secret bits can be stolen (in the case of secure NVM) or approximated (in the case of PUF modeling), then the security is broken.

In the case of a PPUF, there are no bits to steal. The security is instead based on the difficulty of reproducing an exact copy of the PPUF hardware. This represents a fundamental shift in security paradigms. Using this mechanism, a secure embedded system can be deployed in a highly untrusted environment with a strong threat model (an adversary already has access to both the PPUF design and the PPUF model) and still act as a trusted authentication source.

C. New PUF Architectures

The current primary open problem in PPUF design is the identification of a system with a provable T0 parameter as described in Section VII-B. Intuitively, one would immediately hope for a provable asymptotic separation between the PPUF computational hardware and the computational hardware used to execute the model. Such a separation has been observed between computers leveraging quantum effects and classical computers, but such quantum computational devices are still not close to the scale required for such cryptographic applications.

Therefore, classical systems must be considered for potential practical PPUF implementations. It is recognized that such an asymptotic speedup between the PPUF hardware and the computer running the model is not possible. Classical dynamical systems at a fundamental level are governed by local differential equations (e.g., Maxwell’s equations, Lagrangian mechanics, and Newtonian gravitation). As a result, one can see that a discretized universe can be simulated with only constant factor slowdown. Qualitatively speaking, one can imagine a computer with a processor dedicated to simulating each point in space that communicates with its neighbors. Each time step can then be simulated in constant time by this set of processors running in parallel. Therefore, with enough parallelism, any classical system can be simulated with only constant factor slowdown.

Therefore, if one accepts that the PPUF model will only be slower than the PPUF hardware by a constant factor, the next step is to design a system with bounds on this constant factor. If a model is provably 10^6 times slower than the PPUF hardware, then this constant factor is large enough to derive an acceptable T0 for PPUF operation.

Many PPUF architectures have been proposed [1], [20], [21], [33]. However, to date, the authors are unaware of a proposed PPUF architecture where such constant factor bounds are provable, or even one with bounds that can be strongly argued.

To establish such bounds, one first recognizes that any computational model will use CMOS technology, since CMOS is simultaneously the fastest and least expensive computational platform currently available. In recognizing this fact, one can then identify the minimum timescale of active CMOS devices as a comparison benchmark to the timescale of the differential equations describing the PPUF hardware system.

One potential avenue of approach that has been identified is the use of optoelectronics. Optoelectronic systems simultaneously have fast enough internal dynamics to allow for a significant constant factor slowdown, and are also integrable into existing CMOS processes.

In conclusion, this paper has introduced two primary applications of PUF technology: low-cost authentication and secret key generation. It has covered several of the most popular approaches to each of these applications, including arbiter PUFs, SRAM PUFs, and ring-oscillator PUFs. It has discussed potential mathematical and physical attacks on each PUF technology as well as popular error-correcting techniques for each. Finally, this paper has discussed new PUF technologies, such as PPUFs, that demonstrate that PUF technology still has tremendous untapped potential. PUFs provide a new, secure technology for authentication and secure key storage with many advantages over existing approaches. New PUF error-correction approaches and technologies such as PPUFs represent an exciting new frontier for both PUF research and cryptography as a whole.
REFERENCES

[1] N. Beckmann and M. Potkonjak, ‘‘Hardware-based public-key cryptography with public physically unclonable functions,’’ Information Hiding, vol. 5806. Berlin, Germany: Springer-Verlag, 2009, pp. 206–220, ser. Lecture Notes in Computer Science.
[2] C. Dainty, Laser Speckle and Related Phenomena. New York, NY, USA: Springer-Verlag, 1984.
[3] S. Devadas, ‘‘Non-networked RFID PUF authentication,’’ U.S. Patent 8 683 210, U.S. Patent Appl. 12/623 045, 2008.
[4] S. Devadas, E. Suh, S. Paral, R. Sowell, T. Ziola, and V. Khandelwal, ‘‘Design and implementation of PUF-based ‘unclonable’ RFID ICs for anti-counterfeiting and security applications,’’ in Proc. IEEE Int. Conf. RFID, May 2008, pp. 58–64.
[5] Y. Dodis, L. Reyzin, and A. Smith, ‘‘Fuzzy extractors: How to generate strong keys from biometrics and other noisy data,’’ Advances in Cryptology–Eurocrypt 2004, vol. 3027. Berlin, Germany: Springer-Verlag, 2004, pp. 523–540, ser. Lecture Notes in Computer Science.
[6] B. Gassend, ‘‘Physical random functions,’’ M.S. thesis, Dept. Electr. Eng. Comput. Sci., Massachusetts Inst. Technol., Cambridge, MA, USA, Jan. 2003.
[7] B. Gassend, D. Clarke, M. van Dijk, and S. Devadas, ‘‘Silicon physical random functions,’’ in Proc. 9th ACM Conf. Comput. Commun. Security (CCS), 2002, pp. 148–160.
[8] B. Gassend, D. Lim, D. Clarke, M. van Dijk, and S. Devadas, ‘‘Identification and authentication of integrated circuits,’’ Concurrency Comput., Practice Exp., vol. 16, no. 11, pp. 1077–1098, 2004.
[9] S. Graybeal and P. McFate, ‘‘Getting out of the STARTing block,’’ Sci. Amer., vol. 261, no. 6, 1989.
[10] C. Helfmeier, C. Boit, D. Nedospasov, and J.-P. Seifert, ‘‘Cloning physically unclonable functions,’’ in Proc. IEEE Int. Symp. Hardware-Oriented Security Trust, 2013, DOI: 10.1109/HST.2013.6581556.
[11] D. Holcomb, W. Burleson, and K. Fu, ‘‘Initial SRAM state as a fingerprint and source of true random numbers for RFID tags,’’ presented at Conf. RFID Security, Malaga, Spain, Jul. 11–13, 2007.
[12] D. Holcomb, W. Burleson, and K. Fu, ‘‘Power-up SRAM state as an identifying fingerprint and source of true random numbers,’’ IEEE Trans. Comput., vol. 58, no. 9, pp. 1198–1210, Sep. 2009.
[13] D. Kirovski, ‘‘Anti-counterfeiting: Mixing the physical and the digital world,’’ Towards Hardware-Intrinsic Security, A.-R. Sadeghi and D. Naccache, Eds. New York, NY, USA: Springer-Verlag, 2010, pp. 223–233.
[14] J. Kong, F. Koushanfar, P. K. Pendyala, A.-R. Sadeghi, and C. Wachsmann, ‘‘PUFatt: Embedded platform attestation based on novel processor-based PUFs,’’ presented at the ACM/IEEE Design Autom. Conf., San Francisco, CA, USA, Jun. 1–4, 2014.
[15] P. Layman, S. Chaudhry, J. Norman, and J. Thomson, ‘‘Electronic fingerprinting of semiconductor integrated circuits,’’ U.S. Patent 6 738 294, Sep. 2002.
[16] J.-W. Lee, D. Lim, B. Gassend, G. E. Suh, M. van Dijk, and S. Devadas, ‘‘A technique to build a secret key in integrated circuits with identification and authentication applications,’’ in Proc. IEEE VLSI Circuits Symp., 2004, pp. 176–179.
[17] D. Lim, ‘‘Extracting secret keys from integrated circuits,’’ M.S. thesis, Dept. Electr. Eng. Comput. Sci., Massachusetts Inst. Technol., Cambridge, MA, USA, May 2004.
[18] R. Maes, P. Tuyls, and I. Verbauwhede, ‘‘Low-overhead implementation of a soft decision helper data algorithm for SRAM PUFs,’’ Cryptographic Hardware and Embedded Systems–CHES 2009, vol. 5747. Berlin, Germany: Springer-Verlag, 2009, pp. 332–347, ser. Lecture Notes in Computer Science.
[19] A. Mahmoud, U. Rührmair, M. Majzoobi, and F. Koushanfar, ‘‘Combined modeling and side channel attacks on strong PUFs,’’ Rep. 2013/632, 2013.
[20] M. Majzoobi, A. Elnably, and F. Koushanfar, ‘‘FPGA time-bounded unclonable authentication,’’ Information Hiding, vol. 6387. Berlin, Germany: Springer-Verlag, 2010, pp. 1–16, ser. Lecture Notes in Computer Science.
[21] M. Majzoobi and F. Koushanfar, ‘‘Time-bounded authentication of FPGAs,’’ IEEE Trans. Inf. Forensics Security, vol. 6, no. 3, pt. 2, pp. 1123–1135, Sep. 2011.
[22] M. Majzoobi, F. Koushanfar, and M. Potkonjak, ‘‘Lightweight secure PUFs,’’ in Proc. ACM/IEEE Int. Conf. Comput.-Aided Design, 2008, pp. 670–673.
[23] M. Majzoobi, F. Koushanfar, and M. Potkonjak, ‘‘Testing techniques for hardware security,’’ in Proc. IEEE Int. Test Conf., 2008, DOI: 10.1109/TEST.2008.4700636.
[24] M. Majzoobi, F. Koushanfar, and M. Potkonjak, ‘‘Techniques for design and implementation of secure reconfigurable PUFs,’’ ACM Trans. Reconfigurable Technol. Syst., vol. 2, no. 1, 2009, DOI: 10.1145/1502781.1502786.
[25] M. Majzoobi, M. Rostami, F. Koushanfar, D. S. Wallach, and S. Devadas, ‘‘Slender PUF protocol: A lightweight, robust, secure authentication by substring matching,’’ in Proc. IEEE Symp. Security Privacy Workshops, 2012, pp. 33–44.
[26] A. T. Markettos and S. W. Moore, ‘‘The frequency injection attack on ring-oscillator-based true random number generators,’’ in Proc. Int. Workshop Cryptogr. Hardware Embedded Syst., 2009, pp. 317–331.
[27] D. Merli, D. Schuster, F. Stumpf, and G. Sigl, ‘‘Semi-invasive EM attack on FPGA RO PUFs and countermeasures,’’ in Proc. Workshop Embedded Syst. Security, 2011, pp. 2:1–2:9.
[28] Y. Oren, A.-R. Sadeghi, and C. Wachsmann, ‘‘On the effectiveness of the remanence decay side-channel to clone memory-based PUFs,’’ in Proc. Int. Workshop Cryptogr. Hardware Embedded Syst., 2013, pp. 107–125.
[29] Z. Paral and S. Devadas, ‘‘Reliable and efficient PUF-based key generation using pattern matching,’’ in Proc. IEEE Int. Symp. Hardware-Oriented Security Trust, 2011, pp. 128–133.
[30] R. S. Pappu, B. Recht, J. Taylor, and N. Gershenfeld, ‘‘Physical one-way functions,’’ Science, vol. 297, pp. 2026–2030, 2002.
[31] M. Rostami, M. Majzoobi, F. Koushanfar, D. Wallach, and S. Devadas, ‘‘Robust and reverse-engineering resilient PUF authentication and key-exchange by substring matching,’’ IEEE Trans. Emerging Topics Comput., 2014, DOI: 10.1109/TETC.2014.2300635.
[32] M. Rostami, J. B. Wendt, M. Potkonjak, and F. Koushanfar, ‘‘Quo vadis, PUF?: Trends and challenges of emerging physical-disorder based security,’’ in Proc. Conf. Design Autom. Test Eur., 2014, article 352.
[33] U. Rührmair, ‘‘SIMPL systems: On a public key variant of physical unclonable functions,’’ International Association for Cryptologic Research, Tech. Rep., 2009.
[34] U. Rührmair, S. Devadas, and F. Koushanfar, ‘‘Security based on physical unclonability and disorder,’’ Introduction to Hardware Security and Trust, M. Tehranipoor and C. Wang, Eds. New York, NY, USA: Springer-Verlag, 2012, pp. 65–102.
[35] U. Rührmair, F. Sehnke, J. Sölter, G. Dror, S. Devadas, and J. Schmidhuber, ‘‘Modeling attacks on physical unclonable functions,’’ in Proc. 17th ACM Conf. Comput. Commun. Security, 2010, pp. 237–249.
[36] U. Rührmair, X. Xu, J. Sölter, A. Mahmoud, F. Koushanfar, and W. Burleson, ‘‘Power and timing side channels for PUFs and their efficient exploitation,’’ Rep. 2013/851, 2013.
[37] Y. Su, J. Holleman, and B. Otis, ‘‘A 1.6 pJ/bit 96% stable chip ID generating circuit using process variations,’’ in Proc. IEEE Int. Solid-State Circuits Conf., 2007, pp. 200–201.
[38] G. E. Suh, ‘‘AEGIS: A single-chip secure processor,’’ Ph.D. dissertation, Dept. Electr. Eng. Comput. Sci., Massachusetts Inst. Technol., Cambridge, MA, USA, Aug. 2005.
[39] G. E. Suh and S. Devadas, ‘‘Physical unclonable functions for device authentication and secret key generation,’’ in Proc. ACM/IEEE Design Autom. Conf., 2007, pp. 9–14.
[40] M.-D. M. Yu and S. Devadas, ‘‘Secure and robust error correction for physical unclonable functions,’’ IEEE Design Test Comput., vol. 27, no. 1, pp. 48–65, Jan./Feb. 2010.
[41] M.-D. M. Yu, D. M’Raihi, R. Sowell, and S. Devadas, ‘‘Lightweight and secure PUF key storage using limits of machine learning,’’ Cryptographic Hardware and Embedded Systems–CHES 2011, vol. 6917. Berlin, Germany: Springer-Verlag, 2011, pp. 358–373, ser. Lecture Notes in Computer Science.
[42] M.-D. M. Yu, R. Sowell, A. Singh, D. M’Raihi, and S. Devadas, ‘‘Performance metrics and empirical results of a PUF cryptographic key generation ASIC,’’ in Proc. IEEE Int. Symp. Hardware-Oriented Security Trust, 2012, pp. 108–115.
Farinaz Koushanfar received the B.S. degree in electrical engineering from Sharif University of Technology, Tehran, Iran, in 1998, the M.S. degree from the University of California Los Angeles (UCLA), Los Angeles, CA, USA, and the M.A. degree in statistics and the Ph.D. degree in electrical engineering from the University of California Berkeley, Berkeley, CA, USA, in 2005. She is an Associate Professor of Electrical and Computer Engineering (ECE) at Rice University, Houston, TX, USA. She is the Director of the Adaptive Computing and Embedded Systems (ACES) Laboratory.

Srinivas Devadas (Fellow, IEEE) received the M.S. and Ph.D. degrees from the University of California Berkeley, Berkeley, CA, USA, in 1986 and 1988, respectively. He is the Webster Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), Cambridge, MA, USA. He joined MIT in 1988 and served as the Associate Head of the Department of Electrical Engineering and Computer Science, with responsibility for Computer Science, from 2005 to 2011. His research interests include Computer-Aided Design, computer architecture, and computer security.