Lecture Notes in Computer Science 7344
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Madhu Sudan
Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
Stefan Katzenbeisser, Edgar Weippl, L. Jean Camp, Melanie Volkamer, Mike Reiter, Xinwen Zhang (Eds.)
Trust and Trustworthy Computing
5th International Conference, TRUST 2012
Vienna, Austria, June 13-15, 2012
Proceedings
Volume Editors
Stefan Katzenbeisser
Melanie Volkamer
Technical University Darmstadt, Germany
E-mail: [email protected]
and [email protected]
Edgar Weippl
Vienna University of Technology and SBA Research, Austria
E-mail: [email protected]
L. Jean Camp
Indiana University, Bloomington, IN, USA
E-mail: [email protected]
Mike Reiter
University of North Carolina at Chapel Hill, USA
E-mail: [email protected]
Xinwen Zhang
Huawei America R&D, Santa Clara, CA, USA
E-mail: [email protected]
Steering Committee
Alessandro Acquisti Carnegie Mellon University, USA
Boris Balacheff Hewlett Packard, UK
Paul England Microsoft, USA
Andrew Martin University of Oxford, UK
Chris Mitchell Royal Holloway, University of London, UK
Sean Smith Dartmouth College, USA
Ahmad-Reza Sadeghi TU Darmstadt / Fraunhofer SIT, Germany
Claire Vishik Intel, UK
General Chairs
Edgar Weippl Vienna University of Technology and
SBA Research, Austria
Stefan Katzenbeisser TU Darmstadt, Germany
Publicity Chair
Marcel Winandy Ruhr University Bochum, Germany
Table of Contents
Technical Strand
Authenticated Encryption Primitives for Size-Constrained Trusted Computing 1
Jan-Erik Ekberg, Alexandra Afanasyeva, and N. Asokan
Socio-economic Strand
On the Practicality of Motion Based Keystroke Inference Attack 273
Liang Cai and Hao Chen
1 Introduction
Trusted execution environments (TEEs) based on general-purpose secure hard-
ware incorporated into end user devices are widely deployed. There are two
dominant types of TEE designs. The first is a self-contained, stand-alone secure
hardware element like the Trusted Platform Module (TPM) [15]. The second
is a design like M-Shield [14,11] and ARM TrustZone [1], which augments the
processor with a secure processing mode (Figure 1).
In these latter designs, during normal operation the processor runs the basic
operating software (like the device OS) but can enter the secure mode on-demand
to securely execute small pieces of sensitive code. Certain memory areas are only
accessible in secure mode. These can be used for persistent storage of long-term
secrets. Secure mode is typically combined with isolated RAM and ROM, re-
siding within the System-On-A-Chip (SoC), to protect code executing in the
TEE against memory-bus eavesdropping. The RAM available within this min-
imal TEE is usually quite small, as low as tens of kilobytes in contemporary
devices [9]. Often this constraint implies that only basic cryptographic primitives,
or only specific parts of some security-critical architecture (such as a
hypervisor), can be implemented within the TEE.
In most, if not all, of these hardware architectures ([1], [11], [8]) the primary
RAM on the device outside the TEE is addressable by secure mode code exe-
cuting within the TEE (see Figure 1). This unprotected, and hence potentially
"untrusted", RAM is significantly larger than the isolated (trusted) RAM. It is
used
– to transfer input parameters for secure execution within the TEE, as well as
to receive any computation results from the TEE.
– to implement secure virtual memory for secure mode programs running
within the TEE.
– to store and fetch state information when multiple different secure mode
programs execute in an interleaved fashion (when one program needs to
stop its execution in the TEE before it is fully completed, the full state
information needed to continue its execution later is too big to be retained
within the TEE).
In the latter two cases, the TEE must seal any such data before storing it in
the untrusted memory. Sealing means encrypting and integrity-protecting the
data using a key available only within the TEE so that (a) the sealed data is
cryptographically bound to additional information specifying who can use the
unsealed data and how, and (b) any modifications to the sealed data can be
detected when it is used within the TEE.
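To make the sealing interface concrete, the following C sketch shows how a seal operation could be layered on an authenticated-encryption primitive. It is a minimal illustration under our own naming: aead_encrypt stands for whatever AEAD the TEE provides (e.g., AES-EAX), and k_seal is assumed to be a device key readable only in secure mode; none of these names come from a real API.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical TEE-internal AEAD call (e.g., AES-EAX): encrypts pt and
 * authenticates it together with the associated data ad, producing
 * ciphertext ct and an integrity tag. Assumed, not a real API. */
int aead_encrypt(const uint8_t key[16], const uint8_t nonce[16],
                 const uint8_t *ad, size_t ad_len,
                 const uint8_t *pt, size_t pt_len,
                 uint8_t *ct, uint8_t tag[16]);

/* Seal data for storage in untrusted RAM: the associated data carries
 * the usage policy ("who may use the unsealed data and how"), so the
 * blob is cryptographically bound to it, and any modification of the
 * ciphertext, policy, or tag is detected at unseal time. */
int seal(const uint8_t k_seal[16],      /* available only inside the TEE */
         const uint8_t nonce[16],
         const uint8_t *policy, size_t policy_len,
         const uint8_t *data, size_t data_len,
         uint8_t *blob, uint8_t tag[16])
{
    return aead_encrypt(k_seal, nonce, policy, policy_len,
                        data, data_len, blob, tag);
}
```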
The basic requirements of a sealing primitive are confidentiality and integrity
of the sealed data. These can be met by using one of several well-known
authenticated encryption modes. Many authenticated encryption modes have been
proved secure using standard reduction techniques. However, the general as-
sumption and proof model for the execution of such a scheme is that its entire
execution sequence is carried out securely and in isolation: i.e., inputs are re-
ceived into isolated memory, the entire computation is securely run to completion
as an atomic operation producing an output in isolated memory, and only then
are outputs returned to insecure channels or untrusted RAM. This setting is
unreasonable in memory-constrained TEEs. They need a “pipelined” variant of
authenticated encryption modes where encryption and decryption can be done in
a piecemeal fashion, where input is read from and/or output written to untrusted
memory, for instance because the data to be processed is larger
than available isolated RAM. Such situations include the case where the TEE
program needs to access only a part of the seal or when it needs to produce a
large protected message say for transfer to another device or server.
We consider two models of pipelined sealing and unsealing. In system model 1
(Figure 2(b)), the plaintext data is made available in TEE isolated memory, i.e.
the decryption primitive decrypts into isolated memory from untrusted mem-
ory, and vice versa for encryption. This model is applicable, e.g., to secret keys
generated in a TPM but subsequently stored in sealed format within the OS.
In system model 2 (Figures 3(a) and 3(b)), the plaintext comes from or is returned
to untrusted memory. Use cases for this approach include streaming
encrypted content and encrypting data for network communication.
By necessity, we must assume that any long-term secrets (e.g., sealing keys)
that are applied to the processing are stored and handled in trusted memory only.
We also assume that stack and counters are fully contained in trusted memory.
As with trusted execution in general, a good (pseudo)random
data source inside the TEE domain is needed and assumed to be present.
For some cryptographic primitives, the system models we examine do not
imply any degradation in security. For example, pipelined variants of message
authentication codes like AES-CBC-MAC will not reveal any information outside
the TEE until all the input data has been processed and the result is computed.
This holds irrespective of whether data input is carried out in a pipelined
way or by transmitting the complete data to the TEE prior to MAC calcula-
tion. Thus pipelined operation for MACs is, from a security perspective, trivially
equivalent to baseline operation. A similar argument holds for most common
encryption/decryption modes, such as cipher block chaining or counter modes.
As a rule, only a few inputs and outputs of neighboring cipher blocks affect the
input or output of a given block. Therefore, if the final result is secure when the
complete data is transferred to the TEE prior to the operation, so is trivially an
implementation that during encryption and decryption only temporarily buffers
the small set of blocks with interdependencies. In an AEAD, however, the MAC is
affected by the complete data input, yet in a pipelined setting the TEE will reveal
parts of the outputs prior to receiving all input for the computation of the AEAD
integrity tag. It is this combination of confidentiality and integrity that makes
the problem relevant, especially when applied in system model 2.
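As an illustration of why pipelined MAC computation is harmless, consider an incremental CBC-MAC in C: the only state that persists between blocks is the 16-byte chaining value, which never leaves trusted memory until the final tag is released. This is a sketch under our own naming; aes_encrypt_block stands in for the TEE's block-cipher core.

```c
#include <stdint.h>
#include <string.h>

#define BLK 16

/* Hypothetical single-block cipher call available inside the TEE. */
void aes_encrypt_block(const uint8_t key[BLK], const uint8_t in[BLK],
                       uint8_t out[BLK]);

/* Incremental CBC-MAC: the 16-byte state lives in trusted RAM; input
 * blocks may be fetched one at a time from untrusted RAM.  Because no
 * intermediate value is released, processing the data piecemeal is
 * equivalent to receiving it all before MAC computation starts. */
typedef struct { uint8_t state[BLK]; } cbcmac_ctx;

void cbcmac_init(cbcmac_ctx *c) { memset(c->state, 0, BLK); }

void cbcmac_update(cbcmac_ctx *c, const uint8_t key[BLK],
                   const uint8_t blk[BLK])
{
    for (int i = 0; i < BLK; i++)
        c->state[i] ^= blk[i];            /* XOR in the next block */
    aes_encrypt_block(key, c->state, c->state);
}
```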
4 Proof of Security
In this section we briefly introduce the standard reduction technique for reasoning
about the security of cryptographic algorithms and protocols. Using this
method we present an adversary model definition and a proof outline that covers
our assumptions and requirements listed in Section 3, for the system models
introduced in Section 2.
4.1 Technique
In this paper, we use the same general proof method as was used for the
baseline EAX variant [4]. The proof under standard complexity-theoretic
assumptions, often called the "standard reduction technique", is described in detail
in references [3] and [2]. At a high level the method is as follows. A security
proof can include two parts. The first is a proof in the context of chosen-
plaintext attacks (CPA), where the adversary is given the ability to encrypt any
plaintext using the algorithmic primitive. Its counterpart, the chosen-ciphertext
attack (CCA), allows the adversary to set the ciphertext and observe the resulting
plaintext. Each proof is constructed as a game between an adversary (A) and
a challenger (C) making use of Oracles (O) that abstract the evaluated algorith-
mic primitive in some way, depending on the properties that are proved. In our
models the oracles represent encryption and decryption initialized with
a key; the second model adds an oracle also for OMAC.
The CPA (privacy) security proof is modelled by the adversary using an en-
crypting oracle (Oe). The game is defined as follows: the challenger C secretly
flips a fair coin b and answers A's (up to q) queries either with real encryptions
from Oe (b = 1) or with random bits (b = 0); A then guesses b. The advantage
of A is calculated as the difference between its probability of success with oracle
usage and without it.
The CCA (authenticity) security proof uses two oracles: an encrypting oracle
(Oe) and a decrypting one (Od). The slightly more complex game starts out like
the CPA game, but after receiving the result from C, A is allowed to continue
and submit up to σ adaptive queries to the decryption oracle Od (the challenge
string returned by C may, of course, not be resubmitted). Only after these
extended queries does A guess the value of b. Again, the advantage of adversary A
is calculated as the difference between its probability of success with oracle usage
and without it.
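Formally (this formalization is ours, following the convention of [3,2] rather than quoting the papers), the two advantages can be written as

$$\mathrm{Adv}^{CPA}(A) = \Bigl|\Pr[A^{O_e} = 1] - \Pr[A^{\$} = 1]\Bigr|, \qquad \mathrm{Adv}^{CCA}(A) = \Bigl|\Pr[A^{O_e,O_d}\ \mathrm{forges}] - \Pr[A^{\$}\ \mathrm{forges}]\Bigr|,$$

where $\$$ denotes a source of random bits replacing the oracle.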
The baseline EAX mode of operation has been proved secure against CCA and
CPA attacks. Since the pipelined variant is a derivation of the standard EAX we
can use reduction to show that the pipelined variant is as secure as the baseline
one. In this proof by reduction, we use an adversary model where an adversary
B attacks baseline EAX E by using an adversary A attacking the new pipelined
EAX variant E′, both set up with the same parameters (keys). For the game
it will also be necessary to show that adversary B can simulate all oracles
that would be used by A. The game is set up as follows: suppose there exists
an adversary A which can attack algorithm E′ with advantage Adv(A) = ε.
Adversary B wants to break algorithm E (for which a security proof already
exists) by making use of A, such that Adv(B) ≥ Adv(A) = ε.
In other words, the game shows that if we can attack the modified algorithm
E′ then we can attack the original system E in the way we built adversary B.
But as a security proof already exists for E, our premise of the existence of A is
contradicted, thereby proving the security of E′.
4.2 Analysis
Correctness of the pipelined EAX in our first system model (Figure 2(b)) is
straightforward. Intuitively, the attacker gains no advantage in the pipelined
setting over the baseline setting, since inputs and outputs are not interleaved.
For the sake of completeness, we present the proof in
Appendix A.
In our second system model intermediate computation results are returned
to untrusted memory during algorithm execution. Thus the possibility of an
adaptive attack cannot be ruled out immediately. We use the terminology and
definitions from [4]. In all algorithms, the return statement denotes the return-
ing of data to untrusted memory, not the termination of algorithm execution.
The Read primitive is used to explicitly indicate data input from untrusted
memory. The interactions between A, B and g are shown in Figure 4.
Proof. We begin with the CPA (privacy) claim. Let A be an adversary using re-
sources (q, σ) that is trying to distinguish algorithm 1 from a source of random
bits. We will construct an adversary B with resources (σ1 , σ2 ) that distinguishes
the OMAC algorithm3 from a source of random bits. Adversary B has an oracle
g2 that responds to queries (t, M, s) ∈ {0, 1, 2} × {0, 1}∗ × N with a string
{M1 , S0 , S1 , . . . , Ss−1 }, where each named component is an l-bit string. Oracle
g2 is the OMAC algorithm. Algorithm 3 describes adversary B:
We may assume that A makes q > 1 queries, so adversary B uses 3q queries.
Then, under the conventions for the data complexity, adversary B uses at most
(σ1, σ2) resources. Observe that Pr[A^{Enc2} = 1] = Pr[B^{OMAC} = 1] and
Pr[A^{$} = 1] = Pr[B^{$} = 1]. Using Lemma 4 from [4] we conclude that

$$\mathrm{Adv}_{Alg4}^{CPA}(A) = \Pr[A^{Alg4}=1] - \Pr[A^{\$}=1] = \Pr[B^{OMAC}=1] - \Pr[B^{\$}=1] \le \mathrm{Adv}_{OMAC}^{dist}\!\left(\sigma, \tfrac{\sigma}{2}\right) \le \frac{1.5\,\sigma + 3}{2^{l}} \le \mathrm{Adv}_{EAX}^{CPA}$$

This means that the pipelined EAX described in Alg. 1 is as private as the original
EAX. This completes the privacy claim.
3 The construction of adversary B is adapted to a specific proof setup presented in [4],
and uses a "tweakable OMAC extension" encapsulated in Lemma 4 of [4] and its proof.
Lemma 4 asserts the pseudorandomness of the OMAC algorithm and provides an
upper bound for the advantage of the adversary.
For CCA (authenticity), reusing the naming, let A be an adversary attacking
the authenticity of Algorithms 1 and 2. To estimate the advantage of A, we
construct from A (the authenticity-attacking adversary) an adversary B (with
oracles for g2 and g3, intended for forging the original AES-EAX primitive).
Algorithm 3 simulates the oracle Oe, and Algorithm 4 simulates the decryption
oracle Od:
It is easy to see that adversary B can simulate both the oracles Oe and Od for
A indistinguishably from the real challenger of the AES-EAX primitive. Thus,
the advantage of adversary B in attacking the authenticity of Algorithms 1 and 2 can
be calculated as follows:
$$\mathrm{Adv}^{CCA}(B) = \Pr[B^{EAX}\ \mathrm{forges}] - \Pr[B^{\$}\ \mathrm{forges}] = \mathrm{Adv}^{CCA}(A)$$

This completes the claim and the proof.
5 Implementation Pitfalls
Although we proved the pipelined EAX variant secure, adequate care is needed
when it is put into practice. In this section, we outline two potential
pitfalls.
the user is necessary in order to ensure security from the user’s perspective. This
applies both in the pipelined as well as in the baseline setting.
Although it has no bearing on the security from the perspective of the TEE,
the pipelined variant of the unsealing algorithm shown in Figure 3(b) is equiv-
alent to the baseline variant only if the series of ciphertexts {c0 , c1 , . . . , cn−1 }
in the first phase of the pipelined variant is exactly the same as the series of
ciphertexts in the second phase (after T ag is validated as T rue). In practice this can
be ensured by using re-encryption: for example, in the first phase, the TEE will
output re-encrypted blocks c′i while processing input blocks ci, and expects the
set of c′i to be provided to the second phase.
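A sketch of this two-phase flow in C follows. All names are ours (mac_update, reenc, and so on stand for TEE-internal primitives), so this is an illustration of the data flow rather than the paper's implementation: phase 1 both authenticates each ci and binds it to this run by re-encrypting it under a fresh TEE-internal key K′, and phase 2 only decrypts blocks carrying that binding.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define BLK 16

/* Assumed TEE-internal primitives (names are ours). */
void mac_update(const uint8_t blk[BLK]);
bool mac_final_matches(const uint8_t tag[BLK]);
void reenc(const uint8_t kprime[BLK], const uint8_t in[BLK], uint8_t out[BLK]);
void redec(const uint8_t kprime[BLK], const uint8_t in[BLK], uint8_t out[BLK]);
void eax_decrypt_block(const uint8_t c[BLK], uint8_t p[BLK]);

/* Phase 1: authenticate every c_i and bind it to this run via K'. */
bool phase1(const uint8_t kp[BLK], const uint8_t (*c)[BLK],
            uint8_t (*cprime)[BLK], size_t n, const uint8_t tag[BLK])
{
    for (size_t i = 0; i < n; i++) {
        mac_update(c[i]);
        reenc(kp, c[i], cprime[i]);   /* c'_i returned to untrusted RAM */
    }
    return mac_final_matches(tag);    /* phase 2 may run only if true  */
}

/* Phase 2: only blocks carrying phase 1's re-encryption decrypt sanely. */
void phase2(const uint8_t kp[BLK], const uint8_t (*cprime)[BLK],
            uint8_t (*p)[BLK], size_t n)
{
    uint8_t c[BLK];
    for (size_t i = 0; i < n; i++) {
        redec(kp, cprime[i], c);
        eax_decrypt_block(c, p[i]);
    }
}
```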
6 Reference Implementation
Based on the proofs of Algorithm 7 and Algorithm 1, and the insight on pitfalls,
we have implemented and deployed EAX using AES-128 as shown in Algorithm
6. We apply a small simplifying constraint to the EAX inputs: the lengths of
the EAX associated data and of the nonce are required to be exactly the block
internal structures of EAX significantly since two data padding code branches
can be omitted completely. Although this approach sacrifices generality, neither
compatibility nor the original security proofs are affected.
bytes (if any) in the last vector element, provided that it is not block-sized.
ADDPADBYTE(x) adds a termination marker to the vector block in accordance
with [4], and PART indicates that the operation is applied to a byte vector
which is not block-sized. All temporary variables t1, t2, t3 and t4 are block-sized
units.
The innermost operation of EAX is clearly visible on lines 8-11. The counter
(in t1) drives the block cipher and produces a key stream into t4, and the CBC-
MAC is accumulated into t2 on each round. t3 is the temporary buffer that
guarantees the integrity of the ci as explained in Section 5.
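In C, the core loop described above might look as follows. This is our reconstruction of the structure, not the shipped 742-byte implementation; aes_encrypt_block stands for the embedded AES core, and the OMAC tweaks, padding, and the t3 re-encryption buffer are elided.

```c
#include <stdint.h>

#define BLK 16

/* Assumed TEE-internal AES core. */
void aes_encrypt_block(const uint8_t k[BLK], const uint8_t in[BLK],
                       uint8_t out[BLK]);

static void ctr_increment(uint8_t t1[BLK])   /* big-endian +1 */
{
    for (int i = BLK - 1; i >= 0 && ++t1[i] == 0; i--)
        ;
}

/* One round of the EAX core loop, in the spirit of lines 8-11 of the
 * algorithm described in the text (variable names follow the paper).
 * t1: CTR counter block   t2: running CBC-MAC over the ciphertext
 * io: one block, plaintext in / ciphertext out */
void eax_encrypt_round(const uint8_t k[BLK], uint8_t t1[BLK],
                       uint8_t t2[BLK], uint8_t io[BLK])
{
    uint8_t t4[BLK];
    aes_encrypt_block(k, t1, t4);            /* keystream from counter */
    ctr_increment(t1);
    for (int i = 0; i < BLK; i++) {
        io[i] ^= t4[i];                      /* CTR encryption         */
        t2[i] ^= io[i];                      /* fold ciphertext into   */
    }                                        /* the CBC-MAC state      */
    aes_encrypt_block(k, t2, t2);
}
```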
The EAX implementation with the constraints outlined above is size-efficient.
The algorithm supporting both encryption and decryption and implemented in
C compiles to 742 bytes for an OMAP2/OMAP3 processor with ARM and an
embedded AES block implementation. Algorithm memory (stack) consumption
is a fixed 168 bytes, satisfying the Θ(1) requirement in Section 3.
7 Related Work
Since the concept of a hardware-assisted TCB was re-invigorated around a
decade ago, a number of techniques to secure the “virtual” memory of the trusted
execution environment have been proposed. One of the first results was
execute-only virtual memory (XOM) [10], an important stepping stone
for trustworthy computing, although it does not consider data protection.
The work on the AEGIS secure processor [12,13] introduced a secure com-
puting model that highlights the operation of a security kernel running in an
isolated environment, shielded from both physical and software attacks. Among
other features, AEGIS implemented a memory management unit (MMU) that
protects against physical attacks by deploying stateful, authenticated encryption
for virtual memory blocks stored in untrusted memory regions. A comparison of
cryptographic primitives suitable for implementing such a secure virtual memory
manager in hardware can be found in [16].
This work examines the implementation pitfalls and security proof in the
context of implementing EAX, one well-known AEAD. We prove security for
that AEAD in two given models relevant to TEE implementation. Prior work
[5,6], which addresses the problem and provability of "online" encryption (system
model 2) in a wider context, takes another route and also provides alternative
constructions for rendering a cryptographic primitive secure in this model.
9 Conclusion
We have described one example of an AEAD that can be proved correct in a com-
putation context where not all data memory during the algorithm computation
is assumed to be trustworthy. The hardware architecture introduced in Figure 1
is new to algorithm analysis, although devices with such properties are widely
deployed. We have proved AES-EAX secure in this setup, and provided insight
into the modifications that need to be made to a conventional EAX algorithm to
realize it securely in the pipelined setting.
The pipelined AES-EAX presented and analyzed in this paper is commercially
deployed as part of a trusted device architecture.
References
1. ARM. Trustzone-enabled processor,
http://www.arm.com/pdfs/DDI0301D_arm1176jzfs_r0p2_trm.pdf
2. Bellare, M., Rogaway, P.: The game playing technique (2004),
http://eprint.iacr.org/2004/331
3. Bellare, M., Rogaway, P.: Random oracles are practical: a paradigm for design-
ing efficient protocols. In: CCS 1993: Proceedings of the 1st ACM Conference on
Computer and Communications Security, pp. 62–73. ACM, New York (1993)
4. Bellare, M., Rogaway, P., Wagner, D.: The EAX Mode of Operation. In: Roy, B.,
Meier, W. (eds.) FSE 2004. LNCS, vol. 3017, pp. 389–407. Springer, Heidelberg
(2004), doi:10.1007/978-3-540-25937-4-25
5. Boldyreva, A., Taesombut, N.: Online Encryption Schemes: New Security Notions
and Constructions. In: Okamoto, T. (ed.) CT-RSA 2004. LNCS, vol. 2964, pp. 1–14.
Springer, Heidelberg (2004), doi:10.1007/978-3-540-24660-2-1
6. Fouque, P.-A., Joux, A., Martinet, G., Valette, F.: Authenticated On-Line En-
cryption. In: Matsui, M., Zuccherato, R.J. (eds.) SAC 2003. LNCS, vol. 3006,
pp. 145–159. Springer, Heidelberg (2004), doi:10.1007/978-3-540-24654-1-11
7. GlobalPlatform Device Technology: TEE Internal API Specification. GlobalPlatform,
version 0.27 (September 2011),
http://www.globalplatform.org/specificationform.asp?fid=7762
8. Intel Corporation: Trusted eXecution Technology (TXT) – Measured Launched
Environment Developer's Guide (December 2009)
9. Kostiainen, K., Ekberg, J.-E., Asokan, N., Rantala, A.: On-board credentials with
open provisioning. In: ASIACCS 2009: Proceedings of the 4th International Sym-
posium on Information, Computer, and Communications Security, pp. 104–115.
ACM, New York (2009)
10. Lie, D., Thekkath, C., Mitchell, M., Lincoln, P., Boneh, D., Mitchell, J., Horowitz,
M.: Architectural support for copy and tamper resistant software. SIGPLAN
Not. 35(11), 168–177 (2000)
11. Srage, J., Azema, J.: M-Shield mobile security technology, TI White paper (2005),
http://focus.ti.com/pdfs/wtbu/ti_mshield_whitepaper.pdf
12. Suh, G.E., Clarke, D., Gassend, B., van Dijk, M., Devadas, S.: Efficient
memory integrity verification and encryption for secure processors. In: MICRO 36:
Proceedings of the 36th Annual IEEE/ACM International Symposium on Microar-
chitecture, p. 339. IEEE Computer Society, Washington, DC (2003)
13. Suh, G.E., O'Donnell, C.W., Sachdev, I., Devadas, S.: Design and implemen-
tation of the AEGIS single-chip secure processor using physical random functions.
In: ISCA 2005: Proceedings of the 32nd Annual International Symposium on Com-
puter Architecture, pp. 25–36. IEEE Computer Society, Washington, DC (2005)
14. Sundaresan, H.: OMAP platform security features, TI White paper (July 2003),
http://focus.ti.com/pdfs/vf/wireless/platformsecuritywp.pdf
15. Trusted Platform Module (TPM) Specifications,
https://www.trustedcomputinggroup.org/specs/TPM/
16. Yan, C., Rogers, B., Englender, D., Solihin, Y., Prvulovic, M.: Improving cost,
performance, and security of memory encryption and authentication. In: 33rd
International Symposium on Computer Architecture, ISCA 2006, Boston, MA,
pp. 179–190 (2006)
Appendix A
The first model that we consider is the one where plaintext inside the TEE is
encrypted for storage in untrusted memory, and vice versa for decryption. For
the encryption primitive we will use the standard reduction technique to reason
about whether the encrypted content can be released to an adversary before the
whole primitive has completed.
In this model the decryption primitive is unmodified and need not be analyzed,
as the decrypted plaintext is stored in the TEE and thus does not become
available to the adversary during the execution of the primitive. An implemen-
tation must still adhere to a rule similar to the one for encryption, i.e., any
encrypted block has to be moved to trusted memory prior to the integrity check
and subsequent decryption; otherwise an adversary could decouple
the data that is integrity-checked from the data being decrypted.
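This rule is essentially a defence against a time-of-check/time-of-use race. A minimal C sketch, with assumed helper names, is:

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define BLK 16

/* Assumed TEE-internal helpers. */
bool mac_check_block(const uint8_t c[BLK]);
void decrypt_block(const uint8_t c[BLK], uint8_t p[BLK]);

/* WRONG would be to verify and decrypt directly from untrusted RAM:
 * the adversary could swap the block between the check and the
 * decryption.  RIGHT is to copy into trusted RAM first, then check
 * and decrypt the copy, so both steps see the same bytes. */
void unseal_block(const uint8_t *untrusted_c, uint8_t p[BLK])
{
    uint8_t c[BLK];                  /* trusted (isolated) RAM */
    memcpy(c, untrusted_c, BLK);
    if (mac_check_block(c))
        decrypt_block(c, p);
}
```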
Algorithm 7 is an abstraction of the implementation of pipelined EAX, and
returns encrypted blocks as they are generated.
Proof. We begin with the CPA claim. Let A be an adversary using resources
(q, σ) that is trying to distinguish Algorithm 7 from a source of random bits. We
construct an adversary B that distinguishes the original EAX algorithm from
a source of random bits. Adversary B has an oracle g1 that responds to query
(N, H, M ) ∈ {0, 1}l × {0, 1}l × {0, 1}∗ with a string C = {c0 , c1 , . . . , cn−1 }, T ag.
Each named component is an l-bit string. Algorithm 8 describes the operation
of adversary B using g1 :
We may assume that A makes q > 1 queries to its oracle, and adversary B uses
the same number of queries. Also, Pr[A^{Alg2} = 1] = Pr[B^{EAX} = 1]. We assume
that A is nonce-respecting4, B is length-committing5, and Pr[A^{$} = 1] = Pr[B^{$} = 1].
Thus, we conclude that

$$\mathrm{Adv}_{Alg2}^{CPA}(A) = \Pr[A^{Alg2}=1] - \Pr[A^{\$}=1] = \Pr[B^{EAX}=1] - \Pr[B^{\$}=1] = \mathrm{Adv}_{EAX}^{CPA}(B)$$

4 An adversary is nonce-respecting if its queries never repeat a nonce value.
5 Adversary B is length-committing if it consults its own oracles with the appropriate
data block lengths implied by the needs of adversary A.
Auditable Envelopes: Tracking Anonymity Revocation Using Trusted Computing
M. Smart and E. Ritter
1 Introduction
In an election scenario, the authorities may wish to trace Alice's ballot back to her. After the election, Alice may wish to see
whether her anonymity has been revoked or not. To do this, she merely requests
to see the appropriate envelope from the authorities (i.e., that with her ballot
serial number on it), and verifies that the envelope is still sealed.
We can apply this abstraction to a number of other fields, and it particularly
makes sense when considering payment for goods (we discuss this more in Section
5). However, digitising the (auditable) sealed envelope is not at all trivial: it is
intuitively not possible to simply give the authorities an encrypted copy of Alice’s
identity: if the key is provided with the ciphertext, then Alice has no way to know
whether it has been used. If the key is not provided, then the authorities cannot
do anything with the ciphertext anyway, without contacting Alice (who, as a
rogue user, may deliberately fail to provide information) [1]. As a result, we
must consider that some sort of trusted platform is required, in order for Alice
to be convinced that her anonymity has not been revoked. In this work, we detail
a protocol which uses trusted computing—specifically, the TPM—to assure Alice
in this way.
In this paper, we present a protocol that uses the TPM to assure a user of whether or not their identity has been revealed:
we call this property non-repudiation of anonymity revocation. Our motivation
is clear: if we are to have protocols providing anonymity revocation, then it
must be possible for a user to determine when their anonymity is revoked. The
reasoning for this is twofold: not only does a user have the right to know when
they have been identified (generally, as a suspect in a crime), but the fact that
anonymity revocation is traceable is also beneficial:
. . . the detectability of inappropriate actions and accountability for
origination suffices to prevent misbehaviour from happening [22, p. 5]
Though protocols exist in electronic commerce which permit this ([11], for ex-
ample), the techniques used are not widely applicable, for reasons discussed
above. We consider preliminary discussions of “escrowed data” stored in a dig-
ital envelope which use monotonic counters [1], and discuss the use of virtual
monotonic counters [14] to allow multiple tokens to be securely stored by a single
entity.
1.3 Structure
In Section 2, we provide some background in Trusted Computing and the TPM.
In Section 3, we discuss our trust requirements for the protocol, which itself
is presented in Section 4. We discuss applicability of the protocol in Section 5,
give a short discussion on the security of the protocol in Section 6, and finally
conclude.
The value of a virtual monotonic counter is then the value of the global clock
at the last time the virtual counter’s IncrementCounter command was executed.
This consequently means that the value of a counter each time it is incremented
cannot be predicted deterministically—we can merely say with certainty that
the value of the counter will only monotonically increase. As we discuss further
in the conclusion, this does not present a problem for us.
The IncrementCounter operation is then implemented using the TPM’s API
command TPM IncrementCounter, inside an exclusive, logged transport session,
using the ID of the counter in question, and a nonce nS generated by the client
to prevent replay. The result of the final TPM ReleaseTransportSigned operation
is a data structure including the nonce, and a hash of the transport session log,
which is used to generate an IncrementCertificate.
The ReadCounter operation is more complex, and involves the host (the “iden-
tity provider”, idp, for us) keeping an array of the latest increment certificates
[14, p. 33] for each virtual counter, returning the right one when the client re-
quests it. In order to prevent reversal of the counter’s value, however, the host
must send the current time certificate, the current increment certificate, and all
of the previous increment certificates. Verification of the counter’s value then
involves checking that each previous increment certificate is not for the counter
whose ID has been requested.
We do not go into further implementation specifics, but instead refer interested
readers to [14, p. 32] for further information.
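To illustrate the check just described, here is a simplified verifier in C. The structure and field names are illustrative only; the real certificate formats and verification rules are defined in [14].

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified view of an increment certificate (fields are illustrative;
 * see Sarmenta et al. [14] for the real structures). */
typedef struct {
    uint32_t counter_id;   /* which virtual counter was incremented  */
    uint64_t clock_value;  /* global TPM counter value at that time  */
    uint8_t  sig[256];     /* TPM signature over the transport log   */
} incr_cert;

bool sig_ok(const incr_cert *c);   /* assumed signature check */

/* Verify a ReadCounter response: the host returns the latest increment
 * certificate for our counter plus every subsequent increment
 * certificate.  If any of those later certificates is for our counter
 * ID, the host is concealing a more recent increment, so the reported
 * value cannot be trusted. */
bool verify_counter(uint32_t id, const incr_cert *latest,
                    const incr_cert *later, size_t n_later)
{
    if (!sig_ok(latest) || latest->counter_id != id)
        return false;
    for (size_t i = 0; i < n_later; i++)
        if (!sig_ok(&later[i]) || later[i].counter_id == id)
            return false;
    return true;
}
```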
3 Trust Model
In our work, we make the following assumptions:
1. Alice and the identity provider idp (discussed in the next section) trust the
TPM in Alice’s machine, by virtue of it attesting to its state (and therefore,
the state of Alice’s machine)
2. All users trust idp, by virtue of it attesting to its state (and therefore, the
state of idp’s machine)
3. The judge is trusted to only authorise anonymity revocation where necessary
In a strict sense, it is not necessary for users to deliberately place trust in any
TPM (whether it is in the identity provider’s machine, or the user’s): both the
user’s and the identity provider’s TPMs have the ability to verify the correctness
of the other’s TPM and host machine, where the TPM itself is assumed to be a
tamper-resistant hardware module. Instead, therefore, any trust we place must
be in the manufacturer of the TPM, to construct such a device according to its
correct specification. Note as a consequence that idp is not a trusted third party:
the fact that it is worthy of trust can be determined by any user.
4 Protocol
We begin by explaining our protocol from a high level, and then go into more
implementation-specific detail. Note that we assume the availability of standard
public-key cryptographic techniques, hashing and signature protocols.
[Protocol flow (figure residue; recoverable steps): 3. (pkTA, skTA) := TPM_CreateWrapKey(binding, ALICE-PCR_INFO, kTA, ...); 4. (pkI, skI) := TPM_CreateWrapKey(binding, IDP-PCR_INFO, kI, ...); 5. Alice sends nonce nc; idp runs CreateCounter(nc); 7. idp returns idm = {id, CreateCertificate, signidp(hash(id||CounterID))}pkTA, where id = {id}pkI; Alice runs TPM_LoadKey2(kTA, ...) and TPM_UnSeal(idm, kTA); 8. ReadCounter(CounterID, na); 9. ReadCertificate; on revocation: IncrementCounter(CounterID, nS), TPM_LoadKey2(kI, ...), TPM_UnSeal(id, kI); 12. signidp({id}s); 13. ReadCounter(CounterID, na); 14. ReadCertificate.]
Our scenario is as follows. Alice wishes to engage in a user-anonymous protocol with a
service provider, s: Alice normally remains anonymous, but s has some interest
in revoking her anonymity under certain circumstances (s can obtain a signed
request for the user’s identity from a judge). Alice would like to know whether
or not her anonymity has been revoked at some point after her interaction with
s is complete.
In order to present a solution, we introduce a third party, the identity provider,
idp. The identity provider runs trusted hardware, and attests to the state of
his machine in an authenticated encrypted transport session with Alice’s TPM
(again, it should be noted that this means idp is not a trusted third party,
but a party which proves that it is trustworthy). Once Alice is assured that
she can trust idp’s machine, and idp is likewise assured of the trustworthiness
of Alice’s machine, idp generates a virtual monotonic counter specifically for
Alice’s identity, using a nonce sent by Alice. He then encrypts Alice’s identity
using a key generated by Alice’s TPM. This is concatenated with a certificate
produced by the creation of the counter, hashed, and signed. The signature,
certificate and encrypted ID—which we will refer to as a pseudonym—are sent
to Alice, encrypted with a binding wrap public key to which only her TPM has
the private counterpart.
Alice now reads the counter generated for her. She can then send whatever
message is necessary to s, along with the particulars of the counter relating to
her ID, and idp’s signature thereof. The service provider is able to verify the
validity of the signed hash on Alice’s identity, and can store it for further use.
Should s request to view Alice’s identity, he contacts idp with a signature
generated by a judge, on the pseudonym and particulars of the certificate (the
details originally sent to him). The protocol dictates that idp first increments
the virtual monotonic counter associated with the certificate received, and can
then load the appropriate key, and decrypt Alice’s identity. Alice is later able to
request the value of her monotonic counter once again, allowing her to determine
whether or not her anonymity was revoked.
Stage 1. Alice begins with her TPM and the TPM of the identity provider,
idp, engaging in an encrypted transport session 2 . She invents a nonce, ca , and
challenges idp’s TPM to reveal the state of a number of its platform configuration
registers (PCRs—a set of protected memory registers inside the TPM, which
contain cryptographic hashes of measurements based on the current state of the
host system), using the TPM Quote command (with ca being used for freshness).
Alice can use this information to determine if the TPM is in a suitable state (i.e.,
if its host machine is running the correct software). The identity provider’s TPM
does the same with Alice’s TPM, using a different nonce ci . In this manner, both
platforms are assured of the trustworthiness of the other.
Alice proceeds to have idp’s TPM generate a fresh RSA keypair kI = (pkI , skI )
using the TPM CreateWrapKey command, binding the key to the PCR informa-
tion she acquired. This ensures that only a TPM in the same state as when
the TPM Quote command was executed is able to open anything sealed with
pkI . Similarly, idp’s TPM has Alice’s TPM generate a binding wrap keypair
kT A = (pkT A , skT A ), where the private key is accessible only to Alice’s TPM.
Next, idp receives a nonce nc from Alice. He then creates a virtual monotonic
counter [14], which he ‘ties’ to Alice’s identity, using the CreateNewCounter com-
mand with nc . This returns a CreateCertificate, detailing the ID number of the
counter, CounterID, and the nonce used to create it. idp proceeds to produce a
pseudonym id = {id}pkI for Alice, an encryption of her identity (which we assume
it knows) using the TPM Seal command and the binding wrap key pkI . id and
the ID of the counter, CounterID, are concatenated and hashed. The signed hash, certificate and pseudonym are then sent to Alice as idm, encrypted under her binding wrap key pkTA.
2 We note that idp could also undergo direct anonymous attestation [3] with Alice to
attest to the state of his machine. However, this is unnecessary for us, as neither
Alice nor idp need to (or could) be anonymous at this stage.
Stage 2. The second stage, which can happen at any time in future, is where
Alice communicates with whichever service provider she chooses (note that she
may choose to use the same id token with multiple service providers, or may
generate a new token for each—it would obviously be sensible to do the latter,
to prevent linkability between service providers). Where Alice’s message (which
might be a tuple containing her vote, or a coin, or some exchangeable object) is
represented by m, she sends the tuple {m, id, CounterID, signidp(hash(id||CounterID))}pks
to s. Note that the whole message is encrypted with the public key of the service
provider, preventing eavesdropping. The message m is further processed (how
is outside of the scope of this paper). The signed hash is examined to confirm
that it is indeed a valid signature, by idp, on the pseudonym and Counter ID
provided. The service provider can then store CounterID, id for later use.
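For concreteness, the service provider's check could look like the following C sketch; hash2 and verify_idp_sig are assumed stand-ins for the protocol's hash and signature-verification primitives, with illustrative sizes.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Assumed primitives: a hash over two concatenated inputs and a
 * signature verification under idp's public key. */
void hash2(const uint8_t *a, size_t alen,
           const uint8_t *b, size_t blen, uint8_t digest[32]);
bool verify_idp_sig(const uint8_t digest[32],
                    const uint8_t *sig, size_t siglen);

/* s checks that the pseudonym and counter ID it was handed are the
 * ones idp certified, then stores (CounterID, pseudonym) for later. */
bool check_pseudonym(const uint8_t *pseudonym, size_t plen,
                     const uint8_t *counter_id, size_t clen,
                     const uint8_t *sig, size_t siglen)
{
    uint8_t d[32];
    hash2(pseudonym, plen, counter_id, clen, d);  /* hash(id || CounterID) */
    return verify_idp_sig(d, sig, siglen);
}
```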
Now, Alice can, at any point, check the value of her virtual monotonic
counter. The service provider may wish to discover her identity, and so will
seek a signed request from a judge, generating a nonce nS . He sends this re-
quest, signJudge (id, nS , CounterID), to idp. Note that in order to decrypt Alice’s
pseudonym, idp must use the key kI —bound to the correct state of his TPM’s
PCRs—which Alice selected. This means that he needs to be in the correct
state. He begins by incrementing Alice’s virtual monotonic counter using the
command IncrementCounter(CounterID, nS ), and then loads the appropriate key
kI using the TPM LoadKey2 command. He can then decrypt Alice’s identity using
TPM UnBind. Finally, idp returns id, encrypted for s. Again, what s does with
Alice’s identity is outside of the scope of this paper.
At any later time, Alice can check the virtual monotonic counter value by
contacting idp and executing the ReadCounter command with a fresh nonce na. If
idp was correctly following the protocol (which, using a verified TPM, he must
have been), Alice will know—by determining whether the value of the counter
has increased—if her identity has been revealed.
A key point of the protocol is that the identity provider is automatically
trusted to follow it, as a consequence of the encrypted transport session in Stage
1. When Alice quotes the PCRs of the identity provider’s TPM, she makes it gen-
erate a key bound to the correct machine state that it is currently in (presumably,
Alice would terminate any session where an erroneous result of TPM Quote was
reported). Even if idp were to become corrupted after the encrypted transport
session, this corruption would alter its TPM’s PCRs, protecting Alice’s identity
from rogue decryption.
5 Applicability
In this section, we discuss some use cases for the protocol: as mentioned earlier,
we believe it to have a number of areas of applicability. Here we focus on digital
cash and electronic voting, two classes of protocol where anonymity is critical.
If we take any digital cash protocol where the identity of the coin spender is
in some way encrypted (whether stored on a remote server [10] or encoded into
the coin itself [13]), we can envisage a situation in which a user either spends
a digital coin twice, or participates in an illegal transaction. An authority will
have some interest in this, and thus requests that the Bank trace the coins spent
by the user, in order to identify her.
In the case of the protocols listed above, the identity of the user is simply
decrypted (albeit by two separate authorities in the first case). The user has no
way to know that she was traced, until she is apprehended! Now, we modify each
protocol such that:
– in the case of protocols where the spender ID is encoded onto the coin, the
coins instead contain the user’s identity—encrypted using the wrap key made
for idp—and the CounterID, with the signed hash of both;
– in the case of a database storing the spender ID, with a lookup value in
each key, we proceed as discussed above, with the spender providing the
idp-encrypted ID token which is then stored in the database.
This done, the coin spender knows that each coin can only be linked back to
her with the cooperation of idp, who (since he is following the protocol) must
increment the appropriate counter, allowing the spender to know if she is iden-
tified. Note that a protocol providing revocation auditability already exists [11],
but requires knowledge a priori of who is to be traced, making the protocol
unsuitable for other applications.
6 Analysis
In this section we briefly discuss the security properties of the protocol. The
main property that we achieve is that Alice is always able to determine whether
her anonymity is revoked or not (non-repudiation of anonymity revocation). This
property is satisfied as a result of the knowledge that, having attested to the state
of his TPM (and hence, the software being run on the host), idp will either:
– act according to the protocol specification, or
– be unable to decrypt Alice’s identity.
Our reasoning is as follows. If the Identity Provider adheres to the specification,
he generates a counter for Alice’s identity using a nonce she supplies. He encrypts
her identity using a keypair which can only be used again by a TPM in the same
state which Alice originally accepted.
The information that idp generates to send to Alice must be correct, other-
wise idp is deviating from the protocol. It follows that, when s requests Alice’s
anonymity to be revoked, idp must first increment the associated counter. If idp
does deviate from the protocol, he will not be able to use the same key kI later
on to decrypt Alice’s identity, as that key is bound to his original TPM state
(which would change if different, or malicious, software were used).
Thus, the most a rogue idp could achieve is suggesting Alice’s anonymity has
been revoked when it has not (i.e., tampering with the counter), opening up idp
to further questioning (it is hence not in the identity provider’s interest to lie to
Alice in this way). Since the counter must always be incremented before Alice’s
identity is decrypted, Alice will always know when she has been identified, by
querying the counter.
We next consider Alice’s interaction with s. In her communication with s, Alice
provides her pseudonym and the counter ID tied to it, together with a signed
hash of these values (as originally provided to her by idp). This convinces s that
the identity provided is genuine. This leads us to the issue of eavesdropping at-
tacks, allowing a user to illegitimately obtain the pseudonym of another user,
and thus ‘frame’ an innocent victim for a crime. Note that without identifying
Alice immediately, s cannot be further convinced that the pseudonym is indeed
hers. However, our protocol prevents this problem from arising: in the message
idm sent from idp to Alice, Alice’s pseudonym and counter information are en-
crypted using a binding wrap key, meaning that only her TPM can obtain these
values. The only other message where these two values are together is in Alice’s
communication with s, and here, the entire message is encrypted for s.
The message containing Alice’s actual identity is signed by idp before being
sent back to s. Hence, providing s trusts idp, he will always obtain Alice’s le-
gitimate identity by following the protocol. We might consider that s does not
trust idp, in which case we could request that s and idp also undergo some sort
of attestation, like that between Alice and idp. In the case of the digital cash
example presented earlier, we could require that the Bank and Ombudsman each
force idp to attest to its state.
message 10b as he never sees CounterID in the clear. Note that consequently,
message 11 also needs to change.
In this section, we have discussed the security properties of our work. Note
that changes to mitigate against a corrupt service provider are only appropriate
where untrustworthy service providers are a risk—hence we do not include these
changes in the main protocol.
References
1. Ables, K., Ryan, M.D.: Escrowed Data and the Digital Envelope. In: Acquisti, A.,
Smith, S.W., Sadeghi, A.-R. (eds.) TRUST 2010. LNCS, vol. 6101, pp. 246–256.
Springer, Heidelberg (2010)
2. Blackburn, R.: The Electoral System in Britain. Macmillan, London (1995)
3. Brickell, E., Camenisch, J., Chen, L.: Direct Anonymous Attestation. In: Proceed-
ings of the 11th ACM Conference on Computer and Communications Security,
CCS 2004, pp. 132–145. ACM (2004)
4. Camenisch, J., Maurer, U., Stadler, M.: Digital Payment Systems with Passive
Anonymity-Revoking Trustees. Journal of Computer Security 5(1), 69–89 (1997)
5. Challener, D., Yoder, K., Catherman, R., Safford, D., van Doorn, L.: A Practical
Guide to Trusted Computing. IBM Press, Boston (2008)
6. Chen, Y., Chou, J.S., Sun, H.M., Cho, M.H.: A Novel Electronic Cash System
with Trustee-Based Anonymity Revocation From Pairing. Electronic Commerce
Research and Applications (2011), doi:10.1016/j.elerap.2011.06.002
7. Fan, C.I., Liang, Y.K.: Anonymous Fair Transaction Protocols Based on Electronic
Cash. International Journal of Electronic Commerce 13(1), 131–151 (2008)
8. Fuchsbauer, G., Pointcheval, D., Vergnaud, D.: Transferable Constant-Size Fair E-
Cash. In: Garay, J.A., Miyaji, A., Otsuka, A. (eds.) CANS 2009. LNCS, vol. 5888,
pp. 226–247. Springer, Heidelberg (2009)
9. Hou, X., Tan, C.H.: On Fair Traceable Electronic Cash. In: Proceedings, 3rd An-
nual Communication Networks and Services Research Conference, pp. 39–44. IEEE
(2005)
10. Jakobsson, M., Yung, M.: Revokable and Versatile Electronic Money (Extended
Abstract). In: CCS 1996: Proceedings of the 3rd ACM Conference on Computer
and Communications Security, pp. 76–87. ACM Press, New York (1996)
11. Kügler, D., Vogt, H.: Off-line Payments with Auditable Tracing. In: Blaze, M. (ed.)
FC 2002. LNCS, vol. 2357, pp. 269–281. Springer, Heidelberg (2003)
12. Moran, T., Naor, M.: Basing Cryptographic Protocols on Tamper-Evident Seals.
Theoretical Computer Science 411(10) (2010)
13. Pointcheval, D.: Self-Scrambling Anonymizers. In: Frankel, Y. (ed.) FC 2000.
LNCS, vol. 1962, pp. 259–275. Springer, Heidelberg (2001)
14. Sarmenta, L.F., van Dijk, M., O’Donnell, C.W., Rhodes, J., Devadas, S.: Virtual
Monotonic Counters and Count-Limited Objects using a TPM without a trusted
OS. In: Proceedings of the First ACM Workshop on Scalable Trusted Computing,
STC 2006, pp. 27–42. ACM, New York (2006)
15. Smart, M., Ritter, E.: Remote Electronic Voting with Revocable Anonymity. In:
Prakash, A., Sen Gupta, I. (eds.) ICISS 2009. LNCS, vol. 5905, pp. 39–54. Springer,
Heidelberg (2009)
16. Smart, M., Ritter, E.: True Trustworthy Elections: Remote Electronic Voting Using
Trusted Computing. In: Calero, J.M.A., Yang, L.T., Mármol, F.G., Garcı́a Villalba,
L.J., Li, A.X., Wang, Y. (eds.) ATC 2011. LNCS, vol. 6906, pp. 187–202. Springer,
Heidelberg (2011)
17. Tan, Z.: An Off-line Electronic Cash Scheme Based on Proxy Blind Signature. The
Computer Journal 54(4), 505–512 (2011)
18. TCG: Trusted Computing Group: TPM Main: Part 2: Structures of the TPM,
Version 1.2, Revision 103 (October 2006), http://bit.ly/camUwE
19. TCG: Trusted Computing Group: TPM Main: Part 3: Commands, Version 1.2,
Revision 103 (October 2006), http://bit.ly/camUwE
Lockdown: Towards a Safe and Practical Architecture
Amit Vasudevan1, Bryan Parno2, Ning Qu3, Virgil D. Gligor1, and Adrian Perrig1
1 CyLab/Carnegie Mellon University, {amitvasudevan,gligor,perrig}@cmu.edu
2 Microsoft Research, [email protected]
3 Google Inc.
1 Introduction
Consumers currently use their general-purpose computers to perform many sensitive
tasks; they pay bills, fill out tax forms, check account balances, trade stocks, and access
medical data. Unfortunately, increasingly sophisticated and ubiquitous attacks under-
mine the security of these activities. Red/green systems [19,30] have been proposed as
a mechanism for improving user security without abandoning the generality that has
made computers so successful. They are based on the observation that users perform
security-sensitive transactions infrequently, and hence enhanced security protections
need only be provided on demand for a limited set of activities. Thus, with a red/green
system, the user spends most of her time in a general-purpose, untrusted (or “red”) en-
vironment which retains the full generality of her normal computer; i.e., she can install
arbitrary applications that run with good performance. When the user wishes to perform
a security sensitive transaction, she switches to a trusted (or “green”) environment that
includes stringent protections, managed code, network and services at the cost of some
performance degradation.
This work was done while Bryan Parno was still at CyLab/Carnegie Mellon University.
This work was done while Ning Qu was still at CyLab/Carnegie Mellon University.
We find that the performance of untrusted applications is the same or better with partitioning (as opposed to virtualiza-
tion). Lockdown only imposes a 3% average overhead for memory and 2-7% overhead
for disk operations for untrusted applications. Virtualization on the other hand imposes
overhead for all platform hardware with the overhead ranging from 3-81% depend-
ing on the resources being virtualized (§ 7.2). The primary limitation of partitioning
on current systems is the time (13–31 seconds) needed to switch between the two en-
vironments. While we describe several potential optimizations that could significantly
reduce this time, whether this tradeoff between security, performance, and usability is
acceptable remains an open question.
2 Problem Definition
Goals. The goal of a red/green system is to enable a set of trusted software to com-
municate with a set of trusted sites while preserving the secrecy and integrity of these
applications and the data they handle. Protecting trusted software that does not require
network access is a strict subset of this goal. Ideally, this should be achieved without
modifying any hardware or software the user already employs. In other words, a user
should be able to run the same OS (e.g., Windows), launch her favorite browser (e.g.,
Internet Explorer) and connect to her preferred site (e.g., a banking website) via the
Internet in a highly secure manner while maintaining the current level of performance
for applications that are not security-sensitive.
Adversary Model. We assume the adversary can execute arbitrary code within the
untrusted environment and may also monitor and manipulate network traffic to and from
the user’s machine. However, we assume the adversary is remote and cannot perform
physical attacks on the user’s machine.
Assumptions. The first three assumptions below are necessary for any red/green sys-
tem. The last two are particular to Lockdown’s implementation. (i) Trusted Software
and Sites: As we discuss in § 3.2, we assume certain software packages and certain
websites can be trusted to not deliberately leak private data; (ii) Reference Monitor
Security: We assume that our reference monitor code does not contain vulnerabili-
ties. Reducing the complexity and amount of code in the reference monitor (as we do
with Lockdown) allows manual audits and formal analysis to validate this assumption;
(iii) User Abilities: We assume the user can be trained to perform security-sensitive
operations in the trusted environment; (iv) Hardware Support: We assume the user’s
computer supports Hardware Virtualization Extensions (with Nested Page Table sup-
port [10]) and contains a Trusted Platform Module [44] chip. Both technologies are
ubiquitous; and (v) Trusted BIOS: Lockdown uses the BIOS during its installation and
to reset devices, so we must assume the BIOS has not been corrupted. Fortunately, most
modern BIOSes require signed updates [32], preventing most forms of attack.
3 Lockdown’s Architecture
At a high level (Figure 1), Lockdown splits system execution into two environments,
trusted and untrusted, that execute non-concurrently.

Fig. 1. Lockdown System Architecture. Lockdown partitions the platform into two environments; only one environment executes at a time. An external device (which we call the Lockdown Verifier) verifies the integrity of Lockdown, indicates which environment is active and can be used to toggle between them. The shaded portions represent components that must be trusted to maintain isolation between the environments.

This design is based on the belief
that the user has a set of tasks (e.g., games, browsing for entertainment) that she wants
to run with maximum performance, and that she has a set of tasks that are security sen-
sitive (e.g., checking bank accounts, paying bills, making online purchases) which she
wants to run with maximum security and which are infrequent and less performance-
critical. The performance-sensitive applications run in the untrusted environment with
near-native speed, while security-sensitive applications run in the trusted environment,
which is kept pristine and protected by Lockdown. The Lockdown architecture is based
on two core concepts: (i) hyper-partitioning: system resources are partitioned as op-
posed to being virtualized. Among other benefits, this results in greater performance,
since it minimizes resource interpositioning, and it eliminates most side-channel attacks
possible with virtualization; and (ii) trusted environment protection: Lockdown lim-
its code execution in the trusted environment to a small set of trusted applications and
ensures that network communication is only permitted with trusted sites.
3.1 Hyper-partitioning
Since the untrusted environment may be infected with malware, Lockdown must iso-
late the trusted environment from the untrusted environment. Further, Lockdown must
isolate itself from both environments so that its functionality cannot be deliberately
or inadvertently modified. One way to achieve this isolation is to rely on the platform
hardware to partition resources. With platform capabilities such as Single-Root I/O Vir-
tualization (SR-IOV) [29] and additional hardware such as an IOMMU, it is possible
to assign physical devices directly to an environment (untrusted or trusted) [4,17]. This
hardware capability facilitates concurrent execution of multiple partitions without vir-
tualizing devices. Unfortunately, not all devices can be shared currently (e.g., video,
audio) [5] and such platform support is not widely available today [6,17].
CPU and Memory Partitioning. Lockdown partitions the CPU in time by only allowing one environment to execute at a time. The available physical memory in the system is partitioned into three areas: the Lockdown memory region, the untrusted environment's memory region, and the trusted environment's memory region.¹ Lockdown employs Nested Page Tables (NPT)² [10] to restrict each environment to its own memory region. In other words, the NPT for the untrusted environment does not map physical memory pages that belong to the trusted environment, and vice versa. Further, Lockdown employs hardware-based DMA protection within each environment to prevent DMA-based access beyond each environment's memory region.
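To make the memory-partitioning concept concrete, the following minimal C sketch builds a hypothetical one-level view of a nested page table that identity-maps only the pages belonging to the current environment; everything else is left unmapped, so any access outside the environment's region faults into the hypervisor. Real NPT structures are multi-level, and the entry layout and names here are illustrative, not Lockdown's actual code.

    #include <stdint.h>
    #include <stddef.h>

    #define PAGE_SIZE   4096ULL
    #define NPT_PRESENT 0x1ULL
    #define NPT_WRITE   0x2ULL
    #define NPT_USER    0x4ULL

    /* Hypothetical flat NPT: entry i translates guest-physical page i.
     * A zero entry means "not mapped", so the other environment's
     * memory and Lockdown's own region are invisible to this guest. */
    void npt_map_environment(uint64_t *npt, size_t npt_entries,
                             uint64_t env_base, uint64_t env_size)
    {
        for (size_t i = 0; i < npt_entries; i++) {
            uint64_t gpa = i * PAGE_SIZE;
            if (gpa >= env_base && gpa < env_base + env_size)
                npt[i] = gpa | NPT_PRESENT | NPT_WRITE | NPT_USER;
            else
                npt[i] = 0;  /* access triggers a nested page fault */
        }
    }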
Device Partitioning. With hyper-partitioning, both the untrusted and trusted environments use the same set of physical devices. Devices that do not store persistent data, such as video, audio, and input devices, can be partitioned by saving and restoring their states across environment switches. However, storage devices may contain persistent, sensitive data from the trusted environment, or malicious data from the untrusted environment. Thus, Lockdown ensures that each environment is provided with its own set of storage devices and/or partitions. For example, Lockdown can assign a different hard disk to each environment. Alternatively, Lockdown can assign a different partition on the same hard disk to each environment. The challenge is to save and restore device state in a device-agnostic manner, and to partition storage devices without virtualizing them, while providing strong isolation that cannot be bypassed by a malicious OS.
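As Figure 2b later illustrates, storage partitioning can work by intercepting the requests with which the OS selects a disk or sector range and redirecting them to the appropriate per-environment device or partition. The C sketch below shows the idea for a simple LBA-based interface: each environment sees a disk starting at sector 0, and the intercept shifts every request into that environment's slice of the physical disk and range-checks it. The partition sizes and function names are illustrative assumptions, not Lockdown's actual interfaces.

    #include <stdint.h>
    #include <stdbool.h>

    enum env { ENV_UNTRUSTED = 0, ENV_TRUSTED = 1 };

    /* Hypothetical per-environment slices of one physical disk. */
    struct partition { uint64_t start_lba; uint64_t nsectors; };

    static const struct partition part[2] = {
        { .start_lba = 0,        .nsectors = 1u << 24 },  /* untrusted */
        { .start_lba = 1u << 24, .nsectors = 1u << 24 },  /* trusted   */
    };

    /* Called from the hypervisor's intercept of a disk request:
     * remap the guest-visible LBA onto the active environment's
     * slice so neither environment can address the other's data. */
    bool redirect_lba(enum env active, uint64_t guest_lba,
                      uint64_t count, uint64_t *host_lba)
    {
        const struct partition *p = &part[active];
        if (guest_lba + count > p->nsectors)
            return false;                 /* deny out-of-range access */
        *host_lba = p->start_lba + guest_lba;
        return true;
    }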
Lockdown leverages the Advanced Configuration and Power-management Interface (ACPI) [14] to save and restore device states while partitioning non-storage devices. The ACPI specification defines an ACPI subsystem (system BIOS and chipset) and an Operating System Power Management (OSPM) subsystem. With an ACPI-compatible OS, applications and device drivers interact with the OSPM code, which in turn interacts with the low-level ACPI subsystem. ACPI defines four system sleep states in which an ACPI-compliant computer system can be: S1 (power is maintained to all system components, but the CPU stops executing instructions), S2 (the CPU is powered off), S3 (standby), and S4 (hibernation: all of main memory is saved to the hard disk and the system is powered down). Figure 2a shows how an OSPM handles ACPI sleep states S3 and S4. When a sleep command is initiated (e.g., when the user closes the lid on a laptop), the OSPM first informs all currently executing user- and kernel-mode applications and drivers about the sleep signal. They, in turn, store the configuration information needed to restore the system when it awakes. The device drivers use the OSPM subsystem to set the desired device power levels. The OSPM then signals the ACPI subsystem, which ultimately performs chipset-specific operations to transition the system into the desired sleep state. The OSPM polls the ACPI subsystem for a wake signal to determine when it should reverse the process and wake the system. Note that with this scheme, Lockdown does not need to include any device drivers or interpose on device operations. The OS contains all the required drivers that deal directly with the devices, both for normal operation and for saving and restoring device states.
¹ An implementation using the ACPI S4 state for hyper-partitioning (§ 6) requires only two memory regions, Lockdown and the current environment (untrusted or trusted), since ACPI S4 results in the current environment's memory contents being saved to and restored from the disk.
² Also termed Extended Page Tables on Intel platforms.
Fig. 2. Hyper-Partitioning. (a) Lockdown leverages the Advanced Configuration and Power-
management Interface (ACPI) OS sleep mechanism to partition (by saving and restoring states)
non-storage system devices while being device agnostic. (b) Storage devices (e.g., disk) are par-
titioned by intercepting the device selection requests and redirecting device operations to the
appropriate device, based on the current environment. (c) Environment switching is performed
upon receiving a command from the Lockdown Verifier. The OS ACPI sleep steps are modified
by Lockdown to transition between environments (untrusted and trusted).
Figure 2c shows the steps taken for an environment switch, assuming the user starts in the untrusted environment. When the user toggles the switch on the trusted Lockdown Verifier to initiate a switch to the trusted environment (Step 1), the Lockdown Verifier communicates with Lockdown, which in turn instructs the OSPM in the untrusted environment to put the system to sleep (Step 2). When the OSPM in the untrusted environment issues the sleep command to the ACPI subsystem, Lockdown intercepts the command (Step 3), resets all devices, updates the output on the Lockdown Verifier (Step 4), and issues a wake command to the OSPM in the trusted environment (Step 5). Switching back to the untrusted environment follows an analogous procedure.
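The switch sequence can be summarized in code. An ACPI-compliant OS enters a sleep state by setting the SLP_EN bit (bit 13) in the PM1a control register, which gives a hypervisor a natural interception point. The C sketch below assumes such an intercept and hedges everything else: the helper functions are stubbed and the interfaces are illustrative assumptions, not Lockdown's actual implementation.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define SLP_EN (1u << 13)   /* ACPI PM1 control: sleep-enable bit */

    enum env { ENV_UNTRUSTED = 0, ENV_TRUSTED = 1 };
    static enum env active = ENV_UNTRUSTED;

    /* Hypothetical hooks, stubbed so the sketch compiles. */
    static void reset_devices(void)          { puts("resetting devices"); }
    static void verifier_show(enum env e)    { printf("verifier: %s\n",
        e == ENV_TRUSTED ? "trusted" : "untrusted"); }
    static void wake_environment(enum env e) { printf("waking %s OSPM\n",
        e == ENV_TRUSTED ? "trusted" : "untrusted"); }

    /* Invoked when the active guest's OSPM writes the PM1a control
     * register.  If SLP_EN is set and the user requested a switch on
     * the Lockdown Verifier, the sleep command is consumed (Step 3),
     * devices are reset, the verifier output is updated (Step 4), and
     * the other environment is woken (Step 5). */
    bool on_pm1a_write(uint16_t value, bool switch_requested)
    {
        if (!(value & SLP_EN) || !switch_requested)
            return false;                    /* pass the write through */
        reset_devices();
        active = (active == ENV_UNTRUSTED) ? ENV_TRUSTED : ENV_UNTRUSTED;
        verifier_show(active);
        wake_environment(active);
        return true;                         /* write handled here */
    }

    int main(void)
    {
        on_pm1a_write(SLP_EN, true);         /* simulate one toggle */
        return 0;
    }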