Lecture Notes in Computer Science 7344
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Madhu Sudan
Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
Stefan Katzenbeisser, Edgar Weippl,
L. Jean Camp, Melanie Volkamer,
Mike Reiter, Xinwen Zhang (Eds.)

Trust
and Trustworthy
Computing
5th International Conference, TRUST 2012
Vienna, Austria, June 13-15, 2012
Proceedings

Volume Editors

Stefan Katzenbeisser
Melanie Volkamer
Technical University Darmstadt, Germany
E-mail: [email protected]
and [email protected]

Edgar Weippl
Vienna University of Technology and SBA Research, Austria
E-mail: [email protected]

L. Jean Camp
Indiana University, Bloomington, IN, USA
E-mail: [email protected]

Mike Reiter
University of North Carolina at Chapel Hill, USA
E-mail: [email protected]

Xinwen Zhang
Huawei America R&D, Santa Clara, CA, USA
E-mail: [email protected]

ISSN 0302-9743 e-ISSN 1611-3349


ISBN 978-3-642-30920-5 e-ISBN 978-3-642-30921-2
DOI 10.1007/978-3-642-30921-2
Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: 2012938995

CR Subject Classification (1998): C.2, K.6.5, E.3, D.4.6, J.1, H.4

LNCS Sublibrary: SL 4 – Security and Cryptology

© Springer-Verlag Berlin Heidelberg 2012


This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface

This volume contains the proceedings of the 5th International Conference on


Trust and Trustworthy Computing (TRUST) held in Vienna, Austria, during
June 13–15, 2012. Continuing the tradition of the previous conferences, which
were held in Villach (2008), Oxford (2009), Berlin (2010) and Pittsburgh (2011),
TRUST 2012 featured both a technical and a socio-economic track. TRUST thus
continues to provide a unique interdisciplinary forum for researchers, practition-
ers and decision makers to explore new ideas in designing, building and using
trustworthy computing systems. This year’s technical track provided a good mix
of topics ranging from trusted computing and mobile devices to applied cryp-
tography and physically unclonable functions, while the socio-economic track
focused on the emerging field of usable security.
Out of 36 submissions to the technical track and 12 submissions to the socio-
economic track, we assembled a program consisting of 20 papers. In addition,
TRUST 2012 featured a poster session for rapid dissemination of the latest
research results, invited talks, as well as a panel discussion on future challenges
of trust in mobile and embedded devices.
We would like to thank everyone for their efforts in making TRUST 2012 a
success: the members of the Organizing Committee, in particular Yvonne Poul,
for their tremendous help with all aspects of the organization; the members
of the Program Committees of both tracks for their efforts in selecting high-
quality research papers to be presented at the conference; all external reviewers
who helped to maintain the quality of the conference; the keynote speakers and
panel members; and most importantly all authors who submitted their work
to TRUST 2012. Finally, we express our gratitude to our sponsors Intel and
Hewlett-Packard, whose support was crucial for the success of TRUST 2012.

April 2012 L. Jean Camp


Stefan Katzenbeisser
Mike Reiter
Melanie Volkamer
Edgar Weippl
Xinwen Zhang
Organization

Steering Committee
Alessandro Acquisti Carnegie Mellon University, USA
Boris Balacheff Hewlett Packard, UK
Paul England Microsoft, USA
Andrew Martin University of Oxford, UK
Chris Mitchell Royal Holloway, University of London, UK
Sean Smith Dartmouth College, USA
Ahmad-Reza Sadeghi TU Darmstadt / Fraunhofer SIT, Germany
Claire Vishik Intel, UK

General Chairs
Edgar Weippl Vienna University of Technology and
SBA Research, Austria
Stefan Katzenbeisser TU Darmstadt, Germany

Program Chairs (Technical Strand)


Mike Reiter University of North Carolina at Chapel Hill,
USA
Xinwen Zhang Huawei, USA

Program Committee (Technical Strand)


Srdjan Capkun ETH Zurich, Switzerland
Haibo Chen Fudan University, China
Xuhua Ding Singapore Management University, Singapore
Jan-Erik Ekberg Nokia Research Center
Cedric Fournet Microsoft Research, UK
Michael Franz UC Irvine, USA
Tal Garfinkel VMWare
Trent Jaeger Penn State University, USA
Xuxian Jiang NCSU, USA
Apu Kapadia Indiana University, USA
Jiangtao Li Intel Labs
Peter Loscocco NSA, USA
Heiko Mantel TU Darmstadt, Germany
Jonathan McCune Carnegie Mellon University, USA
Bryan Parno Microsoft Research, UK
Reiner Sailer IBM Research, USA
Matthias Schunter IBM Zurich, Switzerland
Jean-Pierre Seifert DT-Lab, Germany
Elaine Shi PARC, USA
Sean Smith Dartmouth College, USA
Christian Stueble Sirrix AG, Germany
Edward Suh Cornell University, USA
Neeraj Suri TU Darmstadt, Germany
Jesse Walker Intel Labs
Andrew Warfield University of British Columbia, Canada

Program Chairs (Socio-economic Strand)


L. Jean Camp Indiana University, USA
Melanie Volkamer TU Darmstadt and CASED, Germany

Program Committee (Socio-economic Strand)


Alexander De Luca University of Munich, Germany
Angela Sasse University College London, UK
Artemios G. Voyiatzis Industrial Systems Institute/ATHENA R.C,
Greece
Eleni Kosta Katholieke Universiteit Leuven, Belgium
Gabriele Lenzini University of Luxembourg, Luxembourg
Guenther Pernul Regensburg University, Germany
Heather Lipford University of North Carolina at Charlotte,
USA
Ian Brown University of Oxford, UK
Jeff Yan Newcastle University, UK
Kristiina Karvonen Helsinki Institute for Information Technology,
Finland
Mario Cagalj University of Split, Croatia
Mikko Siponen University of Oulu, Finland
Pam Briggs Northumbria University, UK
Peter Buxmann TU Darmstadt, Germany
Peter Y. A. Ryan University of Luxembourg, Luxembourg
Randi Markussen University of Copenhagen, Denmark
Simone Fischer-Huebner Karlstad University, Sweden
Sonia Chiasson Carleton University, Canada
Stefano Zanero Politecnico di Milano, Italy
Sven Dietrich Stevens Institute of Technology, USA
Tara Whalen Carleton University, Canada
Yolanta Beres HP Labs, USA
Yang Wang Carnegie Mellon University, USA
Debin Liu PayPal

Publicity Chair
Marcel Winandy Ruhr University Bochum, Germany
Table of Contents

Technical Strand
Authenticated Encryption Primitives for Size-Constrained Trusted
Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Jan-Erik Ekberg, Alexandra Afanasyeva, and N. Asokan

Auditable Envelopes: Tracking Anonymity Revocation Using Trusted


Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Matt Smart and Eike Ritter

Lockdown: Towards a Safe and Practical Architecture for Security


Applications on Commodity Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Amit Vasudevan, Bryan Parno, Ning Qu, Virgil D. Gligor, and
Adrian Perrig

Experimenting with Fast Private Set Intersection . . . . . . . . . . . . . . . . . . . . 55


Emiliano De Cristofaro and Gene Tsudik

Reliable Device Sharing Mechanisms for Dual-OS Embedded Trusted


Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Daniel Sangorrı́n, Shinya Honda, and Hiroaki Takada

Modelling User-Centered-Trust (UCT) in Software Systems: Interplay


of Trust, Affect and Acceptance Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Zahid Hasan, Alina Krischkowsky, and Manfred Tscheligi

Clockless Physical Unclonable Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110


Julian Murphy

Lightweight Distributed Heterogeneous Attested Android Clouds . . . . . . . 122


Martin Pirker, Johannes Winter, and Ronald Toegl

Converse PUF-Based Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142


Ünal Kocabaş, Andreas Peter, Stefan Katzenbeisser, and
Ahmad-Reza Sadeghi

Trustworthy Execution on Mobile Devices: What Security Properties


Can My Mobile Platform Give Me? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Amit Vasudevan, Emmanuel Owusu, Zongwei Zhou,
James Newsome, and Jonathan M. McCune

Verifying System Integrity by Proxy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179


Joshua Schiffman, Hayawardh Vijayakumar, and Trent Jaeger

Virtualization Based Password Protection against Malware in


Untrusted Operating Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Yueqiang Cheng and Xuhua Ding

SmartTokens: Delegable Access Control with NFC-Enabled


Smartphones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Alexandra Dmitrienko, Ahmad-Reza Sadeghi,
Sandeep Tamrakar, and Christian Wachsmann

A Belief Logic for Analyzing Security of Web Protocols . . . . . . . . . . . . . . . 239


Apurva Kumar

Provenance-Based Model for Verifying Trust-Properties . . . . . . . . . . . . . . . 255


Cornelius Namiluko and Andrew Martin

Socio-economic Strand
On the Practicality of Motion Based Keystroke Inference Attack . . . . . . . 273
Liang Cai and Hao Chen

AndroidLeaks: Automatically Detecting Potential Privacy Leaks in


Android Applications on a Large Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Clint Gibler, Jonathan Crussell, Jeremy Erickson, and Hao Chen

Why Trust Seals Don’t Work: A Study of User Perceptions and


Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Iacovos Kirlappos, M. Angela Sasse, and Nigel Harvey

Launching the New Profile on Facebook: Understanding the Triggers


and Outcomes of Users’ Privacy Concerns . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Saijing Zheng, Pan Shi, Heng Xu, and Cheng Zhang

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341


Authenticated Encryption Primitives
for Size-Constrained Trusted Computing

Jan-Erik Ekberg¹, Alexandra Afanasyeva², and N. Asokan¹

¹ Nokia Research Center, Helsinki
² State University of Aerospace Instrumentation, Saint-Petersburg

Abstract. Trusted execution environments (TEEs) are widely deployed
both on mobile devices as well as in personal computers. TEEs typically
have a small amount of physically secure memory, but it is not enough
to realize certain algorithms, such as authenticated encryption modes, in
the standard manner. TEEs can however access the much larger but
untrusted system memory using which “pipelined” variants of these al-
gorithms can be realized by gradually reading input from, and/or writing
output to the untrusted memory. In this paper, we motivate the need for
pipelined variants of authenticated encryption modes in TEEs, describe a
pipelined version of the EAX mode, and prove that it is as secure as stan-
dard, “baseline”, EAX. We point out potential pitfalls in mapping the
abstract description of a pipelined variant to a concrete implementation
and discuss how these can be avoided. We also discuss other algorithms
which can be adapted to the pipelined setting and proved correct in a
similar fashion.

Keywords: Trusted Computing, Platform Security, Cryptography.

1 Introduction
Trusted execution environments (TEEs) based on general-purpose secure hard-
ware incorporated into end user devices are widely deployed. There are two
dominant types of TEE designs. The first is as a self-contained stand-alone se-
cure hardware element like Trusted Platform Module (TPM) [15]. The second
is a design like M-Shield [14,11] and ARM TrustZone [1] which augment the
processor with a secure processing mode (Figure 1).
In these latter designs, during normal operation the processor runs the basic
operating software (like the device OS) but can enter the secure mode on-demand
to securely execute small pieces of sensitive code. Certain memory areas are only
accessible in secure mode. These can be used for persistent storage of long-term
secrets. Secure mode is typically combined with isolated RAM and ROM, re-
siding within the System-On-A-Chip (SoC), to protect code executing in the
TEE against memory-bus eavesdropping. The RAM available within this min-
imal TEE is usually quite small, as low as tens of kilobytes in contemporary
devices [9]. Often this constraint implies that only the basic cryptographic prim-
itives or only the specific parts of some security critical architecture (such as a
hypervisor) can be implemented within the TEE.


Fig. 1. TEE architecture variant: secure processor mode

In most, if not all, of these hardware architectures ([1], [11], [8]) the primary
RAM on the device outside the TEE is addressable by secure mode code exe-
cuting within the TEE (see Figure 1). This unprotected, and hence potentially
“untrusted”, RAM is significantly larger than the isolated (trusted) RAM. It is
used
– to transfer input parameters for secure execution within the TEE as well as
for receiving any computation results from the TEE.
– to implement secure virtual memory for secure mode programs running with
the TEE.
– to store and fetch state information when multiple different secure mode
programs execute in an interleaved fashion (when one program needs to
stop its execution in the TEE before it is fully completed, the full state
information needed to continue its execution later is too big to be retained
within the TEE).
In the latter two cases, the TEE must seal any such data before storing it in
the untrusted memory. Sealing means encrypting and integrity-protecting the
data using a key available only within the TEE so that (a) the sealed data is
cryptographically bound to additional information specifying who can use the
unsealed data and how, and (b) any modifications to the sealed data can be
detected when it is used within the TEE.
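As a concrete illustration, a sealing interface in this spirit could be exposed to secure-mode code roughly as follows. This is a minimal C sketch under our own naming; the types, the blob layout and the function names (seal_hdr_t, tee_seal, tee_unseal) are hypothetical and not taken from the paper or from any TEE specification:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical sealed-blob header; the policy field carries the
 * "who may use the unsealed data and how" binding, authenticated
 * as AEAD associated data. */
typedef struct {
    uint8_t nonce[16];   /* freshness */
    uint8_t policy[16];  /* usage binding, integrity-protected */
    uint8_t tag[16];     /* integrity check value */
    /* the payload ciphertext follows separately in untrusted RAM */
} seal_hdr_t;

/* Encrypt `len` bytes of trusted `plain` into untrusted `ct`,
 * binding `policy` under a key that never leaves the TEE. */
int tee_seal(const uint8_t *plain /* trusted */, size_t len,
             const uint8_t policy[16], seal_hdr_t *hdr,
             uint8_t *ct /* untrusted */);

/* Reverse operation; must report failure, and release no plaintext,
 * if the tag or the policy binding does not verify. */
int tee_unseal(const seal_hdr_t *hdr, const uint8_t *ct /* untrusted */,
               size_t len, const uint8_t expected_policy[16],
               uint8_t *plain /* trusted */);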
The basic requirements of a sealing primitive are confidentiality and integrity
of the sealed data. These can be met by using one of several well-known au-
thenticated encryption modes. Many authenticated encryption modes have been
proved secure using standard reduction techniques. However, the general as-
sumption and proof model for the execution of such a scheme is that its entire
execution sequence is carried out securely and in isolation: i.e., inputs are re-
ceived into isolated memory, the entire computation is securely run to completion
as an atomic operation producing an output in isolated memory, and only then
are outputs returned to insecure channels or untrusted RAM. This setting is
unreasonable in memory-constrained TEEs. They need a “pipelined” variant of
authenticated encryption modes where encryption and decryption can be done in
a piecemeal fashion where input is read from and/or output written to untrusted
RAM gradually as the computation proceeds. In fact, interfaces specifying this
sort of pipelined authenticated encryption operation are starting to appear in
TEE standard specifications [7]. A natural question is whether these pipelined
variants of authenticated encryption modes are as secure as the original, “base-
line”, variants.
In this paper, we make three contributions. First, we highlight the problem of
finding secure pipelined implementations of authenticated encryption primitives
in the context of memory-constrained TEEs. Second, we describe how a concrete
provably secure authenticated encryption mode (EAX) can be adapted for use in
a pipelined fashion in memory-constrained TEEs. We prove the security of the
pipelined variant by showing that it is as secure as the baseline EAX variant. We
discuss other cryptographic primitives where the same approach for pipelining
and security proof may apply. Third, we point out that naive realizations of
pipelined EAX can be vulnerable to information leakage and describe a secure
implementation.
We begin by introducing our platform model in section 2, and list the assump-
tions we make regarding the computing environment and algorithm implemen-
tation in section 3. In section 4 we provide proofs for pipelined EAX variants.
In section 5 we discuss implementation pitfalls, and describe the full reference
implementation in section 6. Related work, further work and conclusions are
discussed in sections 7, 8 and 9.

2 Motivation and System Models


The hardware architecture we consider is shown in Figure 1. Authenticated se-
cure mode programs allowed to run inside the TEE often need to store data or
pass it via untrusted RAM to itself or other programs that will run in the same
TEE later. In the figure this is shown by arrows labelled “sealed data”: data is
encrypted in trusted, isolated memory to be stored in untrusted memory and
is later correspondingly retrieved and decrypted from untrusted memory to be
further processed inside the TEE.
Our work is motivated by this need to produce efficient cryptographic seals
for computer programs executing within a TEE on a mobile phone. The memory
constraints in the TEE (isolated memory) are often severe. For example, accord-
ing to [9], TEE programs in their scenario must fit into roughly 10kB of machine
code and are limited to around 1-2kB of heap memory and 4kB of stack space¹.
The choice to analyze EAX rather than e.g. the more widely used CCM also
stems from such constraints: EAX allows for a more compact implementation.
The problem of allocating isolated memory for the ciphertext and plaintext
separately, mandated by (the proof of) baseline operation of encryption primi-
tives, can in some scenarios be replaced by in-place sealing/unsealing. In-place
operation is however impractical in cases where the sealed data needs to be used
also after sealing, and it is never viable in cases where the seal size is larger
than available isolated RAM. Such situations include the case where the TEE
program needs to access only a part of the seal or when it needs to produce a
large protected message, say for transfer to another device or server.

¹ The comparably lavish stack space has to be shared by e.g. cryptographic
primitives when invoked, so the effective stack size is counted in hundreds of
bytes.

(a) EAX mode outline (b) Sealing in system model 1

Fig. 2. EAX mode [4] and system model 1 sealing

(a) Sealing in system model 2 (b) Unsealing in system model 2

Fig. 3. Sealing and unsealing in system model 2
We consider two models of pipelined sealing and unsealing. In system model 1
(Figure 2(b)), the plaintext data is made available in TEE isolated memory, i.e.
the decryption primitive decrypts into isolated memory from untrusted mem-
ory, and vice versa for encryption. This model is applicable e.g. for secret keys
generated in a TPM, but subsequently stored in sealed format within the OS.
In system model 2 (Figures 3(a) and 3(b)), the plaintext comes from or is re-
turned to untrusted memory. Use cases for this approach include streaming
encrypted content, or encrypting data for network communication.
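The difference between the two models is already visible in the signatures a TEE could export. The following C prototypes are our own hedged sketch (no such names appear in the paper); the trusted/untrusted annotations mark where each buffer lives:

#include <stddef.h>
#include <stdint.h>

/* Model 1: plaintext stays in TEE-isolated memory; only ciphertext
 * and tag cross the TEE boundary. Decryption can simply refuse to
 * produce output on a bad tag. */
int unseal_m1(const uint8_t *ct /* untrusted */, size_t len,
              const uint8_t tag[16], uint8_t *pt /* trusted */);

/* Model 2: the plaintext also lives in untrusted RAM, so the primitive
 * must decide when plaintext blocks may be released; this is the
 * case that requires the two-phase operation of Figure 3(b). */
int unseal_m2(const uint8_t *ct /* untrusted */, size_t len,
              const uint8_t tag[16], uint8_t *pt /* untrusted */);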

3 Assumptions and Requirements


With the motivations above, we define our problem scope:

1. The device includes a TEE that provides cryptographic services, specifically
a symmetric sealing primitive, to the caller without revealing the keys used.
2. The TEE is extremely memory-constrained: It only includes a small amount
(a few kilobytes) of trusted memory, but has the possibility to use external,
untrusted RAM to aid the computation.
3. Encryption/decryption inside isolated memory is not an option; the amount
of memory needed for the seal/unseal operations should be constant (Θ(1),
rather than Θ(n) or higher) in terms of the size of the input data.

The specific problem we address is whether we can realize a pipelined variant
of authenticated encryption with associated data (AEAD) with the same level
of security as for the baseline (non-pipelined) in the two system models dis-
cussed above. We define “pipelined” in the computational sense: inputs to the
encryption primitive are channeled from the source memory to the primitive as
a stream of blocks, and equivalently that the results of the AEAD algorithm
(i.e. output blocks) are channeled to target memory as they are being produced
rather than when the operation completes.
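In code, this computational pipelining amounts to a Θ(1)-memory driver loop of the following shape. This is a generic sketch of ours (the helper name process_block and the fixed 16-byte block size are assumptions), not the paper's implementation:

#include <stddef.h>
#include <stdint.h>
#include <string.h>
#define BLK 16

/* Stream `nblocks` blocks from untrusted `src` to untrusted `dst`,
 * holding only one block at a time in trusted memory. */
void pipeline(const uint8_t *src /* untrusted */,
              uint8_t *dst /* untrusted */, size_t nblocks,
              void (*process_block)(uint8_t blk[BLK]))
{
    uint8_t buf[BLK];                         /* trusted, constant size */
    for (size_t i = 0; i < nblocks; i++) {
        memcpy(buf, src + i * BLK, BLK);      /* read one input block   */
        process_block(buf);                   /* transform inside TEE   */
        memcpy(dst + i * BLK, buf, BLK);      /* release one output     */
    }
}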
The baseline setting for AEAD is one where inputs are first retrieved into
the TEE, then the operation is carried out, possibly making use of secrets and
a random source, and finally the results are released. This is the setting in
which cryptographic primitives are usually proved correct, since it is the “natural
model” for, e.g., communication security. The use of untrusted memory during
algorithm execution (otherwise the pipelined setting is no different from the
baseline setting) implies that more information will certainly be available to an
adversary in the pipelined alternative.
We are interested in security from the perspective of the TEE: for a given
input at the TEE boundary the pipelined variant of the AEAD implementation
is as secure as the baseline variant if both produce the same output at the TEE
boundary.
The code running in the TEE can be considered immutable. However such
code may use two types of memory locations: isolated memory within the TEE
and untrusted memory outside. We assume that an adversary can freely read and
modify any untrusted memory. The classification of memory can be done for any
memory location used by the AEAD implementation, including local state vari-
ables, processed input data as well as any intermediate or final result. We limit
ourselves to this binary categorization, although a more complete model would
also include statistical considerations caused by indirect information leakage e.g.
in the form of side-channel attacks.

By necessity, we must assume that any long-term secrets (e.g., sealing keys)
that are applied to the processing are stored and handled in trusted memory only.
We also assume that stack and counters are fully contained in trusted memory.
As with trusted execution in general, the existence of a good (pseudo)random
data source inside the TEE domain is needed and assumed to be present.
For some cryptographic primitives, the system models we examine do not
imply any degradation in security. For example, pipelined variants of message
authentication codes like AES-CBC-MAC will not reveal any information outside
the TEE until all the input data has been processed and the result is computed.
This happens irrespectively of whether data input is carried out in a pipelined
way or by transmitting the complete data to the TEE prior to MAC calcula-
tion. Thus pipelined operation for MACs is from a security perspective trivially
equivalent to baseline operation. A similar argument holds for most common
encryption/decryption modes, such as cipher block chaining or counter modes.
As a rule only a few inputs and outputs for neighboring cryptoblocks affect the
input or output of a given block. Therefore, if the final result is secure when the
complete data is transferred to the TEE prior to the operation, so is trivially an
implementation that during encryption and decryption only temporarily buffers
the small set of blocks with interdependencies. In an AEAD the MAC is affected
by the complete data input, but in a pipelined setting the TEE will reveal parts
of the outputs prior to receiving all input for the computation of the AEAD
integrity tag. This combination of confidentiality and integrity is the cause for
the problem scope to be relevant, especially when applied in system model 2.
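The MAC case is easy to see in code: an incremental CBC-MAC keeps its entire state in trusted memory and emits nothing until finalization. The sketch below is our own illustration; aes_block is a dummy stand-in (a plain XOR, so the fragment is self-contained, deliberately not real AES):

#include <stdint.h>
#include <string.h>
#define BLK 16

/* Placeholder block cipher so the sketch compiles; NOT real AES. */
static void aes_block(uint8_t out[BLK], const uint8_t in[BLK],
                      const uint8_t key[BLK])
{
    for (int i = 0; i < BLK; i++) out[i] = in[i] ^ key[i];
}

typedef struct { uint8_t state[BLK]; uint8_t key[BLK]; } cbcmac_t;

void cbcmac_init(cbcmac_t *c, const uint8_t key[BLK])
{
    memset(c->state, 0, BLK);
    memcpy(c->key, key, BLK);
}

/* Absorb one full input block: state <- E_K(state XOR m_i). Input may
 * arrive from untrusted RAM block by block; the state never leaves
 * the TEE. */
void cbcmac_update(cbcmac_t *c, const uint8_t m[BLK])
{
    uint8_t x[BLK];
    for (int i = 0; i < BLK; i++) x[i] = c->state[i] ^ m[i];
    aes_block(c->state, x, c->key);
}

/* Only here, after all input has been processed, does a value leave. */
void cbcmac_final(const cbcmac_t *c, uint8_t tag[BLK])
{
    memcpy(tag, c->state, BLK);
}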

4 Proof of Security
In this section we will briefly introduce the standard reduction technique for rea-
soning about the security of cryptographic algorithms and protocols. Using this
method we present an adversary model definition and a proof outline that cov-
ers our assumptions and requirements listed in section 3, for the system models
introduced in section 2.

4.1 Technique
In this paper, we will use the same general proof method as was used for the
baseline EAX variant[4]. The proof in the standard complexity-theoretic as-
sumption, often called the “standard reduction technique”, is described in detail
in references [3] and [2]. On a high level the method is as follows: A security
proof can include two parts. The first one is a proof in the context of chosen-
plaintext attacks (CPA), where the adversary is given the ability to encrypt any
plaintext using the algorithmic primitive. The opposite, the chosen-ciphertext
attack (CCA) allows the adversary to set the ciphertext and observe the result-
ing plaintext. Each proof is constructed as a game between an adversary (A) and
a challenger (C) making use of Oracles (O) that abstract the evaluated algorith-
mic primitive in some way, depending on the properties that are proved. In our
models the oracles will represent the encryption and decryption initialized with
a key, the second model adds an oracle also for OMAC2 .
The CPA (privacy) security proof is modelled by the adversary using an en-
crypting Oracle (Oe ). The game is defined as follows:

1. A examines Oe by making q adaptive queries to it, i.e. sending any chosen
plaintext to Oe and as response receiving the corresponding ciphertext.
2. In a second phase, A selects a plaintext not generated in the first step and
sends it to C. C then ‘tosses a coin’ b and depending on the outcome either
returns to A the result of submitting the received input to Oe , or in the
second case a random bit string of an equivalent length.
3. Finally, A tries to determine whether the result returned from C was the
random string or the actual result from Oe. The so-called advantage of the
adversary A is computed as Adv(A) = Pr{A^Oe = 1} − Pr{A^$ = 1}, i.e. the
difference in success probability for A correctly determining b and making a
random choice.

The CCA (authenticity) security proof uses two oracles: an encrypting oracle
(Oe ) and a decrypting one (Od ). The slightly more complex game starts out like
the CPA game, but after receiving the result from C, A is allowed to continue,
and submit up to σ adaptive queries to the decryption oracle Od (of course the
return string from the challenger shall not be used). Only after these extended
queries will A guess the value of b. Again, the advantage of adversary A will be
calculated as the difference between its probability of success with oracle usage
and without it.

Adv(A) = Pr{A^{Oe,Od} = 1} − Pr{A^$ = 1}

The baseline EAX mode of operation has been proved secure against CCA and
CPA attacks. Since the pipelined variant is a derivation of the standard EAX we
can use reduction to show that the pipelined variant is as secure as the baseline
one. In this proof by reduction, we use an adversary model where an adversary
B attacks baseline EAX E by using an adversary A attacking the new pipelined
EAX variant E′, both set up with the same parameters (keys). For the game
it will also be necessary to show that the adversary B can simulate all oracles
that would be used by A. The game is set up as follows: suppose there exists
an adversary A which can attack algorithm E′ with advantage Adv(A) = ε.
Adversary B wants to break algorithm E (for which a security proof already
exists) by making use of A, such that

1. B has access to the oracles used in the proof of E.
2. B forges all oracles used by A, by simulating the work those oracles would
do for A, only based on its own knowledge about the baseline system E
and its own oracles. This can be done with a non-negligible probability
(Pr{OracleSim}).
² OMAC is a provably secure cryptographic hash construct based on the CBC-MAC
primitive. Definition in [4].

3. If there exists a probabilistic polynomial time algorithm for B to attack E
using A's advantage, then Adv(B) = ε · Pr{OracleSim}. If Pr{OracleSim} = 1,
then the respective attack advantages, and thereby the security of systems
E and E′, are equal.

In other words the game shows that if we can attack the modified algorithm
E′ then we can attack the original system E in the way we built adversary B.
But as a security proof already exists for E, our premise of the existence of A is
disproved, thereby proving the security of E′.

4.2 Analysis
Correctness of the pipelined EAX in our first system model (Figure 2(b)) is
straightforward. Intuitively, the attacker gains no advantage in the pipelined
setting over the baseline setting, since inputs and outputs are not interleaved.
For the sake of completeness, we present the proof in Appendix A.
In our second system model intermediate computation results are returned
to untrusted memory during algorithm execution. Thus the possibility of an
adaptive attack cannot be ruled out immediately. We use the terminology and
definitions from [4]. In all algorithms, the return statement denotes the return-
ing of data to untrusted memory, not the termination of algorithm execution.
The Read primitive is used to explicitly indicate data input from untrusted
memory. The interactions between A, B and g are shown in Figure 4.

Fig. 4. Proof outline

Theorem 4.1. The pipelined EAX variant presented in Algorithms 1 and 2 is
as secure as the original baseline EAX.

Proof. We begin with the CPA (privacy) claim. Let A be an adversary using re-
sources (q, σ) that is trying to distinguish Algorithm 1 from a source of random
bits. We will construct an adversary B with resources (σ1, σ2) that distinguishes
the OMAC algorithm³ from a source of random bits. Adversary B has an oracle
g2 that responds to queries (t, M, s) ∈ {0, 1, 2} × {0, 1}* × N with a string
{M1, S0, S1, . . . , Ss−1}, where each named component is an l-bit string. Oracle
g2 is the OMAC algorithm. Algorithm 3 below describes adversary B.

Algorithm 1. Encryption, model 2
Input: N, H, K, M = {m0, m1, . . . , mn−1}
Output: C = {c0, c1, . . . , cn−1}, Tag
1: Read(N, H, n)
2: N ⇐ OMAC^0_K(N)
3: H ⇐ OMAC^1_K(H)
4: C ⇐ 0
5: for all i ∈ 0 . . . n − 1 do
6:   Read(mi)
7:   ci ⇐ mi ⊕ E_K(N + i)
8:   return ci
9:   C ⇐ OMAC^2_K(ci, C)
10: end for
11: Tag ⇐ N ⊕ C ⊕ H
12: return Tag

Algorithm 2. Decryption, model 2
Input: N, H, K, C = {c0, c1, . . . , cn−1}, Tag
Output: M = {m0, m1, . . . , mn−1} or Invalid
1: Read(N, H, n, Tag)
2: N ⇐ OMAC^0_K(N)
3: H ⇐ OMAC^1_K(H)
4: C ⇐ 0
5: for all i ∈ 0 . . . n − 1 do
6:   Read(ci)
7:   C ⇐ OMAC^2_K(ci, C)
8: end for
9: T′ ⇐ N ⊕ C ⊕ H
10: if T′ ≠ Tag then
11:   return Invalid
12: else
13:   for all i ∈ 0 . . . n − 1 do
14:     Read(ci)
15:     mi ⇐ ci ⊕ E_K(N + i)
16:     return mi
17:   end for
18: end if

Algorithm 3. Algorithm B^g simulating Oe
1: Run A
2: for all Oracle Oe calls Nj, Hj, nj, j ∈ 0 . . . q − 1 from A do
3:   N S0 S1 . . . Snj−1 ⇐ g2(0, Nj, nj)
4:   for all i ∈ 0 . . . n − 1 do
5:     ci,j ⇐ mi,j ⊕ Si
6:     return ci,j, in response to each Oracle Oe query mi,j from A
7:   end for
8:   Hj ⇐ g2(1, Hj, 0)
9:   Cj ⇐ g2(2, Cj, 0)
10:  Tagj ⇐ H ⊕ N ⊕ C
11:  return Tagj
12: end for
13: When A halts, get bit b
14: return b

We may assume that A makes q > 1 queries, so adversary B uses 3q queries.
Then, under the conventions for the data complexity, adversary B uses at most
(σ, σ/2) resources. Observe that Pr[A^Alg1 = 1] = Pr[B^OMAC = 1] and
Pr[A^$ = 1] = Pr[B^$ = 1]. Using Lemma 4 from [4] we conclude that

Adv^CPA_Alg1(A) = Pr[A^Alg1 = 1] − Pr[A^$ = 1]
              = Pr[B^OMAC = 1] − Pr[B^$ = 1]
              ≤ Adv^dist_OMAC(σ, σ/2) ≤ (1.5σ² + 3)/2^l ≤ Adv^CPA_EAX

This means that the pipelined EAX described in Algorithm 1 is as private as the
original EAX. This completes the privacy claim.

³ The construction of adversary B is adapted to a specific proof setup presented
in [4], and uses a “tweakable OMAC extension” encapsulated in Lemma 4 of [4]
and its proof. Lemma 4 asserts the pseudorandomness of the OMAC algorithm
and provides an upper bound for the advantage of the adversary.

For CCA (authenticity), and reusing the naming, let A be an adversary attack-
ing the authenticity of Algorithms 1 and 2. To estimate the advantage of A, we
construct from A (the authenticity-attacking adversary) an adversary B (with
oracles for g2 and g3, intended for forging the original AES-EAX primitive).
Algorithm 3 simulates the oracle Oe, and Algorithm 4 will simulate the decryption
oracle Od.
It is easy to see that adversary B can simulate both the oracles Oe and Od for
A indistinguishably from the real challenger of the AES-EAX primitive. Thus,
the advantage of adversary B in forging against Algorithms 1 and 2 can be
calculated as follows:

Adv^CCA(B) = Pr{B^EAX forges} − Pr{B^$ forges} = Adv^CCA(A)

This completes the claim and the proof.

5 Implementation Pitfalls
Although we proved the pipelined EAX variant correct, adequate care is needed
when it is realized in practice. In this section, we outline two potential
pitfalls.

5.1 Security for the External User


At the outset, we mentioned that our goal is to guarantee security from the per-
spective of the TEE. In practice, one also needs to worry about ensuring security
from the perspective of the external “TEE user”, for example, an application
running on the operating system. As the external memory is untrusted from the
perspective of the user, some form of security association between the TEE and

Algorithm 4. Algorithm B^g simulating Od
1: Run A
2: for all Oe requests from A do
3:   Run the simulator from Algorithm 3
4: end for
5: for all Od requests Nj, Hj, Cj||Tag, j ∈ 0 . . . q − 1 from A do
6:   Mj ← g3(Nj, Hj, Cj, Tag)
7:   if Mj = Invalid then
8:     return Invalid
9:   else
10:    KeyStr ← Mj ⊕ Cj
11:    for all i ∈ 0 . . . n − 1 do
12:      return ci,j ⊕ KeyStri, in response to each Oracle Od query ci,j from A
13:    end for
14:  end if
15: end for
16: When A halts, get bit b
17: return b

the user is necessary in order to ensure security from the user’s perspective. This
applies both in the pipelined as well as in the baseline setting.
Although it has no bearing on the security from the perspective of the TEE,
the pipelined variant of the unsealing algorithm shown in Figure 3(b) is equiv-
alent to the baseline variant only if the series of ciphertexts {c0, c1, . . . , cn−1}
in the first phase of the pipelined variant is exactly the same as the series of
ciphertexts in the second phase (after Tag is validated as True). In practice this
can be ensured by using re-encryption: for example, in the first phase, the TEE
will output re-encrypted blocks c′i when processing input ci and expect the set
of c′i to be provided to the second phase.
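One possible shape of this re-encryption countermeasure is sketched below in C. Everything here is our own illustration under stated assumptions: aes_enc/aes_dec stand for the TEE's block cipher, skey is a fresh per-unseal session key held only inside the TEE, and the XOR of the block index is our own addition to also bind block positions (the paper only requires that the second phase see exactly the first phase's blocks):

#include <stdint.h>
#define BLK 16

/* Assumed TEE-internal AES primitives (prototypes only). */
void aes_enc(uint8_t out[BLK], const uint8_t in[BLK], const uint8_t key[BLK]);
void aes_dec(uint8_t out[BLK], const uint8_t in[BLK], const uint8_t key[BLK]);

static void xor_index(uint8_t b[BLK], uint32_t i)
{
    b[0] ^= (uint8_t)(i >> 24); b[1] ^= (uint8_t)(i >> 16);
    b[2] ^= (uint8_t)(i >> 8);  b[3] ^= (uint8_t)i;
}

/* Phase 1: as c_i is fed into the MAC, also emit an echo block
 * c'_i bound to the session key and the position i. */
void echo_block(uint8_t echo[BLK], const uint8_t ci[BLK], uint32_t i,
                const uint8_t skey[BLK])
{
    uint8_t t[BLK];
    for (int j = 0; j < BLK; j++) t[j] = ci[j];
    xor_index(t, i);
    aes_enc(echo, t, skey);
}

/* Phase 2: recover c_i from the echo instead of trusting the caller
 * to resubmit the original ciphertext unchanged. */
void recover_block(uint8_t ci[BLK], const uint8_t echo[BLK], uint32_t i,
                   const uint8_t skey[BLK])
{
    aes_dec(ci, echo, skey);
    xor_index(ci, i);
}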

5.2 Mapping of Memory Locations


The risk of implementation pitfalls when mapping idealized protocols used in
proofs to a real protocol is well known. Our target architecture hides an issue
of such nature. Even as we now can use the reduction proofs to argue that
pipelined operation of AES-EAX is secure in system models 1 and 2, a naive
pipelined variant implementation unfortunately leads to a severe security flaw.
Consider lines 7-10 of Algorithm 5, which illustrate how a naive implementor
would map the inner loop of EAX encryption (lines 4-6 of Algorithm 7, and lines
6-9 of Algorithm 1).
At first glance, Algorithm 5 looks like a reasonable EAX implementation as
shown in Figure 2(a). It writes out each block of the ciphertext to untrusted
RAM as soon as it is calculated. Step 8 corresponds to the encryption of a single
block (Algorithm 7/Step 5 or Algorithm 1/Step 8). Step 10 corresponds to the
incremental construction of the MAC (Algorithm 7/Step 6 or Algorithm 1/Step
9). As Algorithm 5 is realized on the architecture shown in Figure 1, the variable

Algorithm 5. Pipelined EAX Encryption: naive realization
Input: k, h, n, M = {m0, m1, . . . , mn−1}
Output: C = {c0, c1, . . . , cn−1}, Tag
1: L ← Ek(0); B ← 2L; P ← 4L
2: N ← Ek(Ek(0) ⊕ n ⊕ B)    [OMAC^0_k(n)]
3: H ← Ek(Ek(1) ⊕ h ⊕ B)    [OMAC^1_k(h)]
4: t1 ← N
5: t2 ← Ek(2)
6: for i ← 0 to n − 1 do
7:   t4 ← Ek(t1)
8:   ci ← mi ⊕ t4
9:   t1 ← t1 + 1
10:  t2 ← Ek(t2 ⊕ ci)    [OMAC^2_k(ci, C)]
11: end for
12: . . .

ci will be mapped to a memory location in untrusted memory. So an attacker who
controls the untrusted RAM will now be in a position to manipulate ci after it
is generated in step 8 but before it is used as input to OMAC^2_K in step 10.
Clearly, the sealing primitive should release the encrypted block to untrusted
memory only after both the encryption as well as the data inclusion into the
integrity check value has been performed. Even though this is the intent in the
abstract descriptions of Algorithms 7 and 1, the violation of this rule while
mapping the algorithms to concrete realizations for our target architecture is
not immediately or automatically evident to the programmer. In the baseline
setting, where inputs and outputs as well as state variables are all in isolated
memory this consideration causes no security issues, even for pipelined operation.
In fact pipelining (or rather the fact that the input length need not be known in
advance) is listed as a particular advantage of AES-EAX [4]. However, realization
of pipelined EAX in our target architecture raises this subtle security issue.
The correct way of pipelining EAX sealing is outlined in Algorithm 6 in Sec-
tion 6. The solution is to add an intermediary buffer in isolated memory to
hold the encrypted block. For unsealing, such a buffer is also needed, but its
placement is different, since the confidentiality and integrity primitives are then
invoked in opposite order.
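Condensed into C, the difference between the naive and the correct inner loop is only the order of two statements. The sketch below is illustrative (aes_enc is an assumed TEE primitive; ct is the only buffer in untrusted RAM):

#include <stdint.h>
#define BLK 16

void aes_enc(uint8_t out[BLK], const uint8_t in[BLK], const uint8_t key[BLK]);

/* One corrected loop iteration, cf. Algorithm 6 lines 8-13: the
 * trusted copy t3 is MACed first and released to untrusted ct last. */
void encrypt_block_fixed(uint8_t *ct /* untrusted */,
                         const uint8_t m[BLK], uint32_t i,
                         const uint8_t ctr[BLK], uint8_t mac[BLK],
                         const uint8_t k[BLK])
{
    uint8_t ks[BLK], t3[BLK], x[BLK];         /* all on the trusted stack */
    aes_enc(ks, ctr, k);                      /* key stream block E_k(t1) */
    for (int j = 0; j < BLK; j++) t3[j] = m[j] ^ ks[j];
    for (int j = 0; j < BLK; j++) x[j]  = mac[j] ^ t3[j];
    aes_enc(mac, x, k);                       /* MAC the trusted copy     */
    for (int j = 0; j < BLK; j++) ct[i * BLK + j] = t3[j]; /* release last */
    /* The naive version writes ct[i*BLK+..] first and then MACs the
     * untrusted copy, opening the manipulation window described above. */
}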

6 Reference Implementation
Based on the proofs of Algorithm 7 and Algorithm 1, and the insight on pitfalls,
we have implemented and deployed EAX using AES-128 as shown in Algorithm
6. We apply a small simplification constraint to the EAX inputs. The length of
the EAX associated data as well as the nonce are required to be exactly the block
length of the underlying block cipher primitive. These conditions simplify the
internal structures of EAX significantly since two data padding code branches
can be omitted completely. Although this approach sacrifices generality, neither
compatibility nor the original security proofs are affected.

Algorithm 6. Pipelined EAX Encryption
Input: k, h, n, M = {m0, m1, . . . , mn−1}
Output: C = {c0, c1, . . . , cn−1}, Tag
1: L ← Ek(0); B ← 2L; P ← 4L
2: N ← Ek(Ek(0) ⊕ n ⊕ B)    [OMAC^0_k(n)]
3: H ← Ek(Ek(1) ⊕ h ⊕ B)    [OMAC^1_k(h)]
4: t1 ← N
5: t2 ← Ek(2)
6: t3 ← 0
7: for i ← 0 to FULLBL(M) − 1 do
8:   t4 ← Ek(t1)
9:   t3 ← mi ⊕ t4
10:  ci ← t3
11:  t1 ← t1 + 1
12:  if i < NPADBL(M) − 1 then
13:    t2 ← Ek(t2 ⊕ t3)    [OMAC^2_k(ci, C)]
14:  end if
15: end for
16: if REMBYT(M) > 0 then
17:  t3 ← 0
18:  t4 ← Ek(t1)
19:  PART(t3 ← mFULLBL ⊕ t4)
20:  PART(cFULLBL ← t3)
21: end if
22: if REMBYT(M) = 0 ∧ FULLBL(M) > 0 then
23:  C ← Ek(t2 ⊕ t3 ⊕ B)    [OMAC^2_k(ci, C)]
24: else
25:  t3 ← ADDPADBYTE(t3)
26:  C ← Ek(t2 ⊕ t3 ⊕ P)    [OMAC^2_k(ci, C)]
27: end if
28: Tag ← C ⊕ N ⊕ H

In Algorithm 6, the input parameters consist of a key k, a block-sized header
h, and a block-sized nonce n. The input data vector M = {m0, m1, . . . , mn−1}
is a list of block-sized units where each element is a full block except possibly
the last element, which may be shorter. The resulting ciphertext vector C has a
similar structure. The resulting message integrity code Tag is a block-sized result.
The OMAC sub-primitive calculations are marked in brackets on the right. The
multiplications of the value L are defined by polynomial multiplication in GF(2¹²⁸)
as defined by [4].
For increased readability we introduce a few convenience macros that hide
block-length calculations as well as detailed loops for simple operations over
bytes in partially filled blocks. Pipelined versions are trivially constructed cor-
responding to the “values-known-in-advance” versions listed in Algorithm 6 for
readability. FULLBL denotes the number of full blocks in the input data vector,
and the function NPADBL(x) will for the vector x give the number of blocks that
are not padded with a termination marker. REMBYT(x) gives the number of
bytes (if any) in the last vector element provided that it is not block-sized. ADD-
PADBYTE(x) adds a termination marker to the vector block in accordance
with [4], and PART indicates that the operation is applied to a byte vector
which is not block-sized. All temporary variables t1, t2, t3 and t4 are block-sized
units.
The innermost operation of EAX is clearly visible on lines 8-11. The counter
(in t1) drives the block cipher and produces a key stream into t4, and the CBC-
MAC is accumulated into t2 on each round. t3 is the temporary buffer that
guarantees the integrity of the ci as explained in Section 5.
The EAX implementation with the constraints outlined above is size-efficient.
The algorithm, supporting both encryption and decryption and implemented in
C, compiles to 742 bytes for an OMAP2/OMAP3 processor with ARM and an
embedded AES block implementation. Algorithm memory (stack) consumption
is a fixed 168 bytes, satisfying the Θ(1) requirement in Section 3.
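For illustration, a TEE program could invoke such a primitive along the following lines. All names here (eax_seal, tee_get_seal_key, tee_random) are hypothetical; the sketch only reflects the block-sized nonce and header demanded by the constraints above, plus one possible blob layout:

#include <stdint.h>
#include <string.h>

/* Assumed interfaces, not from the paper: a wrapper over Algorithm 6
 * and the TEE key and randomness services required in Section 3. */
int  eax_seal(const uint8_t key[16], const uint8_t hdr[16],
              const uint8_t nonce[16], const uint8_t *msg, uint32_t len,
              uint8_t *ct /* untrusted */, uint8_t tag[16]);
void tee_get_seal_key(uint8_t key[16]);
void tee_random(uint8_t *buf, uint32_t len);

/* Seal `len` bytes of TEE state into untrusted `out`, laid out as
 * nonce (16) || tag (16) || ciphertext (len). */
int seal_state(const uint8_t *state, uint32_t len, uint8_t *out)
{
    uint8_t key[16], hdr[16] = {0}, nonce[16], tag[16];
    tee_get_seal_key(key);            /* key never leaves the TEE     */
    tee_random(nonce, 16);            /* block-sized nonce, Section 6 */
    if (eax_seal(key, hdr, nonce, state, len, out + 32, tag) != 0)
        return -1;
    memcpy(out, nonce, 16);
    memcpy(out + 16, tag, 16);
    return 0;
}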

7 Related Work
Since the concept of a hardware-assisted TCB was re-invigorated around a
decade ago, a number of techniques to secure the “virtual” memory of the trusted
execution environment have been proposed. One of the first results was the emer-
gence of execute-only virtual memory (XOM) [10], an important stepping stone
for trustworthy computing, but it does not consider data protection.
The work on the AEGIS secure processor [12] [13] introduced a secure com-
puting model that highlights the operation of a security kernel running in an
isolated environment, shielded from both physical and software attacks. Among
other features, AEGIS implemented a memory management unit (MMU) that
protects against physical attacks by deploying stateful, authenticated encryption
for virtual memory blocks stored in untrusted memory regions. A comparison of
cryptographic primitives suitable for implementing such a secure virtual memory
manager in hardware can be found in [16].
This work examines the implementation pitfalls and security proof in the
context of implementing EAX, one well-known AEAD. We prove security for
that AEAD in two given models, relevant to TEE implementation. Prior work
[6,5], addressing the problem and provability of “online” encryption (system
model 2) in a wider context, takes another route and also provides alternative
constructions for rendering a cryptographic primitive secure in this model.

8 Interpretation and Proposal


The proof approach (and the implementation pitfall) described in this paper are
more generally applicable to other authenticated encryption modes as well. For
example, AES-CCM, the most widely used AEAD today, uses the same con-
fidentiality and integrity primitives as AES-EAX (AES-CTR and AES-CBC-
MAC, respectively), with the main difference that in AES-CCM the integrity is
calculated over the plaintext rather than over the ciphertext. Thus, the extra
buffer in isolated memory needed in the implementation will still be required,
although its placement in AES-CCM will, with respect to sealing/unsealing, be
the mirror image of its application in AES-EAX. The model 1 proofs are trivially
adaptable to AES-CCM, and most likely the model 2 proof constructs would also
be similar when applied to AES-CCM.
Standardized AEAD APIs, like the GlobalPlatform (GP) TEE API [7], in-
clude APIs for pipelined AES-GCM and AES-CCM primitives modelled after
interfaces for hash functions, i.e. with separate functions for Init, Update and
Finalization. The Update function encrypts or decrypts data in pieces. These
functions trivially map to a TEE implementation for pipelined encryption (Fig-
ure 3(a)). A TEE AEAD decryption primitive (Figure 3(b)) can in our model be
implemented with the GP API by invoking the set of Init, Update and Finaliza-
tion twice, and binding the Init parameters between the two invocation sets. It
is however clear that the API, as it is defined now, easily leads an
unwary implementor to release decrypted plaintext to untrusted memory before
the tag is checked, and in doing so he/she breaks the property of plaintext
awareness for the AEAD primitive.
In the light of the findings in this paper, we propose that APIs for AEAD
decryption inside TEEs be changed. One option is to re-encrypt the decrypted
content with a temporary key that is given out as a side-effect of a properly
validated tag (integrity check) in the Finalization API method. Alternatively,
the decryption Update API should not return any decrypted data at all, instead
a new Keystream method would be added to return the message XOR keystream
to the caller after the tag has been properly validated. Either of these solutions
would force the API user to model his decryption operation in a manner that is
secure from the TEE perspective.
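In C, the second option could look roughly as follows. These prototypes are our own sketch, deliberately echoing the GP Init/Update/Finalize style, but they are not taken from the GP specification:

#include <stddef.h>
#include <stdint.h>

typedef struct tee_aead_ctx tee_aead_ctx;   /* opaque TEE-side state */

int AEADDecryptInit(tee_aead_ctx *ctx, const uint8_t *nonce, size_t nlen,
                    const uint8_t *aad, size_t alen);

/* Absorbs ciphertext into the integrity check; returns NO plaintext. */
int AEADDecryptUpdate(tee_aead_ctx *ctx, const uint8_t *ct, size_t len);

/* Succeeds only if the tag verifies; until then nothing has left the TEE. */
int AEADDecryptFinalVerify(tee_aead_ctx *ctx, const uint8_t *tag, size_t tlen);

/* Callable only after a successful FinalVerify: returns the keystream
 * for the given offset, which the caller XORs with its own ciphertext. */
int AEADKeystream(tee_aead_ctx *ctx, size_t offset, uint8_t *ks, size_t len);

With such an interface, a caller that never reaches a successful FinalVerify obtains nothing that depends on the plaintext, which is exactly the property the naive Update-returns-plaintext style loses.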

9 Conclusion
We have described one example of an AEAD that can be proved correct in a com-
putation context where not all data memory during the algorithm computation
is assumed to be trustworthy. The hardware architecture introduced in Figure 1
is new to algorithm analysis, although devices with such properties are widely
deployed. We have proved AES-EAX secure in this setup, and provide an insight
into what modifications need to be done to a conventional EAX algorithm to
securely realize it in the pipelined setting.
The pipelined AES-EAX presented and analyzed in this paper is commercially
deployed as part of a trusted device architecture.

References
1. ARM. TrustZone-enabled processor,
http://www.arm.com/pdfs/DDI0301D_arm1176jzfs_r0p2_trm.pdf
2. Bellare, M., Rogaway, P.: The game playing technique (2004),
http://eprint.iacr.org/2004/331
3. Bellare, M., Rogaway, P.: Random oracles are practical: a paradigm for design-
ing efficient protocols. In: CCS 1993: Proceedings of the 1st ACM Conference on
Computer and Communications Security, pp. 62–73. ACM, New York (1993)
4. Bellare, M., Rogaway, P., Wagner, D.: The EAX Mode of Operation. In: Roy, B.,
Meier, W. (eds.) FSE 2004. LNCS, vol. 3017, pp. 389–407. Springer, Heidelberg
(2004), doi:10.1007/978-3-540-25937-4-25
5. Boldyreva, A., Taesombut, N.: Online Encryption Schemes: New Security Notions
and Constructions. In: Okamoto, T. (ed.) CT-RSA 2004. LNCS, vol. 2964, pp. 1–14.
Springer, Heidelberg (2004), doi:10.1007/978-3-540-24660-2-1
6. Fouque, P.-A., Joux, A., Martinet, G., Valette, F.: Authenticated On-Line En-
cryption. In: Matsui, M., Zuccherato, R.J. (eds.) SAC 2003. LNCS, vol. 3006,
pp. 145–159. Springer, Heidelberg (2004), doi:10.1007/978-3-540-24654-1-11
7. GlobalPlatform Device Technology. TEE Internal API Specification. GlobalPlat-
form, version 0.27 (September 2011),
http://www.globalplatform.org/specificationform.asp?fid=7762
8. Intel Corporation. Trusted eXecution Technology (TXT) – Measured Launched
Environment Developer's Guide (December 2009)
9. Kostiainen, K., Ekberg, J.-E., Asokan, N., Rantala, A.: On-board credentials with
open provisioning. In: ASIACCS 2009: Proceedings of the 4th International Sym-
posium on Information, Computer, and Communications Security, pp. 104–115.
ACM, New York (2009)
10. Lie, D., Thekkath, C., Mitchell, M., Lincoln, P., Boneh, D., Mitchell, J., Horowitz,
M.: Architectural support for copy and tamper resistant software. SIGPLAN
Not. 35(11), 168–177 (2000)
11. Srage, J., Azema, J.: M-Shield mobile security technology, TI White paper (2005),
http://focus.ti.com/pdfs/wtbu/ti_mshield_whitepaper.pdf
12. Edward Suh, G., Clarke, D., Gassend, B., van Dijk, M., Devadas, S.: Efficient
memory integrity verification and encryption for secure processors. In: MICRO 36:
Proceedings of the 36th Annual IEEE/ACM International Symposium on Microar-
chitecture, p. 339. IEEE Computer Society, Washington, DC (2003)
13. Edward Suh, G., O’Donnell, C.W., Sachdev, I., Devadas, S.: Design and implemen-
tation of the aegis single-chip secure processor using physical random functions.
In: ISCA 2005: Proceedings of the 32nd Annual International Symposium on Com-
puter Architecture, pp. 25–36. IEEE Computer Society, Washington, DC (2005)
14. Sundaresan, H.: OMAP platform security features, TI White paper (July 2003),
http://focus.ti.com/pdfs/vf/wireless/platformsecuritywp.pdf
15. Trusted Platform Module (TPM) Specifications,
https://www.trustedcomputinggroup.org/specs/TPM/
16. Yan, C., Rogers, B., Englender, D., Solihin, Y., Prvulovic, M.: Improving cost,
performance, and security of memory encryption and authentication. In: 33rd
International Symposium on Computer Architecture, ISCA 2006, Boston, MA,
pp. 179–190 (2006)

A First System Model Analysis

The first model that we consider is the one where plaintext inside the TEE is
encrypted for storage in untrusted memory, and vice versa for decryption. For
the encryption primitive we will use the standard reduction technique to reason
about whether the encrypted content can be released to an adversary before the
whole primitive has completed.
In this model the decryption primitive is unmodified and need not be ana-
lyzed, as the decrypted plaintext is stored in the TEE and thus is not becoming
available to the adversary during the execution of the primitive. An implemen-
tation must still adhere to a rule similar to that for encryption, i.e. any encrypted
block has to be moved to trusted memory prior to the integrity check and the
subsequent decryption; otherwise an adversary could decouple the data used for
the integrity check from the data being decrypted.
Algorithm 7 is an abstraction of the implementation of pipelined EAX, and
returns encrypted blocks as they have been generated.
Theorem A1. The pipelined EAX encryption variant presented in Algorithm 7 is as secure as the original baseline EAX encryption.

Proof. We begin with the CPA claim. Let A be an adversary that uses resources
(q, σ) and tries to distinguish Algorithm 7 from a source of random bits. We
construct an adversary B that distinguishes the original EAX algorithm from
a source of random bits. Adversary B has an oracle g1 that responds to a query
(N, H, M) ∈ {0, 1}^l × {0, 1}^l × {0, 1}^* with a string C = {c_0, c_1, . . . , c_{n−1}}, Tag,
where each named component is an l-bit string. Algorithm 8 describes the operation
of adversary B using g1:

Algorithm 7. Encryption, model 1
Input: N, H, K, M = {m_0, m_1, . . . , m_{n−1}}
Output: C = {c_0, c_1, . . . , c_{n−1}}, Tag
1: N ⇐ OMAC^0_K(N)
2: H ⇐ OMAC^1_K(H)
3: for all i ∈ 0 . . . n − 1 do
4:   c_i ⇐ CTR^N_K(m_i)
5:   return c_i  (the block is released immediately)
6:   C ⇐ OMAC^2_K(c_i, C)
7: end for
8: Tag ⇐ N ⊕ C ⊕ H
9: return Tag

Algorithm 8. Algorithm B^g simulating O_e
1: Run A
2: for all oracle calls (N_j, H_j, M_j), j ∈ 0 . . . n − 1 from A do
3:   C_j || Tag_j ⇐ g1(N_j, H_j, M_j)
4:   for all i ∈ 0 . . . n − 1 do
5:     return c_{i,j} in response to A’s query
6:   end for
7:   return Tag_j in response to A’s query
8: end for
9: When A halts, read its output bit b
10: return b
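
To make the pipelined structure concrete, the following Python sketch mirrors Algorithm 7. It is a structural illustration only: omac and ctr_block are hypothetical stand-ins for the OMAC and CTR primitives of EAX (faked here with SHA-256), so the sketch shows the release schedule of ciphertext blocks and the tag, not a secure implementation.

import hashlib

def omac(key: bytes, tweak: int, data: bytes) -> bytes:
    # Placeholder for OMAC^tweak_K; NOT the real OMAC construction.
    return hashlib.sha256(bytes([tweak]) + key + data).digest()[:16]

def ctr_block(key: bytes, nonce: bytes, i: int) -> bytes:
    # Placeholder keystream block for CTR^N_K.
    return hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()[:16]

def xor16(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pipelined_eax_encrypt(key, nonce, header, blocks):
    # Yields each ciphertext block as soon as it is produced (line 5 of
    # Algorithm 7); the tag is only released after the final block.
    n = omac(key, 0, nonce)
    h = omac(key, 1, header)
    c_acc = bytes(16)                      # running OMAC^2 accumulator
    for i, m in enumerate(blocks):
        c = xor16(ctr_block(key, n, i), m)
        yield ("block", c)                 # released before the tag exists
        c_acc = omac(key, 2, c_acc + c)
    yield ("tag", xor16(xor16(n, c_acc), h))

A caller that writes each ("block", c) to untrusted memory as it arrives, and appends the tag last, matches the behaviour analysed in Theorem A1.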
We may assume that A makes q > 1 queries to its oracle, and adversary B uses
the same number of queries. Also, Pr[A^{Alg7} = 1] = Pr[B^{EAX} = 1]. We assume
that A is nonce-respecting⁴, that B is length-committing⁵, and that
Pr[A^{$} = 1] = Pr[B^{$} = 1]. Thus, we conclude that
$\mathrm{Adv}^{CPA}_{Alg7}(A) = \Pr[A^{Alg7} = 1] - \Pr[A^{\$} = 1] = \Pr[B^{EAX} = 1] - \Pr[B^{\$} = 1] = \mathrm{Adv}^{CPA}_{EAX}(B)$

This completes the claim.
It is easy to see that the CCA proof follows from the CPA proof, since the de-
cryption procedure remains unmodified. Thus, using the same logic it is possible
to show that
$\mathrm{Adv}^{CCA}_{Alg7}(A) = \mathrm{Adv}^{CCA}_{EAX}(B)$
and this completes the proof.

⁴ An adversary is nonce-respecting if its queries never repeat a nonce value.
⁵ Adversary B is length-committing if it consults its own oracles with the appropriate data block lengths implied by the needs of adversary A.
Auditable Envelopes: Tracking Anonymity
Revocation Using Trusted Computing

Matt Smart and Eike Ritter

School of Computer Science, University of Birmingham, UK


[email protected],
[email protected]

Abstract. In this paper, we discuss a protocol allowing the remote user
of a system providing revocable anonymity to be assured of whether or
not her anonymity is revoked. We achieve this via a novel use of Trusted
Computing and Virtual Monotonic Counters. The protocol has wide-
ranging scope in a variety of computer security fields, such as electronic
cash, fair exchange and electronic voting.

1 Introduction

A number of fields in computer security consider the anonymity of protocol
users to be of critical importance: in digital cash and electronic commerce, it is
important that rogue users should not be able to trace the spender of a coin, or
to link coins that user has spent with each other. In anonymous fair exchange
protocols, multiple parties exchange items with one another, whilst wishing to
remain anonymous (sometimes for obvious reasons). In electronic voting, the
voter must remain unlinkable to their vote.
However, designers of each of these classes of protocol must consider that there
are sometimes occasions when a user’s anonymity must be revoked — a coin
might be maliciously double-spent, or used for an illegal purchase; a party could
renege on their promise as part of an exchange protocol; a voter may attempt to
vote twice, or may not be a legitimate voter at all¹. The point of this paper is not
to consider for what reason anonymity revocation is required, though: instead,
we note that users whose anonymities are revoked should be made aware of this
fact. In this work, we present a solution to this problem, which is essentially a
digitized version of the “sealed envelope problem” discussed in [1].
Let us consider the physical, paper abstraction of the problem. Alice lives in a
country where it must be possible to link her identity to her vote (though only
authorised entities should be able to make this distinction). When she collects
her ballot paper, her identity is sealed inside a tamper-evident envelope, and
the serial number of her ballot paper is written on the outside. The envelope is
stored securely. Alice votes. Some time later, for whatever reason, someone may
¹ The ability to link a voter to their ballot is actually a legal requirement in the UK [2, 20, 15, 16].

wish to trace Alice’s ballot back to her. After the election, Alice may wish to see
whether her anonymity has been revoked or not. To do this, she merely requests
to see the appropriate envelope from the authorities (i.e., that with her ballot
serial number on it), and verifies that the envelope is still sealed.
We can apply this abstraction to a number of other fields, and it particularly
makes sense when considering payment for goods (we discuss this more in Section
5). However, digitising the (auditable) sealed envelope is not at all trivial: it is
intuitively not possible to simply give the authorities an encrypted copy of Alice’s
identity: if the key is provided with the ciphertext, then Alice has no way to know
whether it has been used. If the key is not provided, then the authorities cannot
do anything with the ciphertext anyway, without contacting Alice (who, as a
rogue user, may deliberately fail to provide information) [1]. As a result, we
must consider that some sort of trusted platform is required, in order for Alice
to be convinced that her anonymity has not been revoked. In this work, we detail
a protocol which uses trusted computing—specifically, the TPM—to assure Alice
in this way.

1.1 Related Work

This paper is potentially relevant to a wide range of fields where revocable
anonymity is important: digital cash, fair exchange, and electronic voting. We
do not specifically address any of these areas, as the way in which they use the
identity of the user is unimportant to us: it is the similarity in the need for the
user’s anonymity that matters. Very little existing work considers auditable re-
vocable anonymity: Kügler and Vogt [11] discuss an electronic payment protocol
in which the spender of a coin can determine (within a fixed period) whether
their anonymity is revoked or not. Although the protocol is attractive, it requires
knowledge a priori of who is to be traced—something which is not possible in
fields such as electronic voting. More generally, Moran and Naor [12] discuss
many high-level theoretical implementations of cryptographic “tamper-evident
seals”, but do not go into detail as to how these would be realised (and seemingly
place a lot of trust in the entity responsible for generating seals).
Ables and Ryan [1] discuss several implementations of a “digital envelope” for
the storage of escrowed data using the TPM. Their second solution is appealing,
and uses a third party with monotonic counters. However, their solution allows
only a single envelope at a time to be stored (as the TPM only permits the
usage of one monotonic counter at a time), and also would require Alice herself
to generate her identity (something which would not be appropriate for us).
The work of Sarmenta et al. [14] on virtual monotonic counters using a TPM is
crucial to our work, as we use a new monotonic counter for each anonymous user,
allowing each to track their own anonymity. We discuss this more in Section 2.1.

1.2 Motivation and Contribution

In this work, we introduce a new protocol, not tied to any specific class of user-
anonymous security protocols (electronic commerce, voting, et cetera), which
uses the TPM to assure a user of whether or not their identity has been revealed:
we call this property non-repudiation of anonymity revocation. Our motivation
is clear: if we are to have protocols providing anonymity revocation, then it
must be possible for a user to determine when their anonymity is revoked. The
reasoning for this is twofold: not only does a user have the right to know when
they have been identified (generally, as a suspect in a crime), but the fact that
anonymity revocation is traceable is also beneficial:
. . . the detectability of inappropriate actions and accountability for orig-
ination suffices to prevent misbehaviour from happening [22, p. 5]
Though protocols exist in electronic commerce which permit this ([11], for ex-
ample), the techniques used are not widely applicable, for reasons discussed
above. We consider preliminary discussions of “escrowed data” stored in a dig-
ital envelope which use monotonic counters [1], and discuss the use of virtual
monotonic counters [14] to allow multiple tokens to be securely stored by a single
entity.

1.3 Structure
In Section 2, we provide some background in Trusted Computing and the TPM.
In Section 3, we discuss our trust requirements for the protocol, which itself
is presented in Section 4. We discuss applicability of the protocol in Section 5,
give a short discussion on the security of the protocol in Section 6, and finally
conclude.

2 Background: Trusted Computing

Trusted Computing is the notion that it is possible to enforce the behaviour of a
computer, through the provision of specific “trustworthy” hardware. This allows
users of a machine to be convinced that it is in the correct state, and is not com-
promised. Trusted Computing requirements are generally realised via the use of
a Trusted Platform Module (TPM) [18, 19], a tamper-resistant secure coproces-
sor responsible for a number of functions, including random number generation,
RSA key generation, and encryption/decryption. The TPM is capable of remote
attestation as to the state of its registers, and of sealing data: encrypting it such
that it can only be opened by a TPM in the correct state.
The TPM has many other functionalities, including Direct Anonymous At-
testation, used to anonymously attest to the state of a machine [3]. These func-
tionalities are accessed by the host through a predefined set of commands (or
API). For brevity we do not expand further on these functionalities, but instead
direct the interested reader to [5], which provides a solid introduction to Trusted
Computing and the TPM. It suffices to state that we do not modify the API in
any way with our work.
2.1 Physical and Virtual Monotonic Counters

For us, one of the most important capabilities of the TPM is the availability
of secure monotonic counters. Monotonic counters are tamper-resistant coun-
ters embedded in the TPM, which, once incremented, cannot be reverted to a
previous value: this reduces the likelihood of replay attacks, for many applica-
tions [14].
Unfortunately, the 1.2 version of the TPM, being a low-cost piece of hardware,
has only four monotonic counters, of which only one can be used in any boot
cycle. As noted by Sarmenta et al., the intention here was to implement a higher
number of virtual monotonic counters on a trusted operating system. We would
rather not require trusted operating systems, however. The work of Sarmenta et
al. [14] demonstrates the creation of an unbounded number of virtual monotonic
counters with a non-trusted OS.
A virtual monotonic counter is a mechanism (in untrusted hardware or soft-
ware) which stores a counter value, and provides two commands to access it:
ReadCounter, which returns the current value, and IncrementCounter, which in-
creases the counter’s value. The counter’s value must be non-volatile, increments
and reads must be atomic, and changes must be irreversible. Note that virtual
monotonic counters are not stored on the TPM, but instead on untrusted stor-
age, allowing a far higher number of simultaneous counters to be used.
The manner in which Sarmenta et al. implement their solution means that the
counter is not tamper-resistant, but merely tamper-evident. This is sufficient for
our purposes. The counter produces verifiable output in the form of unforgeable
execution certificates, via a dedicated attestation identity key (AIK) for each
counter. The counter uses this key, together with nonces, to produce signed
execution certificates to send to users.
In the implementation of virtual monotonic counters suggested by Sarmenta
et al. [14, p. 31], the counter mechanism is stored in full on the host (rather than
on the host’s TPM), and supports the following functions (a minimal interface sketch follows the list):
– CreateNewCounter(nonce): returns a CreateCertificate containing the ID num-
ber of the counter, and the nonce given as a parameter
– ReadCounter(CounterID,Nonce): returns a ReadCertificate containing the value
of the counter, the counter’s ID and the given nonce
– IncrementCounter(CounterID,Nonce): increments the counter, and returns an
IncrementCertificate containing the new value of the counter, counter ID and
nonce
– DestroyCounter(CounterID,Nonce): destroys the counter.
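
As a concrete illustration of this interface, here is a minimal Python sketch of a log-based host. All class and field names are our own assumptions; in the real scheme the certificates are produced and signed inside the TPM with a per-counter AIK, while the host only stores and forwards them.

import itertools

class VirtualCounterHost:
    # Hypothetical host-side bookkeeping for log-based virtual counters.
    def __init__(self):
        self._ids = itertools.count(1)
        self.global_clock = 0      # stands in for the TPM's physical counter
        self.latest = {}           # CounterID -> last increment certificate

    def create_new_counter(self, nonce):
        cid = next(self._ids)
        self.latest[cid] = {"counter_id": cid, "value": self.global_clock}
        return {"type": "CreateCertificate", "counter_id": cid, "nonce": nonce}

    def increment_counter(self, cid, nonce):
        self.global_clock += 1     # TPM_IncrementCounter in a logged session
        cert = {"type": "IncrementCertificate", "counter_id": cid,
                "value": self.global_clock, "nonce": nonce}
        self.latest[cid] = cert
        return cert

    def read_counter(self, cid, nonce):
        return {"type": "ReadCertificate", "counter_id": cid,
                "value": self.latest[cid]["value"], "nonce": nonce}

    def destroy_counter(self, cid, nonce):
        del self.latest[cid]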
In this work, we assume availability of the virtual monotonic counters defined
by Sarmenta et al.. To avoid use of commands that are not included in the
TPM API, we adopt the first, log-based scheme which they define [14, p. 32]. As
noted earlier, the TPM has a limited number of physical monotonic counters, of
which only one at a time can be used. The log-based implementation of virtual
monotonic counters uses a physical monotonic counter as a “global clock”, where
the time t is simply the value of the TPM’s physical counter at a given time.
The value of a virtual monotonic counter is then the value of the global clock
at the last time the virtual counter’s IncrementCounter command was executed.
This consequently means that the value of a counter each time it is incremented
cannot be predicted deterministically—we can merely say with certainty that
the value of the counter will only monotonically increase. As we discuss further
in the conclusion, this does not present a problem for us.
The IncrementCounter operation is then implemented using the TPM’s API
command TPM IncrementCounter, inside an exclusive, logged transport session,
using the ID of the counter in question, and a nonce nS generated by the client
to prevent replay. The result of the final TPM ReleaseTransportSigned operation
is a data structure including the nonce, and a hash of the transport session log,
which is used to generate an IncrementCertificate.
The ReadCounter operation is more complex, and involves the host (the “iden-
tity provider”, idp, for us) keeping an array of the latest increment certificates
[14, p. 33] for each virtual counter, returning the right one when the client re-
quests it. In order to prevent reversal of the counter’s value, however, the host
must send the current time certificate, the current increment certificate, and all
of the previous increment certificates. Verification of the counter’s value then
involves checking that each previous increment certificate is not for the counter
whose ID has been requested.
We do not go into further implementation specifics, but instead refer interested
readers to [14, p. 32] for further information.
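
The client-side check just described can be sketched as follows; the certificate dictionaries and the verify_sig helper (which would check the AIK signature and the freshness nonce) are assumptions carried over from the sketch above, not the exact format of [14].

def verify_read(counter_id, nonce, incr_cert, later_certs, verify_sig):
    # incr_cert: the increment certificate claimed to be the latest for
    # counter_id; later_certs: every increment certificate issued since.
    if not verify_sig(incr_cert, nonce):
        raise ValueError("bad increment certificate")
    if incr_cert["counter_id"] != counter_id:
        raise ValueError("certificate is for a different counter")
    for cert in later_certs:
        if not verify_sig(cert, nonce):
            raise ValueError("bad log entry")
        if cert["counter_id"] == counter_id:
            # A later increment of our counter was hidden: the host is
            # trying to report a stale (reversed) value.
            raise ValueError("counter value rollback detected")
    return incr_cert["value"]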

3 Trust Model
In our work, we make the following assumptions:
1. Alice and the identity provider idp (discussed in the next section) trust the
TPM in Alice’s machine, by virtue of it attesting to its state (and therefore,
the state of Alice’s machine)
2. All users trust idp, by virtue of it attesting to its state (and therefore, the
state of idp’s machine)
3. The judge is trusted to only authorise anonymity revocation where necessary
In a strict sense, it is not necessary for users to deliberately place trust in any
TPM (whether it is in the identity provider’s machine, or the user’s): both the
user’s and the identity provider’s TPMs have the ability to verify the correctness
of the other’s TPM and host machine, where the TPM itself is assumed to be a
tamper-resistant hardware module. Instead, therefore, any trust we place must
be in the manufacturer of the TPM, to construct such a device according to its
correct specification. Note as a consequence that idp is not a trusted third party:
the fact that it is worthy of trust can be determined by any user.

4 Protocol
We begin by explaining our protocol from a high level, and then go into more
implementation specific detail. Note that we assume the availability of standard
Participants: Alice; ID Provider (idp); Service Provider (s). The initial exchange
takes place within an encrypted transport session between Alice and idp:
1. idp: IDP-PCR INFO := TPM Quote(. . . , ca, . . . )
2. Alice: ALICE-PCR INFO := TPM Quote(. . . , ci, . . . )
3. Alice's TPM: (pkTA, skTA) := TPM CreateWrapKey(binding, ALICE-PCR INFO, kTA, . . . )
4. idp's TPM: (pkI, skI) := TPM CreateWrapKey(binding, IDP-PCR INFO, kI, . . . )
5. Alice → idp: nonce nc
6. idp: CreateCounter(nc)
7. idp → Alice: idm := {id, CreateCertificate, signidp(hash(id||CounterID))}pkTA,
   where id := {id}pkI; Alice: TPM LoadKey2(kTA, . . . ), TPM UnSeal(idm, kTA)
8. Alice → idp: ReadCounter(CounterID, na)
9. idp → Alice: ReadCertificate
10. Alice → s: {m, CounterID, id, signidp(hash(id||CounterID))}s
11. s → idp: signJudge(id, CounterID, nS);
    idp: IncrementCounter(CounterID, nS), TPM LoadKey2(kI, . . . ), TPM UnSeal(id, kI)
12. idp → s: signidp({id}s)
13. Alice → idp: ReadCounter(CounterID, na)
14. idp → Alice: ReadCertificate
Fig. 1. Our Revocation Audit Protocol

public key cryptographic techniques, hashing and signature protocols. Our sce-
nario is as follows. Alice wishes to engage in a user-anonymous protocol with a
service provider, s: Alice normally remains anonymous, but s has some interest
in revoking her anonymity under certain circumstances (s can obtain a signed
request for the user’s identity from a judge). Alice would like to know whether
or not her anonymity has been revoked at some point after her interaction with
s is complete.
In order to present a solution, we introduce a third party, the identity provider,
idp. The identity provider runs trusted hardware, and attests to the state of
his machine in an authenticated encrypted transport session with Alice’s TPM
(again, it should be noted that this means idp is not a trusted third party,
but a party which proves that it is trustworthy). Once Alice is assured that
she can trust idp’s machine, and idp is likewise assured of the trustworthiness
of Alice’s machine, idp generates a virtual monotonic counter specifically for
Alice’s identity, using a nonce sent by Alice. He then encrypts Alice’s identity
using a key generated by Alice’s TPM. This is concatenated with a certificate
produced by the creation of the counter, hashed, and signed. The signature,
certificate and encrypted ID—which we will refer to as a pseudonym—are sent
to Alice, encrypted with a binding wrap public key to which only her TPM has
the private counterpart.
Alice now reads the counter generated for her. She can then send whatever
message is necessary to s, along with the particulars of the counter relating to
her ID, and idp’s signature thereof. The service provider is able to verify the
validity of the signed hash on Alice’s identity, and can store it for further use.
Should s request to view Alice’s identity, he contacts idp with a signature
generated by a judge, on the pseudonym and particulars of the certificate (the
details originally sent to him). The protocol dictates that idp first increments
the virtual monotonic counter associated with the certificate received, and can
then load the appropriate key, and decrypt Alice’s identity. Alice is later able to
request the value of her monotonic counter once again, allowing her to determine
whether or not her anonymity was revoked.

4.1 Implementation Steps

We now present a more detailed implementation. A diagram for the protocol is
given in Figure 1. The protocol can be split into two stages: in the first, Alice
registers her identity with idp, and receives a pointer to a virtual monotonic
counter back. In the second, she interacts with s, who may wish to obtain her
identity. She is then able to audit this process.

Stage 1. Alice begins with her TPM and the TPM of the identity provider,
idp, engaging in an encrypted transport session². She invents a nonce, ca, and
challenges idp’s TPM to reveal the state of a number of its platform configuration
registers (PCRs—a set of protected memory registers inside the TPM, which
contain cryptographic hashes of measurements based on the current state of the
host system), using the TPM Quote command (with ca being used for freshness).
Alice can use this information to determine if the TPM is in a suitable state (i.e.,
if its host machine is running the correct software). The identity provider’s TPM
does the same with Alice’s TPM, using a different nonce ci . In this manner, both
platforms are assured of the trustworthiness of the other.
Alice proceeds to have idp’s TPM generate a fresh RSA keypair kI = (pkI , skI )
using the TPM CreateWrapKey command, binding the key to the PCR informa-
tion she acquired. This ensures that only a TPM in the same state as when
the TPM Quote command was executed is able to open anything sealed with
pkI . Similarly, idp’s TPM has Alice’s TPM generate a binding wrap keypair
kT A = (pkT A , skT A ), where the private key is accessible only to Alice’s TPM.
Next, idp receives a nonce nc from Alice. He then creates a virtual monotonic
counter [14], which he ‘ties’ to Alice’s identity, using the CreateNewCounter com-
mand with nc . This returns a CreateCertificate, detailing the ID number of the
counter, CounterID, and the nonce used to create it. idp proceeds to produce a
pseudonym id = {id}pkI for Alice, an encryption of her identity (which we assume
it knows) using the TPM Seal command and the binding wrap key pkI . id and
the ID of the counter, CounterID, are concatenated and hashed. The signed hash,

² We note that idp could also undergo direct anonymous attestation [3] with Alice to attest to the state of his machine. However, this is unnecessary for us, as neither Alice nor idp need to (or could) be anonymous at this stage.
pseudonym id and the aforementioned CreateCertificate are sent to Alice, en-
crypted with the binding wrap key pkT A generated for her TPM. The ID provider
stores CounterID and id locally. Alice has her TPM decrypt the message she re-
ceives, and then verifies the hash. Note that only Alice’s TPM, in the correct
state, can decrypt the message sent to her.
Finally, Alice generates a fresh nonce na , and contacts idp to request the value
of the counter, via the ReadCounter(CounterID, Nonce) command. She receives
back a ReadCertificate containing the counter’s value, the CounterID and the
nonce she sent.
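
In outline, idp's side of Stage 1 can be sketched as below (Python, reusing the VirtualCounterHost from Section 2.1). tpm_seal and sign_idp are hypothetical placeholders for the TPM Seal operation under pkI and idp's signature key; attestation, the transport encryption under pkTA, and error handling are omitted.

import hashlib

def tpm_seal(data: bytes, pk: bytes) -> bytes:
    # Placeholder: real sealing binds data to PCR state inside the TPM.
    return b"sealed[" + pk + b"]" + data

def sign_idp(digest: bytes) -> bytes:
    # Placeholder for a signature under idp's key.
    return b"sig[" + digest + b"]"

def idp_stage1(host, alice_id: bytes, n_c: bytes, pk_I: bytes):
    create_cert = host.create_new_counter(n_c)            # message 6
    cid = create_cert["counter_id"]
    pseudonym = tpm_seal(alice_id, pk_I)                  # id := {id}pkI
    digest = hashlib.sha256(pseudonym + str(cid).encode()).digest()
    package = (pseudonym, create_cert, sign_idp(digest))  # message 7 payload
    record = {"counter_id": cid, "pseudonym": pseudonym}  # stored by idp
    return record, package                                # package goes to Alice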

Stage 2. The second stage, which can happen at any time in future, is where
Alice communicates with whichever service provider she chooses (note that she
may choose to use the same id token with multiple service providers, or may
generate a new token for each—it would obviously be sensible to do the latter,
to prevent linkability between service providers). Where Alice’s message (which
might be a tuple containing her vote, or a coin, or some exchangeable object) is
represented by m, she sends the tuple

{m, CounterID, id, signidp (hash(id||CounterID))}s

to s. Note that the whole message is encrypted with the public key of the service
provider, preventing eavesdropping. The message m is further processed (how
is outside of the scope of this paper). The signed hash is examined to confirm
that it is indeed a valid signature, by idp, on the pseudonym and Counter ID
provided. The service provider can then store CounterID, id for later use.
Now, Alice can, at any point, check the value of her virtual monotonic
counter. The service provider may wish to discover her identity, and so will
seek a signed request from a judge, generating a nonce nS . He sends this re-
quest, signJudge (id, nS , CounterID), to idp. Note that in order to decrypt Alice’s
pseudonym, idp must use the key kI —bound to the correct state of his TPM’s
PCRs—which Alice selected. This means that he needs to be in the correct
state. He begins by incrementing Alice’s virtual monotonic counter using the
command IncrementCounter(CounterID, nS ), and then loads the appropriate key
kI using the TPM LoadKey2 command. He can then decrypt Alice’s identity using
TPM UnBind. Finally, idp returns id, encrypted for s. Again, what s does with
Alice’s identity is outside of the scope of this paper.
At any later time, Alice can check the virtual monotonic counter value by
contacting idp and executing the ReadCounter command with a fresh nonce na. If
idp was correctly following the protocol (which, using a verified TPM, he must
have been), Alice will know—by determining whether the value of the counter
has increased—if her identity has been revealed.
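
The corresponding revocation path on idp's side is sketched below; verify_judge and tpm_unbind are again assumed helpers (the latter standing for TPM LoadKey2 followed by TPM UnBind with the PCR-bound key kI). The essential ordering is that the counter increment precedes decryption, so the audit trail cannot be skipped.

def idp_revoke(host, records, request, verify_judge, n_S, k_I, tpm_unbind):
    # request carries signJudge(id, CounterID, nS) from the service provider.
    if not verify_judge(request):
        raise PermissionError("no valid judge authorisation")
    cid = request["counter_id"]
    # Increment first: Alice can later detect the revocation by reading
    # her counter, whatever happens below.
    host.increment_counter(cid, n_S)
    identity = tpm_unbind(records[cid]["pseudonym"], k_I)
    return identity   # returned to s as signidp({id}s)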
A key point of the protocol is that the identity provider is automatically
trusted to follow it, as a consequence of the encrypted transport session in Stage
1. When Alice quotes the PCRs of the identity provider’s TPM, she makes it gen-
erate a key bound to the correct machine state that it is currently in (presumably,
Alice would terminate any session where an erroneous result of TPM Quote was
reported). Even if idp were to become corrupted after the encrypted transport
session, this corruption would alter its TPM’s PCRs, protecting Alice’s identity
from rogue decryption.

5 Applicability
In this section, we discuss some use cases for the protocol: as mentioned earlier,
we believe it to have a number of areas of applicability. Here we focus on digital
cash and electronic voting, two classes of protocol where anonymity is critical.

5.1 When Does Alice Request a Pseudonym?

We mentioned in Section 4.1 that Alice is free to have idp generate an unlimited
number of pseudonyms for her, or just one, depending on her preference. Com-
mon sense dictates that, should Alice wish the services she interacts with to be
unable to link her transactions together, she should generate a fresh pseudonym
for each service she uses. For services which a user uses only once (say, par-
ticipating in an election), this solution is sufficient. For those which she uses
multiple times—such as spending multiple coins in a digital cash system—we
consider whether a solution requiring Alice to contact idp multiple times for dif-
ferent pseudonyms is suitable. Digital cash protocols such as [10] typically secure
a spender’s identity by encrypting it with a key to which only one, trusted, entity
has access. When coins are withdrawn, the identities of those coins are stored
with the encrypted ID of their owners in a database. Consequently, as in [10],
though the digital coin itself does not contain Alice’s identity, it contains pointers
with which her identity can be looked up in the database.
We note that, in [10], whenever Alice withdraws a coin, she encrypts her
identity using fresh symmetric keys for two separate parties: the Bank and the
Ombudsman, both of whom have to cooperate to later retrieve her anonymity.
In fact, our protocol fits very well into this model. Alice still selects two fresh
symmetric keys, but now encrypts not her plaintext ID, but the tuple
⟨CounterID, id, signidp(hash(id||CounterID))⟩,
obtained from idp. As idp is trusted to legitimately produce signatures on identi-
ties, the Bank and Ombudsman can trust the encrypted ID to be legitimate, and
issue the coin as before. Should revocation be required, the Bank now simply
contacts idp, allowing Alice to determine that this has occurred.
The advantage here is that Alice’s withdrawn coins remain unlinkable—her
ID is not encoded into them, and every instance of her ID stored by the Bank is
not only encrypted with the key idp generated for it, but also with session keys
generated by Alice. We note, of course, that [10] is now quite dated. However,
it represents a class of digital cash protocol in which the spender’s identity is
stored encrypted in a database, and is used here for its simplicity. A range of
other digital cash systems could use our protocol in the same way [4, 6, 17, 21],
or by simply storing the pseudonym in the coin [7–9, 13].
5.2 Digital Cash Examples

If we take any digital cash protocol where the identity of the coin spender is
in some way encrypted (whether stored on a remote server [10] or encoded into
the coin itself [13]), we can envisage a situation in which a user either spends
a digital coin twice, or participates in an illegal transaction. An authority will
have some interest in this, and thus requests that the Bank trace the coins spent
by the user, in order to identify her.
In the case of the protocols listed above, the identity of the user is simply
decrypted (albeit by two separate authorities in the first case). The user has no
way to know that she was traced, until she is apprehended! Now, we modify each
protocol such that:
– in the case of protocols where the spender ID is encoded onto the coin, the
coins instead contain the user’s identity—encrypted using the wrap key made
for idp—and the CounterID, with the signed hash of both;
– in the case of a database storing the spender ID, with a lookup value in
each key, we proceed as discussed above, with the spender providing the
idp-encrypted ID token which is then stored in the database.
This done, the coin spender knows that each coin can only be linked back to
her with the cooperation of idp, who (since he is following the protocol) must
increment the appropriate counter, allowing the spender to know if she is iden-
tified. Note that a protocol providing revocation auditability already exists [11],
but requires knowledge a priori of who is to be traced, making the protocol
unsuitable for other applications.

5.3 Electronic Voting Example

Voting is generally considered to be an area where anonymity of the user (voter)
should be unequivocal. However, in some countries (such as the UK, and New
Zealand), it is a legal requirement that a voter’s ballot paper must be linkable
back to them [20]. Smart and Ritter’s work on revocable anonymity in electronic
voting [15, 16] stores the voter’s identity in an encrypted manner in the ballot. If
instead we store the encrypted ID, with the CounterID and signed hash of both,
we achieve the same property as above: if the authorities need to trace a voter,
they contact the identity provider. If a voter is traced, they know that they will
be able to determine this was the case, because the identity provider will have
incremented their virtual monotonic counter.
An interesting problem is how to deal with coercion resistance: if Alice receives
an encrypted identity from idp, and then sends it to a vote tallier who places it on
the bulletin board unchanged, then a coercer can see that Alice has voted (this is
undesirable if we wish to prevent forced-abstention attacks). In protocol vote2,
permitting revocable anonymity [16, p. 197–9], revocation is effected by having
Alice send the tuple id = {id}Judge , SignR (id) to the talliers. The ciphertext id
is produced by the registrar, R, during registration.
This is followed by an encrypted transport session between the voter’s TPM
and a Tallier, in which a sealing wrap key used to encrypt designated verifier
proofs of re-encryption is produced. Our change to the protocol is again quite
small. In the registration phase, once the “join” stage of the protocol is complete,
Alice sends her idp-encrypted id to R, who performs an ElGamal encryption of
it using the Judge’s public key. Before the talliers post this ciphertext to the
bulletin board, it is randomly re-encrypted. Should revocation be required, the
co-operation of both the Judge and idp is required, and Alice will again be able
to see that this has occurred.

6 Analysis
In this section we briefly discuss the security properties of the protocol. The
main property that we achieve is that Alice is always able to determine whether
her anonymity is revoked or not (non-repudiation of anonymity revocation). This
property is satisfied as a result of the knowledge that, having attested to the state
of his TPM (and hence, the software being run on the host), idp will either:
– act according to the protocol specification, or
– be unable to decrypt Alice’s identity.
Our reasoning is as follows. If the Identity Provider adheres to the specification,
he generates a counter for Alice’s identity using a nonce she supplies. He encrypts
her identity using a keypair which can only be used again by a TPM in the same
state which Alice originally accepted.
The information that idp generates to send to Alice must be correct, other-
wise idp is deviating from the protocol. It follows that, when s requests Alice’s
anonymity to be revoked, idp must first increment the associated counter. If idp
does deviate from the protocol, he will not be able to use the same key kI later
on to decrypt Alice’s identity, as that key is bound to his original TPM state
(which would change if different, or malicious, software were used).
Thus, the most a rogue idp could achieve is suggesting Alice’s anonymity has
been revoked when it has not (i.e., tampering with the counter), opening up idp
to further questioning (it is hence not in the identity provider’s interest to lie to
Alice in this way). Since the counter must always be incremented before Alice’s
identity is decrypted, Alice will always know when she has been identified, by
querying the counter.
We next consider Alice’s interaction with s. In her communication with s, Alice
provides her pseudonym and the counter ID tied to it, together with a signed
hash of these values (as originally provided to her by idp). This convinces s that
the identity provided is genuine. This leads us to the issue of eavesdropping at-
tacks, allowing a user to illegitimately obtain the pseudonym of another user,
and thus ‘frame’ an innocent victim for a crime. Note that without identifying
Alice immediately, s cannot be further convinced that the pseudonym is indeed
hers. However, our protocol prevents this problem from arising: in the message
idm sent from idp to Alice, Alice’s pseudonym and counter information are en-
crypted using a binding wrap key, meaning that only her TPM can obtain these
values. The only other message where these two values are together is in Alice’s
communication with s, and here, the entire message is encrypted for s.
The message containing Alice’s actual identity is signed by idp before being
sent back to s. Hence, providing s trusts idp, he will always obtain Alice’s le-
gitimate identity by following the protocol. We might consider that s does not
trust idp, in which case we could request that s and idp also undergo some sort
of attestation, like that between Alice and idp. In the case of the digital cash
example presented earlier, we could require that the Bank and Ombudsman each
force idp to attest to its state.

Trustworthiness of the Service Provider. Note that, as we have already
mentioned, we do not consider how s behaves, as it is outside of the scope of
this protocol. However, we now discuss a possible course of action to prevent a
rogue s replaying the counter and pseudonym values sent to him by an honest
user. In order to mitigate this issue, we need to force the pseudonym’s actual
owner to prove her ownership. We therefore alter some of the messages in the
protocol (numbered according to Figure 1, where messages 10a–d come between
messages 10 and 11):
7. idp→Alice: {id, CreateCertificate, signidp(hash(id || hash(CounterID)))}pkTA
8. Alice→idp: {ReadCounter(CounterID, na)}pkI
9. idp→Alice: {ReadCertificate}pkTA
10. Alice→s: {m, id, hash(CounterID), signidp(id || hash(CounterID))}s
10a. s→Alice: cctr
10b. Alice→s: hash(CounterID || cctr)
10c. s→idp: id, cctr
10d. idp→s: hash(CounterID || cctr)
11. s→idp: signJudge(id, nS)
These changes are appropriate if we wish to prevent a rogue s from gaining
an ⟨id, CounterID⟩ pair with which to frame another user. We begin by altering
what idp sends to Alice, such that the signed hash now itself contains a hash of
CounterID. Both the request and result of reading the counter are encrypted for
idp’s and Alice’s TPM respectively.
The messages from 10 onwards are the most important. Rather than sending
her counter’s ID in the clear for s, Alice sends a hash of it, which fits in with the
signed hash provided by idp. s now returns a challenge cctr , which Alice hashes
with CounterID and returns. In 10c and 10d, s sends the pair ⟨id, cctr⟩ to idp, who
looks up id and returns a hash of its associated CounterID concatenated with the
challenge. This allows s to ensure that Alice really is the owner of the pseudonym
and counter ID she provided. No further changes are necessary, as this prevents
s from stealing Alice’s pseudonym and counter ID: s would be unable to generate
message 10b as he never sees CounterID in the clear. Note that consequently,
message 11 also needs to change.
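
The ownership check in messages 10a–10d amounts to a simple challenge–response on the hidden CounterID. A sketch from s's point of view, with assumed callbacks standing in for Alice's answer (10b) and idp's lookup (10c/10d):

import secrets

def s_checks_ownership(pseudonym, alice_respond, idp_lookup) -> bool:
    # s never sees CounterID in the clear; it compares Alice's answer
    # against idp's answer for the counter tied to the pseudonym.
    c_ctr = secrets.token_bytes(16)            # message 10a
    from_alice = alice_respond(c_ctr)          # 10b: hash(CounterID || c_ctr)
    from_idp = idp_lookup(pseudonym, c_ctr)    # messages 10c and 10d
    return secrets.compare_digest(from_alice, from_idp)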
In this section, we have discussed the security properties of our work. Note
that changes to mitigate against a corrupt service provider are only appropriate
where untrustworthy service providers are a risk—hence we do not include these
changes in the main protocol.

7 Conclusions and Future Work

In this paper, we have presented work on a protocol which allows users of a
protocol providing revocable anonymity to audit whether or not their anonymity
is revoked. We have shown how virtual monotonic counters can be used on an
authenticated host to track anonymity revocation, for use with any other class
of security protocol requiring revocable anonymity. Further, we addressed how
to mitigate the actions of a corrupt service provider. This work makes significant
steps in auditable anonymity revocation, a field which has not been considered
in detail before.
There are factors which we would like to consider in future work. Some of
those are motivated by the issues Sarmenta et al. discuss regarding log-based
virtual monotonic counters in [14]. The counters are non-deterministic, being
based on the single counter in use by the TPM in any one power cycle. This
means that counter increment values are unpredictable—not a problem for our
application, but potentially a cause of high overhead. Indeed, the ReadCertificate
for a counter would include “the log of all increments of all counters. . . since the
last increment”. The size of such a certificate could be substantial. Power failures
mid-cycle on idp could also cause the counters to become untrustworthy.
These issues are mitigated by the idea of Merkle hash tree-based counters [14,
pp. 34–6] which would require changes to the TPM’s API. It is for this reason
that we did not adopt this solution, but would instead look to it for future work.
We would also like to consider a formal analysis of the security properties of the
protocol.
One might also consider whether the third party, idp, is required for this pro-
tocol to work: an exemplar alternative might be in which Alice and s interact
only with each other, assuring trustworthiness via a protocol such as DAA [3].
Alice seals her identity using a key generated by her TPM, meaning that interac-
tion with her TPM is again required to reveal her identity (and thereby, Alice is
informed that this has happened). This solution will not work: as we mentioned
earlier, a rogue Alice would rather switch her machine off than risk detection. Us-
ing a high-availability third party, which proves itself to be following the correct
protocol, mitigates this problem.
We feel the protocol we have presented has wide-ranging applicability to a
number of user-anonymous protocols—particularly those in digital cash and elec-
tronic voting—allowing all users subject to revocable anonymity to be assured
of whether or not they can be identified.
References
1. Ables, K., Ryan, M.D.: Escrowed Data and the Digital Envelope. In: Acquisti, A.,
Smith, S.W., Sadeghi, A.-R. (eds.) TRUST 2010. LNCS, vol. 6101, pp. 246–256.
Springer, Heidelberg (2010)
2. Blackburn, R.: The Electoral System in Britain. Macmillan, London (1995)
3. Brickell, E., Camenisch, J., Chen, L.: Direct Anonymous Attestation. In: Proceed-
ings of the 11th ACM Conference on Computer and Communications Security,
CCS 2004, pp. 132–145. ACM (2004)
4. Camenisch, J., Maurer, U., Stadler, M.: Digital Payment Systems with Passive
Anonymity-Revoking Trustees. Journal of Computer Security 5(1), 69–89 (1997)
5. Challener, D., Yoder, K., Catherman, R., Safford, D., van Doorn, L.: A Practical
Guide to Trusted Computing. IBM Press, Boston (2008)
6. Chen, Y., Chou, J.S., Sun, H.M., Cho, M.H.: A Novel Electronic Cash System
with Trustee-Based Anonymity Revocation From Pairing. Electronic Commerce
Research and Applications (2011), doi:10.1016/j.elerap.2011.06.002
7. Fan, C.I., Liang, Y.K.: Anonymous Fair Transaction Protocols Based on Electronic
Cash. International Journal of Electronic Commerce 13(1), 131–151 (2008)
8. Fuchsbauer, G., Pointcheval, D., Vergnaud, D.: Transferable Constant-Size Fair E-
Cash. In: Garay, J.A., Miyaji, A., Otsuka, A. (eds.) CANS 2009. LNCS, vol. 5888,
pp. 226–247. Springer, Heidelberg (2009)
9. Hou, X., Tan, C.H.: On Fair Traceable Electronic Cash. In: Proceedings, 3rd An-
nual Communication Networks and Services Research Conference, pp. 39–44. IEEE
(2005)
10. Jakobsson, M., Yung, M.: Revokable and Versatile Electronic Money (Extended
Abstract). In: CCS 1996: Proceedings of the 3rd ACM Conference on Computer
and Communications Security, pp. 76–87. ACM Press, New York (1996)
11. Kügler, D., Vogt, H.: Off-line Payments with Auditable Tracing. In: Blaze, M. (ed.)
FC 2002. LNCS, vol. 2357, pp. 269–281. Springer, Heidelberg (2003)
12. Moran, T., Naor, M.: Basing Cryptographic Protocols on Tamper-Evident Seals.
Theoretical Computer Science 411(10) (2010)
13. Pointcheval, D.: Self-Scrambling Anonymizers. In: Frankel, Y. (ed.) FC 2000.
LNCS, vol. 1962, pp. 259–275. Springer, Heidelberg (2001)
14. Sarmenta, L.F., van Dijk, M., O’Donnell, C.W., Rhodes, J., Devadas, S.: Virtual
Monotonic Counters and Count-Limited Objects using a TPM without a trusted
OS. In: Proceedings of the First ACM Workshop on Scalable Trusted Computing,
STC 2006, pp. 27–42. ACM, New York (2006)
15. Smart, M., Ritter, E.: Remote Electronic Voting with Revocable Anonymity. In:
Prakash, A., Sen Gupta, I. (eds.) ICISS 2009. LNCS, vol. 5905, pp. 39–54. Springer,
Heidelberg (2009)
16. Smart, M., Ritter, E.: True Trustworthy Elections: Remote Electronic Voting Using
Trusted Computing. In: Calero, J.M.A., Yang, L.T., Mármol, F.G., Garcı́a Villalba,
L.J., Li, A.X., Wang, Y. (eds.) ATC 2011. LNCS, vol. 6906, pp. 187–202. Springer,
Heidelberg (2011)
17. Tan, Z.: An Off-line Electronic Cash Scheme Based on Proxy Blind Signature. The
Computer Journal 54(4), 505–512 (2011)
18. TCG: Trusted Computing Group: TPM Main: Part 2: Structures of the TPM,
Version 1.2, Revision 103 (October 2006), http://bit.ly/camUwE
19. TCG: Trusted Computing Group: TPM Main: Part 3: Commands, Version 1.2,
Revision 103 (October 2006), http://bit.ly/camUwE
20. The Electoral Commission: Factsheet: Ballot Secrecy (December 2006),
http://www.electoralcommission.org.uk/ data/assets/
electoral commission pdf file/0020/13259/Ballot-Secrecy-2006-12
23827-6127 E N S W .pdf
21. Wang, C., Lu, R.: An ID-based Transferable Off-Line e-Cash System with Revok-
able Anonymity. In: Proceedings, International Symposium on Electronic Com-
merce and Security, ISECS 2008, pp. 758–762. IEEE (2008)
22. Weber, S.G., Mühlhäuser, M.: Multilaterally Secure Ubiquitous Auditing. In:
Caballé, S., Xhafa, F., Abraham, A. (eds.) Intelligent Networking, Collaborative
Systems and Applications. SCI, vol. 329, pp. 207–233. Springer, Heidelberg (2010)
Lockdown: Towards a Safe and Practical Architecture
for Security Applications on Commodity Platforms

Amit Vasudevan1, Bryan Parno2,⋆, Ning Qu3,⋆⋆, Virgil D. Gligor1, and Adrian Perrig1
1 CyLab/Carnegie Mellon University
{amitvasudevan,gligor,perrig}@cmu.edu
2 Microsoft Research

[email protected]
3 Google Inc.

[email protected]

Abstract. We investigate a new point in the design space of red/green sys-
tems [19,30], which provide the user with a highly-protected, yet also highly-
constrained trusted (“green”) environment for performing security-sensitive
transactions, as well as a high-performance, general-purpose environment for all
other (non-security-sensitive or “red”) applications. Through the design and im-
plementation of the Lockdown architecture, we evaluate whether partitioning,
rather than virtualizing, resources and devices can lead to better security or per-
formance for red/green systems. We also design a simple external interface to
allow the user to securely learn which environment is active and easily switch
between them. We find that partitioning offers a new tradeoff between security,
performance, and usability. On the one hand, partitioning can improve the secu-
rity of the “green” environment and the performance of the “red” environment (as
compared with a virtualized solution). On the other hand, with current systems,
partitioning makes switching between environments quite slow (13-31 seconds),
which may prove intolerable to users.

1 Introduction
Consumers currently use their general-purpose computers to perform many sensitive
tasks; they pay bills, fill out tax forms, check account balances, trade stocks, and access
medical data. Unfortunately, increasingly sophisticated and ubiquitous attacks under-
mine the security of these activities. Red/green systems [19,30] have been proposed as
a mechanism for improving user security without abandoning the generality that has
made computers so successful. They are based on the observation that users perform
security-sensitive transactions infrequently, and hence enhanced security protections
need only be provided on demand for a limited set of activities. Thus, with a red/green
system, the user spends most of her time in a general-purpose, untrusted (or “red”) en-
vironment which retains the full generality of her normal computer; i.e., she can install
arbitrary applications that run with good performance. When the user wishes to perform
a security sensitive transaction, she switches to a trusted (or “green”) environment that
includes stringent protections, managed code, network and services at the cost of some
performance degradation.
⋆ This work was done while Bryan Parno was still at CyLab/Carnegie Mellon University.
⋆⋆ This work was done while Ning Qu was still at CyLab/Carnegie Mellon University.


The typical approach to creating a red/green system relies on virtualization to isolate
the trusted and untrusted environments [19,30]. While straightforward to implement,
this approach has several drawbacks. First, it requires virtualizing all of the system re-
sources and devices that may be shared between the two environments. From a security
perspective, this introduces considerable complexity [16] into the reference monitor
(i.e., the virtual machine monitor) responsible for keeping the two environments sep-
arate. In addition, even without compromising a reference monitor, actively sharing
resources by allowing both environments to run simultaneously exposes side-channels
that can be used to learn confidential information [36,9,31,18]. From a performance per-
spective, the interposition necessary to virtualize devices adds overhead to both trusted
and untrusted applications [16].
Through our design and implementation of the Lockdown architecture, we investi-
gate whether partitioning resources can overcome these drawbacks. In particular, Lock-
down employs a light-weight hypervisor to partition system resources across time, so
that only one environment (trusted or untrusted) runs at a time. When switching between
the two environments, Lockdown resets the state of the system (including devices) and
leverages existing support for platform power-management to save and restore device
state. This approach makes Lockdown device agnostic, removes considerable complex-
ity from the hypervisor, and yet maintains binary compatibility with existing free and
commercial operating systems (e.g., Windows and Linux run unmodified). It also al-
lows the untrusted environment to have unfettered access to devices, resulting in near
native performance for most applications, although a small performance degradation
is necessary to protect Lockdown from the untrusted environment. In the trusted en-
vironment, Lockdown employs more expensive mechanisms to keep the environment
pristine. For example, Lockdown only permits known, trusted code to execute. Since
this trusted code may still contain bugs, Lockdown ensures that trusted applications can
only communicate with trusted sites. This prevents malicious sites from corrupting the
applications, and ensures that even if a trusted application is corrupted, it can only leak
data to sites the user already trusts with her data.
As an additional contribution, we study the design and implementation of a user
interface for red/green systems that is independent of the choice of virtualization versus
partitioning. Our design results in a small, external USB device that communicates the
state of the system (i.e, trusted or untrusted) to the user. The security display is beyond
the control of an adversary and cannot be spoofed or manipulated. Its simple interface
(providing essentially one bit of input and one bit of output) makes it easy to understand
and use, and overcomes the challenges in user-based attestation [26] to create a trusted
communication channel between the user and the red/green system.
We have implemented and evaluated a full prototype of our user interface (which
we call the Lockdown Verifier) plus Lockdown for Windows and Linux on commodity
x86 platforms (AMD and Intel). To the best of our knowledge, this represents the first
complete, end-to-end design, implementation and evaluation of a red/green system on
commodity platforms; we discuss related work in § 8. The Lockdown hypervisor im-
plementation has 10K lines of code, including the code on the Lockdown Verifier. The
small size and simple design supports our hypothesis that partitioning (instead of vir-
tualization) can improve security. Our evaluation also indicates that the performance of
untrusted applications is the same or better with partitioning (as opposed to virtualiza-
tion). Lockdown only imposes a 3% average overhead for memory and 2-7% overhead
for disk operations for untrusted applications. Virtualization on the other hand imposes
overhead for all platform hardware with the overhead ranging from 3-81% depend-
ing on the resources being virtualized (§ 7.2). The primary limitation of partitioning
on current systems is the time (13–31 seconds) needed to switch between the two en-
vironments. While we describe several potential optimizations that could significantly
reduce this time, whether this tradeoff between security, performance, and usability is
acceptable remains an open question.

2 Problem Definition
Goals. The goal of a red/green system is to enable a set of trusted software to com-
municate with a set of trusted sites while preserving the secrecy and integrity of these
applications and the data they handle. Protecting trusted software that does not require
network access is a strict subset of this goal. Ideally, this should be achieved without
modifying any hardware or software the user already employs. In other words, a user
should be able to run the same OS (e.g., Windows), launch her favorite browser (e.g.,
Internet Explorer) and connect to her preferred site (e.g., a banking website) via the
Internet in a highly secure manner while maintaining the current level of performance
for applications that are not security-sensitive.
Adversary Model. We assume the adversary can execute arbitrary code within the
untrusted environment and may also monitor and manipulate network traffic to and from
the user’s machine. However, we assume the adversary is remote and cannot perform
physical attacks on the user’s machine.
Assumptions. The first three assumptions below are necessary for any red/green sys-
tem. The last two are particular to Lockdown’s implementation. (i) Trusted Software
and Sites: As we discuss in § 3.2, we assume certain software packages and certain
websites can be trusted to not deliberately leak private data; (ii) Reference Monitor
Security: We assume that our reference monitor code does not contain vulnerabili-
ties. Reducing the complexity and amount of code in the reference monitor (as we do
with Lockdown) allows manual audits and formal analysis to validate this assumption;
(iii) User Abilities: We assume the user can be trained to perform security-sensitive
operations in the trusted environment; (iv) Hardware Support: We assume the user’s
computer supports Hardware Virtualization Extensions (with Nested Page Table sup-
port [10]) and contains a Trusted Platform Module [44] chip. Both technologies are
ubiquitous; and (v) Trusted BIOS: Lockdown uses the BIOS during its installation and
to reset devices, so we must assume the BIOS has not been corrupted. Fortunately, most
modern BIOSes require signed updates [32], preventing most forms of attack.

3 Lockdown’s Architecture
At a high level (Figure 1), Lockdown splits system execution into two environments,
trusted and untrusted, that execute non-concurrently. This design is based on the belief
[Figure 1 here: the untrusted and trusted environments, each with its own operating system and applications, run above the Lockdown hypervisor, which controls memory, devices, the CPU, and the TPM; the external Lockdown Verifier signals Secure/Insecure and toggles between the environments.]
Fig. 1. Lockdown System Architecture. Lockdown partitions the platform into two environ-
ments; only one environment executes at a time. An external device (which we call the Lock-
down Verifier) verifies the integrity of Lockdown, indicates which environment is active and can
be used to toggle between them. The shaded portions represent components that must be trusted
to maintain isolation between the environments.

that the user has a set of tasks (e.g., games, browsing for entertainment) that she wants
to run with maximum performance, and that she has a set of tasks that are security sen-
sitive (e.g., checking bank accounts, paying bills, making online purchases) which she
wants to run with maximum security and which are infrequent and less performance-
critical. The performance-sensitive applications run in the untrusted environment with
near-native speed, while security-sensitive applications run in the trusted environment,
which is kept pristine and protected by Lockdown. The Lockdown architecture is based
on two core concepts: (i) hyper-partitioning: system resources are partitioned as op-
posed to being virtualized. Among other benefits, this results in greater performance,
since it minimizes resource interpositioning, and it eliminates most side-channel attacks
possible with virtualization; and (ii) trusted environment protection: Lockdown lim-
its code execution in the trusted environment to a small set of trusted applications and
ensures that network communication is only permitted with trusted sites.

3.1 Hyper-partitioning
Since the untrusted environment may be infected with malware, Lockdown must iso-
late the trusted environment from the untrusted environment. Further, Lockdown must
isolate itself from both environments so that its functionality cannot be deliberately
or inadvertently modified. One way to achieve this isolation is to rely on the platform
hardware to partition resources. With platform capabilities such as Single-Root I/O Vir-
tualization (SR-IOV) [29] and additional hardware such as an IOMMU, it is possible
to assign physical devices directly to an environment (untrusted or trusted) [4,17]. This
hardware capability facilitates concurrent execution of multiple partitions without vir-
tualizing devices. Unfortunately, not all devices can currently be shared this way
(e.g., video, audio) [5], and such platform support is not widely available today [6,17].

CPU and Memory Partitioning. Lockdown partitions the CPU in time by only allow-
ing one environment to execute at a time. The available physical memory in the system
is partitioned into three areas: the Lockdown memory region, the untrusted environ-
ment’s memory region, and the trusted environment’s memory region¹. Lockdown employs
Nested Page Tables (NPT)² [10] to restrict each environment to its own memory
region. In other words, the NPT for the untrusted environment does not map physical
memory pages that belong to the trusted environment and vice versa. Further, it employs
hardware-based DMA-protection within each environment to prevent DMA-based ac-
cess beyond each environment’s memory regions.
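To make the memory side of hyper-partitioning concrete, the following C sketch models
an NPT as a flat guest-frame-to-host-frame map in which each environment's table simply
omits Lockdown's frames and the other environment's frames. The partition layout, sizes,
and function names are illustrative assumptions for this sketch, not values taken from
Lockdown; real NPTs are multi-level hardware structures.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE    4096u
#define TOTAL_FRAMES 1024u   /* a 4 MiB toy machine */

enum env { ENV_UNTRUSTED, ENV_TRUSTED };

/* Assumed layout: frames [0,256) Lockdown, [256,640) untrusted,
 * [640,1024) trusted. */
#define UNTRUSTED_BASE 256u
#define TRUSTED_BASE   640u

/* One table per environment; -1 means unmapped, so any access to that
 * frame causes a nested page fault into the hypervisor. */
static int32_t npt[2][TOTAL_FRAMES];

static void build_npt(enum env e)
{
    uint32_t base  = (e == ENV_UNTRUSTED) ? UNTRUSTED_BASE : TRUSTED_BASE;
    uint32_t limit = (e == ENV_UNTRUSTED) ? TRUSTED_BASE   : TOTAL_FRAMES;

    for (uint32_t gfn = 0; gfn < TOTAL_FRAMES; gfn++)
        npt[e][gfn] = -1;                 /* default: no mapping at all   */
    for (uint32_t hfn = base; hfn < limit; hfn++)
        npt[e][hfn] = (int32_t)hfn;       /* identity-map own frames only */
}

/* Guest-physical to host-physical: the walk the hardware performs. */
static int translate(enum env e, uint32_t gpa, uint32_t *hpa)
{
    uint32_t gfn = gpa / PAGE_SIZE;
    if (gfn >= TOTAL_FRAMES || npt[e][gfn] < 0)
        return -1;                        /* nested page fault            */
    *hpa = (uint32_t)npt[e][gfn] * PAGE_SIZE + gpa % PAGE_SIZE;
    return 0;
}

int main(void)
{
    uint32_t hpa;
    build_npt(ENV_UNTRUSTED);
    build_npt(ENV_TRUSTED);
    printf("untrusted -> own page:     %s\n",
           translate(ENV_UNTRUSTED, UNTRUSTED_BASE * PAGE_SIZE, &hpa) == 0
               ? "mapped" : "fault");
    printf("untrusted -> trusted page: %s\n",
           translate(ENV_UNTRUSTED, TRUSTED_BASE * PAGE_SIZE, &hpa) == 0
               ? "mapped" : "fault");
    return 0;
}

Because an environment's table simply lacks entries for foreign frames, isolation does
not depend on any access-control logic the guest could influence: the mappings for
those frames do not exist.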
Device Partitioning. With hyper-partitioning, both the untrusted and trusted environ-
ments use the same set of physical devices. Devices that do not store persistent data,
such as video, audio, and input devices, can be partitioned by saving and restoring their
states across environment switches. However, storage devices may contain persistent,
sensitive data from the trusted environment, or malicious data from the untrusted envi-
ronment. Thus, Lockdown ensures that each environment is provided with its own set
of storage devices and/or partitions. For example, Lockdown can assign a different hard
disk to each environment. Alternatively, Lockdown can assign a different partition on
the same hard disk to each environment. The challenge is to save and restore device
state in a device agnostic manner, and to partition storage devices without virtualizing
them, while providing strong isolation that cannot be bypassed by a malicious OS.
Lockdown leverages the Advanced Configuration and Power-management Interface
(ACPI) [14] to save and restore device states while partitioning non-storage devices.
The ACPI specification defines an ACPI subsystem (system BIOS and chipset) and an
Operating System Power Management (OSPM) subsystem. With an ACPI-compatible
OS, applications and device drivers interact with the OSPM code, which in turn inter-
acts with the low-level ACPI subsystem. ACPI defines four system sleep states which
an ACPI-compliant computer system can be in: S1 (power is maintained to all system
components, but the CPU stops executing instructions), S2 (the CPU is powered off),
S3 (standby), and S4 (hibernation: all of main memory is saved to the hard disk and
the system is powered down). Figure 2a shows how an OSPM handles ACPI Sleep
States S3 and S4. When a sleep command is initiated (e.g., when the user closes the
lid on a laptop), the OSPM first informs all currently executing user and kernel-mode
applications and drivers about the sleep signal. They, in turn, store the configuration
information needed to restore the system when it awakes. The device drivers use the OSPM
subsystem to set desired device power levels. The OSPM then signals the ACPI sub-
system, which ultimately performs chipset-specific operations to transition the system
into the desired sleep state. The OSPM polls the ACPI subsystem for a wake signal to
determine when it should reverse the process and wake the system. Note that with this
scheme, Lockdown does not need to include any device drivers or interpose on device
operations. The OS contains all the required drivers that deal directly with the devices
for normal operation and for saving and restoring device states.
¹ An implementation using the ACPI S4 state for hyper-partitioning (§ 6) requires only
two memory regions, Lockdown and the current environment (untrusted or trusted), since
ACPI S4 saves the current environment's memory contents to disk and restores them from
there.
² Also termed Extended Page Tables on Intel platforms.
[Figure 2 diagrams: (a) the OSPM sleep flow (Initiate Sleep; Store Current
Configuration; Set Device Power Levels; Invoke ACPI Subsystem; Poll ACPI Subsystem;
Restore Configuration and Power; Resume); (b) Lockdown intercepting disk-controller
device selection and redirecting to the untrusted or trusted disk; (c) the same
sleep/wake sequence in both environments, with Lockdown intercepting the sleep command,
updating the Lockdown Verifier, and switching.]

Fig. 2. Hyper-Partitioning. (a) Lockdown leverages the Advanced Configuration and Power-
management Interface (ACPI) OS sleep mechanism to partition (by saving and restoring states)
non-storage system devices while being device agnostic. (b) Storage devices (e.g., disk) are par-
titioned by intercepting the device selection requests and redirecting device operations to the
appropriate device, based on the current environment. (c) Environment switching is performed
upon receiving a command from the Lockdown Verifier. The OS ACPI sleep steps are modified
by Lockdown to transition between environments (untrusted and trusted).
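To make the interception point of Figure 2 concrete: under ACPI, the OSPM enters a
sleep state by writing the SLP_TYP value with the SLP_EN bit set to the PM1a control
register, so a hypervisor that traps guest I/O to that register sees the request before
the chipset does. In the sketch below, the port number, handler wiring, and function
names are illustrative assumptions; only the SLP_EN/SLP_TYP bit layout comes from the
ACPI specification.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PM1A_CNT_PORT 0x404u      /* assumed port; the real one comes from the ACPI FADT */
#define SLP_EN        (1u << 13)  /* "enter sleep now" bit (ACPI PM1 control register)   */
#define SLP_TYP(v)    (((v) >> 10) & 0x7u)  /* requested sleep type, bits 10-12          */

static bool switch_pending;       /* set when the Lockdown Verifier requests a switch    */

/* Forward the write to the real chipset register (stubbed for the sketch). */
static void pass_through_io(uint16_t port, uint16_t value)
{
    printf("chipset: port 0x%x <- 0x%x, system sleeps\n",
           (unsigned)port, (unsigned)value);
}

/* Steps 3-5 of Fig. 2c: reset devices, update the verifier, wake the other side. */
static void begin_environment_switch(uint16_t slp_typ)
{
    printf("lockdown: sleep intercepted (SLP_TYP=%u); resetting devices, "
           "updating verifier, waking other environment\n", (unsigned)slp_typ);
}

/* Hypervisor I/O-intercept handler, invoked on a guest OUT instruction. */
static void handle_guest_out(uint16_t port, uint16_t value)
{
    if (port == PM1A_CNT_PORT && (value & SLP_EN) && switch_pending)
        begin_environment_switch(SLP_TYP(value)); /* write never reaches chipset */
    else
        pass_through_io(port, value);             /* an ordinary sleep proceeds  */
}

int main(void)
{
    switch_pending = true;  /* user toggled the switch on the Lockdown Verifier    */
    handle_guest_out(PM1A_CNT_PORT, (1u << 10) | SLP_EN); /* OSPM issues sleep;
                                                             SLP_TYP is firmware-
                                                             specific              */
    return 0;
}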

Lockdown efficiently partitions storage devices by interposing on device selection,
rather than device usage. It takes advantage of the fact that modern storage devices rely
on a controller that implements the storage protocol (e.g., ATA, SATA) and directs stor-
age operations to the attached devices. When the operating system writes to the storage
controller’s I/O registers (a standard set for a given controller type), Lockdown inter-
cepts the write and manipulates the device controller to select the appropriate device for
the currently executing environment (see Figure 2b). All other device operations (e.g.,
reads and writes) proceed unimpeded by Lockdown. A similar scheme can be adopted
for two partitions on the same hard disk by manipulating sector requests. Our evaluation
(§ 7) shows that interposing on device/sector selection has a minimal effect on perfor-
mance. Since we assume the BIOS is trusted (§ 2), we can be sure that Lockdown will
always be started first, and hence will always maintain its protections over the trusted
disk.
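As a minimal sketch of interposing on selection rather than usage, consider a legacy
ATA controller, where a write to the drive/head register (port 0x1F6 on the primary
channel) selects device 0 or 1 via bit 4. A write-intercept on that single register can
pin each environment to its own disk while leaving the data path untouched. The
environment-to-device assignment below is an assumption for illustration; Lockdown's
actual controller logic is not reproduced here.

#include <stdint.h>
#include <stdio.h>

#define ATA_DRIVE_HEAD_PORT 0x1F6u    /* primary-channel drive/head register  */
#define ATA_DEV_BIT         (1u << 4) /* device select: 0 = master, 1 = slave */

enum env { ENV_UNTRUSTED, ENV_TRUSTED };
static enum env current_env = ENV_UNTRUSTED;

/* Forward the (possibly rewritten) value to the real controller (stubbed). */
static void controller_write(uint16_t port, uint8_t value)
{
    printf("controller: port 0x%X <- 0x%02X, device %u selected\n",
           (unsigned)port, (unsigned)value, (value & ATA_DEV_BIT) ? 1u : 0u);
}

/* Write-intercept: assume the untrusted environment owns device 0 and the
 * trusted environment owns device 1. Only this register is intercepted;
 * data-path reads and writes proceed unimpeded. */
static void handle_guest_out8(uint16_t port, uint8_t value)
{
    if (port == ATA_DRIVE_HEAD_PORT) {
        if (current_env == ENV_TRUSTED)
            value |= ATA_DEV_BIT;                 /* pin to trusted disk   */
        else
            value &= (uint8_t)~ATA_DEV_BIT;       /* pin to untrusted disk */
    }
    controller_write(port, value);
}

int main(void)
{
    handle_guest_out8(ATA_DRIVE_HEAD_PORT, 0xA0); /* OS selects device 0   */
    current_env = ENV_TRUSTED;
    handle_guest_out8(ATA_DRIVE_HEAD_PORT, 0xA0); /* same write now lands
                                                     on device 1           */
    return 0;
}

Interposing on this one register write, instead of on every transfer, is what keeps the
data path at native speed, consistent with the minimal overhead reported in § 7.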
Environment Switching. Lockdown performs an environment switch by transitioning
the current environment to sleep and waking up the other. Figure 2c shows the steps
taken for an environment switch, assuming the user starts in the untrusted environment.
When the user toggles the switch on the trusted Lockdown Verifier to initiate a switch to
the trusted environment (Step 1), the Lockdown Verifier communicates with Lockdown
which in turn instructs the OSPM in the untrusted environment to put the system to
sleep (Step 2). When the OSPM in the untrusted environment issues the sleep command
to the ACPI Subsystem, Lockdown intercepts the command (Step 3), resets all devices,
updates the output on the Lockdown Verifier (Step 4), and issues a wake command to the
OSPM in the trusted environment (Step 5). Switching back to the untrusted environment
follows an analogous procedure.
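Read as a protocol, the switch is a short linear sequence. The trace below restates the
five steps in C; every function name and message is illustrative shorthand for a step in
the text, not code from Lockdown itself.

#include <stdio.h>

enum env { ENV_UNTRUSTED, ENV_TRUSTED };
static const char *name(enum env e)
{ return e == ENV_TRUSTED ? "trusted" : "untrusted"; }

static enum env active = ENV_UNTRUSTED;

static void switch_environment(void)
{
    enum env next = (active == ENV_UNTRUSTED) ? ENV_TRUSTED : ENV_UNTRUSTED;
    printf("1. verifier: switch toggled\n");
    printf("2. lockdown -> OSPM (%s): initiate sleep\n", name(active));
    printf("3. lockdown: intercept sleep command, reset devices\n");
    printf("4. lockdown -> verifier: display '%s'\n",
           next == ENV_TRUSTED ? "secure" : "insecure");
    printf("5. lockdown -> OSPM (%s): wake\n", name(next));
    active = next;
}

int main(void)
{
    switch_environment();   /* untrusted -> trusted */
    switch_environment();   /* trusted -> untrusted */
    return 0;
}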

3.2 Trusted Environment Protection


Lockdown’s trusted environment runs a commodity OS and applications. Lockdown
verifies the integrity of all the files of the trusted environment during Lockdown’s in-
stallation. Further, Lockdown trusts the software in the trusted environment to not leak
data deliberately. However, vulnerabilities within the OS or an application in the trusted
environment can be exploited either locally or remotely to execute malicious code. Fur-
ther, since the trusted environment and untrusted environment use the same devices, the
untrusted environment could change a device’s firmware to act maliciously. Lockdown
uses approved code execution and network protection to ensure that only trusted code
(including device firmware code) can be executed and only trusted sites can be visited
while in the trusted environment, as explained below.
Approved Code Execution. For non-firmware code, Lockdown uses Nested Page Ta-
bles (NPT) to enforce a W ⊕ X policy on physical memory pages used within the
trusted environment. Thus, a page within the trusted environment may be executed or
written, but not both. Prior to converting a page to executable status, Lockdown checks
the memory region against a list of trusted software (§ 3.2 describes how this list is es-
tablished). Execution is permitted only if this check succeeds. Previous work enforces
a similar policy only on the kernel [37], or uses it to determine what applications are
running [21]. In contrast, Lockdown uses these page protections to restrict the OS and
the applications to a limited set of trusted code. For device firmware code, Lockdown,
during installation, scans all installed hardware and enumerates all system and device
firmware code regions. It assumes this code has not yet been tampered with and uses
NPTs to prevent either environment from writing to these regions.
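A hedged sketch of the approved-execution check: when the trusted environment first
attempts to execute a writable page, the fault handler hashes the page, consults the
trusted-software list, and flips the page from writable to executable only on a match.
The FNV-1a hash and the one-entry in-memory whitelist below are stand-ins for the
cryptographic page hashes and the install-time list described above.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

struct npte { uint8_t writable, executable; };   /* W xor X, never both */

static uint64_t fnv1a(const uint8_t *p, uint32_t n)
{
    uint64_t h = 0xcbf29ce484222325ull;
    while (n--) { h ^= *p++; h *= 0x100000001b3ull; }
    return h;
}

static uint64_t approved[1];   /* hashes of trusted software pages */

static int is_approved(uint64_t h)
{
    for (uint32_t i = 0; i < sizeof approved / sizeof approved[0]; i++)
        if (approved[i] == h) return 1;
    return 0;
}

/* Execute fault on a writable (hence non-executable) page: approve or deny. */
static int on_execute_fault(struct npte *pte, const uint8_t *page)
{
    if (!is_approved(fnv1a(page, PAGE_SIZE)))
        return -1;              /* unknown code: execution stays denied   */
    pte->writable = 0;          /* W -> X: page is now immutable and      */
    pte->executable = 1;        /* executable; the guest retries the fetch */
    return 0;
}

int main(void)
{
    static uint8_t page[PAGE_SIZE];          /* stand-in for trusted code */
    struct npte pte = { 1, 0 };
    approved[0] = fnv1a(page, PAGE_SIZE);    /* "install-time" whitelist  */
    printf("execute fault: %s\n",
           on_execute_fault(&pte, page) == 0 ? "approved" : "denied");
    return 0;
}

Under the same policy, a later write to a page that has been made executable would
fault, and the page would revert to writable, non-executable status before the modified
code could run.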
Network Protection. Since users perform many security-sensitive activities online, ap-
plications executing in the trusted environment need to communicate with remote sites
via the network. However, permitting network communication exposes the trusted en-
vironment to external attacks. Remote attackers may exploit flaws in the OS’s network
stack, or the user may inadvertently access a malicious site, or a network-based attacker
may perform SSL-based attacks (e.g., tricking a user into accepting a bogus certificate).
While approved code execution prevents many code-based attacks, the trusted environ-
ment may still be vulnerable to script-based attacks (e.g., Javascript) and return-oriented
programming attacks [38].
To forestall such attacks, Lockdown restricts the trusted environment to communi-
cate only with a limited set of trusted sites. It imposes these restrictions by interposing