
OPERATING
SYSTEM
CONCEPTS
TENTH EDITION

ABRAHAM SILBERSCHATZ
Yale University

PETER BAER GALVIN
Cambridge Computer and Starfish Storage

GREG GAGNE
Westminster College
Publisher Laurie Rosatone
Editorial Director Don Fowley
Development Editor Ryann Dannelly
Freelance Developmental Editor Chris Nelson/Factotum
Executive Marketing Manager Glenn Wilson
Senior Content Manager Valerie Zaborski
Senior Production Editor Ken Santor
Media Specialist Ashley Patterson
Editorial Assistant Anna Pham
Cover Designer Tom Nery
Cover art © metha189/Shutterstock

This book was set in Palatino by the author using LaTeX and printed and bound by LSC Kendallville.
The cover was printed by LSC Kendallville.

Copyright © 2018, 2013, 2012, 2008 John Wiley & Sons, Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by
any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted
under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written
permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the
Copyright Clearance Center, Inc. 222 Rosewood Drive, Danvers, MA 01923, (978)750-8400, fax
(978)750-4470. Requests to the Publisher for permission should be addressed to the Permissions
Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030 (201)748-6011, fax (201)748-
6008, E-Mail: [email protected].

Evaluation copies are provided to qualified academics and professionals for review purposes only, for use
in their courses during the next academic year. These copies are licensed and may not be sold or
transferred to a third party. Upon completion of the review period, please return the evaluation copy to
Wiley. Return instructions and a free-of-charge return shipping label are available at
www.wiley.com/go/evalreturn. Outside of the United States, please contact your local representative.

Library of Congress Cataloging-in-Publication Data

Names: Silberschatz, Abraham, author. | Galvin, Peter B., author. | Gagne, Greg, author.
Title: Operating system concepts / Abraham Silberschatz, Yale University,
Peter Baer Galvin, Pluribus Networks, Greg Gagne, Westminster College.
Description: 10th edition. | Hoboken, NJ : Wiley, [2018] | Includes
bibliographical references and index. |
Identifiers: LCCN 2017043464 (print) | LCCN 2017045986 (ebook) | ISBN
9781119320913 (enhanced ePub)
Subjects: LCSH: Operating systems (Computers)
Classification: LCC QA76.76.O63 (ebook) | LCC QA76.76.O63 S55825 2018 (print)
| DDC 005.4/3--dc23
LC record available at https://lccn.loc.gov/2017043464

The inside back cover will contain printing identification and country of origin if omitted from this page. In
addition, if the ISBN on the back cover differs from the ISBN on this page, the one on the back cover is
correct.
Enhanced ePub ISBN 978-1-119-32091-3
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
To my children, Lemor, Sivan, and Aaron
and my Nicolette

Avi Silberschatz

To my wife, Carla,
and my children, Gwen, Owen, and Maddie

Peter Baer Galvin

To my wife, Pat,
and our sons, Tom and Jay

Greg Gagne
Preface
Operating systems are an essential part of any computer system. Similarly, a
course on operating systems is an essential part of any computer science edu-
cation. This field is undergoing rapid change, as computers are now prevalent
in virtually every arena of day-to-day life—from embedded devices in auto-
mobiles through the most sophisticated planning tools for governments and
multinational firms. Yet the fundamental concepts remain fairly clear, and it is
on these that we base this book.
We wrote this book as a text for an introductory course in operating sys-
tems at the junior or senior undergraduate level or at the first-year graduate
level. We hope that practitioners will also find it useful. It provides a clear
description of the concepts that underlie operating systems. As prerequisites,
we assume that the reader is familiar with basic data structures, computer
organization, and a high-level language, such as C or Java. The hardware topics
required for an understanding of operating systems are covered in Chapter 1.
In that chapter, we also include an overview of the fundamental data structures
that are prevalent in most operating systems. For code examples, we use pre-
dominantly C, as well as a significant amount of Java, but the reader can still
understand the algorithms without a thorough knowledge of these languages.
Concepts are presented using intuitive descriptions. Important theoretical
results are covered, but formal proofs are largely omitted. The bibliographical
notes at the end of each chapter contain pointers to research papers in which
results were first presented and proved, as well as references to recent material
for further reading. In place of proofs, figures and examples are used to suggest
why we should expect the result in question to be true.
The fundamental concepts and algorithms covered in the book are often
based on those used in both open-source and commercial operating systems.
Our aim is to present these concepts and algorithms in a general setting that
is not tied to one particular operating system. However, we present a large
number of examples that pertain to the most popular and the most innovative
operating systems, including Linux, Microsoft Windows, Apple macOS (the
original name, OS X, was changed in 2016 to match the naming scheme of other
Apple products), and Solaris. We also include examples of both Android and
iOS, currently the two dominant mobile operating systems.
The organization of the text reflects our many years of teaching courses
on operating systems. Consideration was also given to the feedback provided

by the reviewers of the text, along with the many comments and suggestions
we received from readers of our previous editions and from our current and
former students. This Tenth Edition also reflects most of the curriculum guide-
lines in the operating-systems area in Computer Science Curricula 2013, the most
recent curriculum guidelines for undergraduate degree programs in computer
science published by the IEEE Computer Society and the Association for Com-
puting Machinery (ACM).

What’s New in This Edition


For the Tenth Edition, we focused on revisions and enhancements aimed at
lowering costs to the students, better engaging them in the learning process,
and providing increased support for instructors.
According to the publishing industry’s most trusted market research firm,
Outsell, 2015 represented a turning point in text usage: for the first time,
student preference for digital learning materials was higher than for print, and
the increase in preference for digital has been accelerating since.
While print remains important for many students as a pedagogical tool, the
Tenth Edition is being delivered in forms that emphasize support for learning
from digital materials. All forms we are providing dramatically reduce the cost
to students compared to the Ninth Edition. These forms are:
• Stand-alone e-text now with significant enhancements. The e-text format
for the Tenth Edition adds exercises with solutions at the ends of main
sections, hide/reveal definitions for key terms, and a number of animated
figures. It also includes additional “Practice Exercises” with solutions for
each chapter, extra exercises, programming problems and projects, “Fur-
ther Reading” sections, a complete glossary, and four appendices for legacy
operating systems.
• E-text with print companion bundle. For a nominal additional cost, the
e-text also is available with an abridged print companion that includes
a loose-leaf copy of the main chapter text, end-of-chapter “Practice Exer-
cises” (solutions available online), and “Further Reading” sections. Instruc-
tors may also order bound print companions for the bundled package by
contacting their Wiley account representative.
Although we highly encourage all instructors and students to take advantage
of the cost, content, and learning advantages of the e-text edition, it is possible
for instructors to work with their Wiley Account Manager to create a custom
print edition.
To explore these options further or to discuss other options, contact your
Wiley account manager (http://www.wiley.com/go/whosmyrep) or visit the
product information page for this text on wiley.com.

Book Material
The book consists of 21 chapters and 4 appendices. Each chapter and appendix
contains the text, as well as the following enhancements:

• A set of practice exercises, including solutions
• A set of regular exercises
• A set of programming problems
• A set of programming projects
• A Further Reading section
• Pop-up definitions of important (blue) terms
• A glossary of important terms
• Animations that describe specific key concepts

A hard copy of the text is available in book stores and online. That version has
the same text chapters as the electronic version. It does not, however, include
the appendices, the regular exercises, the solutions to the practice exercises,
the programming problems, the programming projects, and some of the other
enhancements found in this ePub electronic book.

Content of This Book


The text is organized in ten major parts:
• Overview. Chapters 1 and 2 explain what operating systems are, what
they do, and how they are designed and constructed. These chapters dis-
cuss what the common features of an operating system are and what an
operating system does for the user. We include coverage of both tradi-
tional PC and server operating systems and operating systems for mobile
devices. The presentation is motivational and explanatory in nature. We
have avoided a discussion of how things are done internally in these chap-
ters. Therefore, they are suitable for individual readers or for students in
lower-level classes who want to learn what an operating system is without
getting into the details of the internal algorithms.
• Process management. Chapters 3 through 5 describe the process concept
and concurrency as the heart of modern operating systems. A process is
the unit of work in a system. Such a system consists of a collection of
concurrently executing processes, some executing operating-system code
and others executing user code. These chapters cover methods for process
scheduling and interprocess communication. Also included is a detailed
discussion of threads, as well as an examination of issues related to multi-
core systems and parallel programming.
• Process synchronization. Chapters 6 through 8 cover methods for process
synchronization and deadlock handling. Because we have increased the
coverage of process synchronization, we have divided the former Chapter
5 (Process Synchronization) into two separate chapters: Chapter 6, Syn-
chronization Tools, and Chapter 7, Synchronization Examples.
• Memory management. Chapters 9 and 10 deal with the management of
main memory during the execution of a process. To improve both the
utilization of the CPU and the speed of its response to its users, the com-
puter must keep several processes in memory. There are many different
memory-management schemes, reflecting various approaches to memory
management, and the effectiveness of a particular algorithm depends on
the situation.
• Storage management. Chapters 11 and 12 describe how mass storage and
I/O are handled in a modern computer system. The I/O devices that attach
to a computer vary widely, and the operating system needs to provide a
wide range of functionality to applications to allow them to control all
aspects of these devices. We discuss system I/O in depth, including I/O
system design, interfaces, and internal system structures and functions.
In many ways, I/O devices are the slowest major components of the com-
puter. Because they represent a performance bottleneck, we also examine
performance issues associated with I/O devices.
• File systems. Chapters 13 through 15 discuss how file systems are handled
in a modern computer system. File systems provide the mechanism for on-
line storage of and access to both data and programs. We describe the clas-
sic internal algorithms and structures of storage management and provide
a firm practical understanding of the algorithms used—their properties,
advantages, and disadvantages.
• Security and protection. Chapters 16 and 17 discuss the mechanisms nec-
essary for the security and protection of computer systems. The processes
in an operating system must be protected from one another’s activities.
To provide such protection, we must ensure that only processes that have
gained proper authorization from the operating system can operate on
the files, memory, CPU, and other resources of the system. Protection is
a mechanism for controlling the access of programs, processes, or users
to computer-system resources. This mechanism must provide a means
of specifying the controls to be imposed, as well as a means of enforce-
ment. Security protects the integrity of the information stored in the system
(both data and code), as well as the physical resources of the system, from
unauthorized access, malicious destruction or alteration, and accidental
introduction of inconsistency.
• Advanced topics. Chapters 18 and 19 discuss virtual machines and
networks/distributed systems. Chapter 18 provides an overview of
virtual machines and their relationship to contemporary operating
systems. Included is a general description of the hardware and software
techniques that make virtualization possible. Chapter 19 provides an
overview of computer networks and distributed systems, with a focus on
the Internet and TCP/IP.
• Case studies. Chapters 20 and 21 present detailed case studies of two real
operating systems—Linux and Windows 10.
• Appendices. Appendix A discusses several old influential operating sys-
tems that are no longer in use. Appendices B through D cover in great
detail three older operating systems—Windows 7, BSD, and Mach.

Programming Environments

The text provides several example programs written in C and Java. These
programs are intended to run in the following programming environments:
• POSIX. POSIX (which stands for Portable Operating System Interface) repre-
sents a set of standards implemented primarily for UNIX-based operat-
ing systems. Although Windows systems can also run certain POSIX pro-
grams, our coverage of POSIX focuses on Linux and UNIX systems. POSIX-
compliant systems must implement the POSIX core standard (POSIX.1);
Linux and macOS are examples of POSIX-compliant systems. POSIX also
defines several extensions to the standards, including real-time extensions
(POSIX.1b) and an extension for a threads library (POSIX.1c, better known
as Pthreads). We provide several programming examples written in C
illustrating the POSIX base API, as well as Pthreads and the extensions for
real-time programming. These example programs were tested on Linux 4.4
and macOS 10.11 systems using the gcc compiler.
• Java. Java is a widely used programming language with a rich API and
built-in language support for concurrent and parallel programming. Java
programs run on any operating system supporting a Java virtual machine
(or JVM). We illustrate various operating-system and networking concepts
with Java programs tested using Version 1.8 of the Java Development Kit
(JDK).
• Windows systems. The primary programming environment for Windows
systems is the Windows API, which provides a comprehensive set of func-
tions for managing processes, threads, memory, and peripheral devices.
We supply a modest number of C programs illustrating the use of this API.
Programs were tested on a system running Windows 10.

We have chosen these three programming environments because we believe
that they best represent the two most popular operating-system
models—Linux/UNIX and Windows—along with the widely used Java
environment. Most programming examples are written in C, and we expect
readers to be comfortable with this language. Readers familiar with both the
C and Java languages should easily understand most programs provided in
this text.
In some instances—such as thread creation—we illustrate a specific con-
cept using all three programming environments, allowing the reader to con-
trast the three different libraries as they address the same task. In other situa-
tions, we may use just one of the APIs to demonstrate a concept. For example,
we illustrate shared memory using just the POSIX API; socket programming in
TCP/IP is highlighted using the Java API.

Linux Virtual Machine


To help students gain a better understanding of the Linux system, we pro-
vide a Linux virtual machine running the Ubuntu distribution with this text.
The virtual machine, which is available for download from the text website
(http://www.os-book.com), also provides development environments includ-
ing the gcc and Java compilers. Most of the programming assignments in the
book can be completed using this virtual machine, with the exception of assign-
ments that require the Windows API. The virtual machine can be installed and
run on any host operating system that can run the VirtualBox virtualization
software, which currently includes Windows 10, Linux, and macOS.

The Tenth Edition


As we wrote this Tenth Edition of Operating System Concepts, we were guided by
the sustained growth in four fundamental areas that affect operating systems:
1. Mobile operating systems
2. Multicore systems
3. Virtualization
4. Nonvolatile memory secondary storage
To emphasize these topics, we have integrated relevant coverage throughout
this new edition. For example, we have greatly increased our coverage of the
Android and iOS mobile operating systems, as well as our coverage of the
ARMv8 architecture that dominates mobile devices. We have also increased
our coverage of multicore systems, including increased coverage of APIs that
provide support for concurrency and parallelism. Nonvolatile memory devices
like SSDs are now treated as the equals of hard-disk drives in the chapters that
discuss I/O, mass storage, and file systems.
Several of our readers have expressed support for an increase in Java
coverage, and we have provided additional Java examples throughout this
edition.
Additionally, we have rewritten material in almost every chapter by bring-
ing older material up to date and removing material that is no longer interest-
ing or relevant. We have reordered many chapters and have, in some instances,
moved sections from one chapter to another. We have also greatly revised
the artwork, creating several new figures as well as modifying many existing
figures.

Major Changes
The Tenth Edition update encompasses much more material than previous
updates, in terms of both content and new supporting material. Next, we
provide a brief outline of the major content changes in each chapter:

• Chapter 1: Introduction includes updated coverage of multicore systems,
as well as new coverage of NUMA systems and Hadoop clusters. Old
material has been updated, and new motivation has been added for the
study of operating systems.
• Chapter 2: Operating-System Structures provides a significantly revised
discussion of the design and implementation of operating systems. We
have updated our treatment of Android and iOS and have revised our
coverage of the system boot process with a focus on GRUB for Linux
systems. New coverage of the Windows subsystem for Linux is included
as well. We have added new sections on linkers and loaders, and we now
discuss why applications are often operating-system specific. Finally, we
have added a discussion of the BCC debugging toolset.
• Chapter 3: Processes simplifies the discussion of scheduling so that it
now includes only CPU scheduling issues. New coverage describes the
memory layout of a C program, the Android process hierarchy, Mach
message passing, and Android RPCs. We have also replaced coverage of
the traditional UNIX/Linux init process with coverage of systemd.
• Chapter 4: Threads and Concurrency (previously Threads) increases the
coverage of support for concurrent and parallel programming at the API
and library level. We have revised the section on Java threads so that it
now includes futures and have updated the coverage of Apple’s Grand
Central Dispatch so that it now includes Swift. New sections discuss fork-
join parallelism using the fork-join framework in Java, as well as Intel
thread building blocks.
• Chapter 5: CPU Scheduling (previously Chapter 6) revises the coverage of
multilevel queue and multicore processing scheduling. We have integrated
coverage of NUMA-aware scheduling issues throughout, including how
this scheduling affects load balancing. We also discuss related modifica-
tions to the Linux CFS scheduler. New coverage combines discussions of
round-robin and priority scheduling, heterogeneous multiprocessing, and
Windows 10 scheduling.
• Chapter 6: Synchronization Tools (previously part of Chapter 5, Process
Synchronization) focuses on various tools for synchronizing processes.
Significant new coverage discusses architectural issues such as instruction
reordering and delayed writes to buffers. The chapter also introduces lock-
free algorithms using compare-and-swap (CAS) instructions. No specific
APIs are presented; rather, the chapter provides an introduction to race
conditions and general tools that can be used to prevent data races. Details
include new coverage of memory models, memory barriers, and liveness
issues.
• Chapter 7: Synchronization Examples (previously part of Chapter 5,
Process Synchronization) introduces classical synchronization problems
and discusses specific API support for designing solutions that solve
these problems. The chapter includes new coverage of POSIX named and
unnamed semaphores, as well as condition variables. A new section on
Java synchronization is included as well.
• Chapter 8: Deadlocks (previously Chapter 7) provides minor updates,
including a new section on livelock and a discussion of deadlock as an
example of a liveness hazard. The chapter includes new coverage of the
Linux lockdep and the BCC deadlock detector tools, as well as coverage
of Java deadlock detection using thread dumps.
• Chapter 9: Main Memory (previously Chapter 8) includes several revi-
sions that bring the chapter up to date with respect to memory manage-
ment on modern computer systems. We have added new coverage of the
ARMv8 64-bit architecture, updated the coverage of dynamic link libraries,
and changed swapping coverage so that it now focuses on swapping pages
rather than processes. We have also eliminated coverage of segmentation.
• Chapter 10: Virtual Memory (previously Chapter 9) contains several revi-
sions, including updated coverage of memory allocation on NUMA systems
and global allocation using a free-frame list. New coverage includes com-
pressed memory, major/minor page faults, and memory management in
Linux and Windows 10.
• Chapter 11: Mass-Storage Structure (previously Chapter 10) adds cover-
age of nonvolatile memory devices, such as flash and solid-state disks.
Hard-drive scheduling is simplified to show only currently used algo-
rithms. Also included are a new section on cloud storage, updated RAID
coverage, and a new discussion of object storage.
• Chapter 12, I/O (previously Chapter 13) updates the coverage of
technologies and performance numbers, expands the coverage of
synchronous/asynchronous and blocking/nonblocking I/O, and adds a
section on vectored I/O. It also expands coverage of power management
for mobile operating systems.
• Chapter 13: File-System Interface (previously Chapter 11) has been
updated with information about current technologies. In particular, the
coverage of directory structures has been improved, and the coverage of
protection has been updated. The memory-mapped files section has been
expanded, and a Windows API example has been added to the discussion
of shared memory. The ordering of topics is refactored in Chapters 13
and 14.
• Chapter 14: File-System Implementation (previously Chapter 12) has
been updated with coverage of current technologies. The chapter now
includes discussions of TRIM and the Apple File System. In addition, the
discussion of performance has been updated, and the coverage of journal-
ing has been expanded.
• Chapter 15: File System Internals is new and contains updated informa-
tion from previous Chapters 11 and 12.
• Chapter 16: Security (previously Chapter 15) now precedes the protec-
tion chapter. It includes revised and updated terms for current security
threats and solutions, including ransomware and remote access tools. The
principle of least privilege is emphasized. Coverage of code-injection vul-
nerabilities and attacks has been revised and now includes code samples.
Discussion of encryption technologies has been updated to focus on the
technologies currently used. Coverage of authentication (by passwords
and other methods) has been updated and expanded with helpful hints.
Additions include a discussion of address-space layout randomization and
a new summary of security defenses. The Windows 7 example has been
updated to Windows 10.
• Chapter 17: Protection (previously Chapter 14) contains major changes.
The discussion of protection rings and layers has been updated and now
refers to the Bell–LaPadula model and explores the ARM model of Trust-
Zones and Secure Monitor Calls. Coverage of the need-to-know principle
has been expanded, as has coverage of mandatory access control. Subsec-
tions on Linux capabilities, Darwin entitlements, security integrity protec-
tion, system-call filtering, sandboxing, and code signing have been added.
Coverage of run-time-based enforcement in Java has also been added,
including the stack inspection technique.
• Chapter 18: Virtual Machines (previously Chapter 16) includes added
details about hardware assistance technologies. Also expanded is the
topic of application containment, now including containers, zones, docker,
and Kubernetes. A new section discusses ongoing virtualization research,
including unikernels, library operating systems, partitioning hypervisors,
and separation hypervisors.
• Chapter 19, Networks and Distributed Systems (previously Chapter 17)
has been substantially updated and now combines coverage of computer
networks and distributed systems. The material has been revised to bring
it up to date with respect to contemporary computer networks and dis-
tributed systems. The TCP/IP model receives added emphasis, and a dis-
cussion of cloud storage has been added. The section on network topolo-
gies has been removed. Coverage of name resolution has been expanded
and a Java example added. The chapter also includes new coverage of dis-
tributed file systems, including MapReduce on top of Google file system,
Hadoop, GPFS, and Lustre.
• Chapter 20: The Linux System (previously Chapter 18) has been updated
to cover the Linux 4.i kernel.
• Chapter 21: The Windows 10 System is a new chapter that covers the
internals of Windows 10.
• Appendix A: Influential Operating Systems has been updated to include
material from chapters that are no longer covered in the text.

Supporting Website

When you visit the website supporting this text at http://www.os-book.com,
you can download the following resources:
• Linux virtual machine
• C and Java source code
• The complete set of figures and illustrations
• FreeBSD, Mach, and Windows 7 case studies

• Errata
• Bibliography

Notes to Instructors

On the website for this text, we provide several sample syllabi that suggest var-
ious approaches for using the text in both introductory and advanced courses.

As a general rule, we encourage instructors to progress sequentially through
the chapters, as this strategy provides the most thorough study of operating
systems. However, by using the sample syllabi, an instructor can select a
different ordering of chapters (or subsections of chapters).
In this edition, we have added many new written exercises and pro-
gramming problems and projects. Most of the new programming assignments
involve processes, threads, process scheduling, process synchronization, and
memory management. Some involve adding kernel modules to the Linux sys-
tem, which requires using either the Linux virtual machine that accompanies
this text or another suitable Linux distribution.
Solutions to written exercises and programming assignments are avail-
able to instructors who have adopted this text for their operating-system
class. To obtain these restricted supplements, contact your local John Wiley &
Sons sales representative. You can find your Wiley representative by going to
http://www.wiley.com/college and clicking “Who’s my rep?”

Notes to Students
We encourage you to take advantage of the practice exercises that appear at the
end of each chapter. We also encourage you to read through the study guide,
which was prepared by one of our students. Finally, for students who are unfa-
miliar with UNIX and Linux systems, we recommend that you download and
install the Linux virtual machine that we include on the supporting website.
Not only will this provide you with a new computing experience, but the open-
source nature of Linux will allow you to easily examine the inner details of this
popular operating system. We wish you the very best of luck in your study of
operating systems!

Contacting Us
We have endeavored to eliminate typos, bugs, and the like from the text. But,
as in new releases of software, bugs almost surely remain. An up-to-date errata
list is accessible from the book’s website. We would be grateful if you would
notify us of any errors or omissions in the book that are not on the current list
of errata.
We would be glad to receive suggestions on improvements to the book.
We also welcome any contributions to the book website that could be of use
to other readers, such as programming exercises, project suggestions, on-line
labs and tutorials, and teaching tips. E-mail should be addressed to
os-book-authors@cs.yale.edu.

Acknowledgments

Many people have helped us with this Tenth Edition, as well as with the
previous nine editions from which it is derived.

Tenth Edition
• Rick Farrow provided expert advice as a technical editor.
• Jonathan Levin helped out with coverage of mobile systems, protection,
and security.
• Alex Ionescu updated the previous Windows 7 chapter to provide Chapter
21: Windows 10.
• Sarah Diesburg revised Chapter 19: Networks and Distributed Systems.
• Brendan Gregg provided guidance on the BCC toolset.
• Richard Stallman (RMS) supplied feedback on the description of free and
open-source software.
• Robert Love provided updates to Chapter 20: The Linux System.
• Michael Shapiro helped with storage and I/O technology details.
• Richard West provided insight on areas of virtualization research.
• Clay Breshears helped with coverage of Intel thread-building blocks.
• Gerry Howser gave feedback on motivating the study of operating systems
and also tried out new material in his class.
• Judi Paige helped with generating figures and presentation of slides.
• Jay Gagne and Audra Rissmeyer prepared new artwork for this edition.
• Owen Galvin provided technical editing for Chapter 11 and Chapter 12.
• Mark Wogahn has made sure that the software to produce this book (LaTeX
and fonts) works properly.
• Ranjan Kumar Meher rewrote some of the LaTeX software used in the
production of this new text.

Previous Editions
• First three editions. This book is derived from the previous editions, the
first three of which were coauthored by James Peterson.
• General contributions. Others who helped us with previous editions
include Hamid Arabnia, Rida Bazzi, Randy Bentson, David Black, Joseph
Boykin, Jeff Brumfield, Gael Buckley, Roy Campbell, P. C. Capon, John
Carpenter, Gil Carrick, Thomas Casavant, Bart Childs, Ajoy Kumar Datta,
Joe Deck, Sudarshan K. Dhall, Thomas Doeppner, Caleb Drake, M. Rasit
Eskicioğlu, Hans Flack, Robert Fowler, G. Scott Graham, Richard Guy,
Max Hailperin, Rebecca Hartman, Wayne Hathaway, Christopher Haynes,
Don Heller, Bruce Hillyer, Mark Holliday, Dean Hougen, Michael Huang,
Ahmed Kamel, Morty Kewstel, Richard Kieburtz, Carol Kroll, Morty
Kwestel, Thomas LeBlanc, John Leggett, Jerrold Leichter, Ted Leung, Gary
Lippman, Carolyn Miller, Michael Molloy, Euripides Montagne, Yoichi
Muraoka, Jim M. Ng, Banu Özden, Ed Posnak, Boris Putanec, Charles
Qualline, John Quarterman, Mike Reiter, Gustavo Rodriguez-Rivera,
Carolyn J. C. Schauble, Thomas P. Skinner, Yannis Smaragdakis, Jesse
St. Laurent, John Stankovic, Adam Stauffer, Steven Stepanek, John
Sterling, Hal Stern, Louis Stevens, Pete Thomas, David Umbaugh, Steve
Vinoski, Tommy Wagner, Larry L. Wear, John Werth, James M. Westall, J.
S. Weston, and Yang Xiang
• Specific Contributions
◦ Robert Love updated both Chapter 20 and the Linux coverage through-
out the text, as well as answering many of our Android-related ques-
tions.
◦ Appendix B was written by Dave Probert and was derived from Chap-
ter 22 of the Eighth Edition of Operating System Concepts.
◦ Jonathan Katz contributed to Chapter 16. Richard West provided input
into Chapter 18. Salahuddin Khan updated Section 16.7 to provide new
coverage of Windows 7 security.
◦ Parts of Chapter 19 were derived from a paper by Levy and Silberschatz
[1990].
◦ Chapter 20 was derived from an unpublished manuscript by Stephen
Tweedie.
◦ Cliff Martin helped with updating the UNIX appendix to cover FreeBSD.
◦ Some of the exercises and accompanying solutions were supplied by
Arvind Krishnamurthy.
◦ Andrew DeNicola prepared the student study guide that is available on
our website. Some of the slides were prepared by Marilyn Turnamian.
◦ Mike Shapiro, Bryan Cantrill, and Jim Mauro answered several Solaris-
related questions, and Bryan Cantrill from Sun Microsystems helped
with the ZFS coverage. Josh Dees and Rob Reynolds contributed cover-
age of Microsoft’s .NET.
◦ Owen Galvin helped copy-edit Chapter 18.

Book Production
The Executive Editor was Don Fowley. The Senior Production Editor was Ken
Santor. The Freelance Developmental Editor was Chris Nelson. The Assistant
Developmental Editor was Ryann Dannelly. The cover designer was Tom Nery.
The copyeditor was Beverly Peavler. The freelance proofreader was Katrina
Avery. The freelance indexer was WordCo, Inc. The Aptara LaTeX team
consisted of Neeraj Saxena and Lav kush.

Personal Notes
Avi would like to acknowledge Valerie for her love, patience, and support
during the revision of this book.

Peter would like to thank his wife Carla and his children, Gwen, Owen,
and Maddie.
Greg would like to acknowledge the continued support of his family: his
wife Pat and sons Thomas and Jay.

Abraham Silberschatz, New Haven, CT


Peter Baer Galvin, Boston, MA
Greg Gagne, Salt Lake City, UT
Contents
PART ONE OVERVIEW
Chapter 1 Introduction
1.1 What Operating Systems Do 4
1.2 Computer-System Organization 7
1.3 Computer-System Architecture 15
1.4 Operating-System Operations 21
1.5 Resource Management 27
1.6 Security and Protection 33
1.7 Virtualization 34
1.8 Distributed Systems 35
1.9 Kernel Data Structures 36
1.10 Computing Environments 40
1.11 Free and Open-Source Operating Systems 46
Practice Exercises 53
Further Reading 54

Chapter 2 Operating-System Structures


2.1 Operating-System Services 55
2.2 User and Operating-System Interface 58
2.3 System Calls 62
2.4 System Services 74
2.5 Linkers and Loaders 75
2.6 Why Applications Are Operating-System Specific 77
2.7 Operating-System Design and Implementation 79
2.8 Operating-System Structure 81
2.9 Building and Booting an Operating System 92
2.10 Operating-System Debugging 95
2.11 Summary 100
Practice Exercises 101
Further Reading 101

PART TWO PROCESS MANAGEMENT


Chapter 3 Processes
3.1 Process Concept 106
3.2 Process Scheduling 110
3.3 Operations on Processes 116
3.4 Interprocess Communication 123
3.5 IPC in Shared-Memory Systems 125
3.6 IPC in Message-Passing Systems 127
3.7 Examples of IPC Systems 132
3.8 Communication in Client–Server Systems 145
3.9 Summary 153
Practice Exercises 154
Further Reading 156


Chapter 4 Threads & Concurrency


4.1 Overview 160
4.2 Multicore Programming 162
4.3 Multithreading Models 166
4.4 Thread Libraries 168
4.5 Implicit Threading 176
4.6 Threading Issues 188
4.7 Operating-System Examples 194
4.8 Summary 196
Practice Exercises 197
Further Reading 198

Chapter 5 CPU Scheduling


5.1 Basic Concepts 200
5.2 Scheduling Criteria 204
5.3 Scheduling Algorithms 205
5.4 Thread Scheduling 217
5.5 Multi-Processor Scheduling 220
5.6 Real-Time CPU Scheduling 227
5.7 Operating-System Examples 234
5.8 Algorithm Evaluation 244
5.9 Summary 250
Practice Exercises 251
Further Reading 254

PART THREE PROCESS SYNCHRONIZATION

Chapter 6 Synchronization Tools


6.1 Background 257
6.2 The Critical-Section Problem 260
6.3 Peterson’s Solution 262
6.4 Hardware Support for Synchronization 265
6.5 Mutex Locks 270
6.6 Semaphores 272
6.7 Monitors 276
6.8 Liveness 283
6.9 Evaluation 284
6.10 Summary 286
Practice Exercises 287
Further Reading 288

Chapter 7 Synchronization Examples


7.1 Classic Problems of Synchronization 289
7.2 Synchronization within the Kernel 295
7.3 POSIX Synchronization 299
7.4 Synchronization in Java 303
7.5 Alternative Approaches 311
7.6 Summary 314
Practice Exercises 314
Further Reading 315

Chapter 8 Deadlocks
8.1 System Model 318
8.2 Deadlock in Multithreaded Applications 319
8.3 Deadlock Characterization 321
8.4 Methods for Handling Deadlocks 326
8.5 Deadlock Prevention 327
8.6 Deadlock Avoidance 330
8.7 Deadlock Detection 337
8.8 Recovery from Deadlock 341
8.9 Summary 343
Practice Exercises 344
Further Reading 346

PART FOUR MEMORY MANAGEMENT

Chapter 9 Main Memory


9.1 Background 349
9.2 Contiguous Memory Allocation 356
9.3 Paging 360
9.4 Structure of the Page Table 371
9.5 Swapping 376
9.6 Example: Intel 32- and 64-bit Architectures 379
9.7 Example: ARMv8 Architecture 383
9.8 Summary 384
Practice Exercises 385
Further Reading 387

Chapter 10 Virtual Memory


10.1 Background 389
10.2 Demand Paging 392
10.3 Copy-on-Write 399
10.4 Page Replacement 401
10.5 Allocation of Frames 413
10.6 Thrashing 419
10.7 Memory Compression 425
10.8 Allocating Kernel Memory 426
10.9 Other Considerations 430
10.10 Operating-System Examples 436
10.11 Summary 440
Practice Exercises 441
Further Reading 444

PART FIVE STORAGE MANAGEMENT

Chapter 11 Mass-Storage Structure


11.1 Overview of Mass-Storage Structure 449
11.2 HDD Scheduling 457
11.3 NVM Scheduling 461
11.4 Error Detection and Correction 462
11.5 Storage Device Management 463
11.6 Swap-Space Management 467
11.7 Storage Attachment 469
11.8 RAID Structure 473
11.9 Summary 485
Practice Exercises 486
Further Reading 487

Chapter 12 I/O Systems


12.1 Overview 489
12.2 I/O Hardware 490
12.3 Application I/O Interface 500
12.4 Kernel I/O Subsystem 508
12.5 Transforming I/O Requests to Hardware Operations 516
12.6 STREAMS 519
12.7 Performance 521
12.8 Summary 524
Practice Exercises 525
Further Reading 526

PART SIX FILE SYSTEM


Chapter 13 File-System Interface
13.1 File Concept 529
13.2 Access Methods 539
13.3 Directory Structure 541
13.4 Protection 550
13.5 Memory-Mapped Files 555
13.6 Summary 560
Practice Exercises 560
Further Reading 561

Chapter 14 File-System Implementation


14.1 File-System Structure 564
14.2 File-System Operations 566
14.3 Directory Implementation 568
14.4 Allocation Methods 570
14.5 Free-Space Management 578
14.6 Efficiency and Performance 582
14.7 Recovery 586
14.8 Example: The WAFL File System 589
14.9 Summary 593
Practice Exercises 594
Further Reading 594

Chapter 15 File-System Internals


15.1 File Systems 597
15.2 File-System Mounting 598
15.3 Partitions and Mounting 601
15.4 File Sharing 602
15.5 Virtual File Systems 603
15.6 Remote File Systems 605
15.7 Consistency Semantics 608
15.8 NFS 610
15.9 Summary 615
Practice Exercises 616
Further Reading 617

PART SEVEN SECURITY AND PROTECTION


Chapter 16 Security
16.1 The Security Problem 621
16.2 Program Threats 625
16.3 System and Network Threats 634
16.4 Cryptography as a Security Tool 637
16.5 User Authentication 648
16.6 Implementing Security Defenses 653
16.7 An Example: Windows 10 662
16.8 Summary 664
Further Reading 665

Chapter 17 Protection
17.1 Goals of Protection 667
17.2 Principles of Protection 668
17.3 Protection Rings 669
17.4 Domain of Protection 671
17.5 Access Matrix 675
17.6 Implementation of the Access Matrix 679
17.7 Revocation of Access Rights 682
17.8 Role-Based Access Control 683
17.9 Mandatory Access Control (MAC) 684
17.10 Capability-Based Systems 685
17.11 Other Protection Improvement Methods 687
17.12 Language-Based Protection 690
17.13 Summary 696
Further Reading 697

PART EIGHT ADVANCED TOPICS

Chapter 18 Virtual Machines


18.1 Overview 701
18.2 History 703
18.3 Benefits and Features 704
18.4 Building Blocks 707
18.5 Types of VMs and Their Implementations 713
18.6 Virtualization and Operating-System Components 719
18.7 Examples 726
18.8 Virtualization Research 728
18.9 Summary 729
Further Reading 730

Chapter 19 Networks and Distributed Systems


19.1 Advantages of Distributed Systems 733
19.2 Network Structure 735
19.3 Communication Structure 738
19.4 Network and Distributed Operating Systems 749
19.5 Design Issues in Distributed Systems 753
19.6 Distributed File Systems 757
19.7 DFS Naming and Transparency 761
19.8 Remote File Access 764
19.9 Final Thoughts on Distributed File Systems 767
19.10 Summary 768
Practice Exercises 769
Further Reading 770

PART NINE CASE STUDIES

Chapter 20 The Linux System


20.1 Linux History 775
20.2 Design Principles 780
20.3 Kernel Modules 783
20.4 Process Management 786
20.5 Scheduling 790
20.6 Memory Management 795
20.7 File Systems 803
20.8 Input and Output 810
20.9 Interprocess Communication 812
20.10 Network Structure 813
20.11 Security 816
20.12 Summary 818
Practice Exercises 819
Further Reading 819

Chapter 21 Windows 10
21.1 History 821
21.2 Design Principles 826
21.3 System Components 838
21.4 Terminal Services and Fast User Switching 874
21.5 File System 875
21.6 Networking 880
21.7 Programmer Interface 884
21.8 Summary 895
Practice Exercises 896
Further Reading 897

PART TEN APPENDICES


Chapter A Influential Operating Systems
A.1 Feature Migration 1
A.2 Early Systems 2
A.3 Atlas 9
A.4 XDS-940 10
A.5 THE 11
A.6 RC 4000 11
A.7 CTSS 12
A.8 MULTICS 13
A.9 IBM OS/360 13
A.10 TOPS-20 15
A.11 CP/M and MS/DOS 15
A.12 Macintosh Operating System and Windows 16
A.13 Mach 16
A.14 Capability-based Systems—Hydra and CAP 18
A.15 Other Systems 20
Further Reading 21

Chapter B Windows 7
B.1 History 1
B.2 Design Principles 3
B.3 System Components 10
B.4 Terminal Services and Fast User Switching 34
B.5 File System 35
B.6 Networking 41
B.7 Programmer Interface 46
B.8 Summary 55
Practice Exercises 55
Further Reading 56

Chapter C BSD UNIX


C.1 UNIX History 1
C.2 Design Principles 6
C.3 Programmer Interface 8
C.4 User Interface 15
C.5 Process Management 18
C.6 Memory Management 22
C.7 File System 25
C.8 I/O System 33
C.9 Interprocess Communication 36
C.10 Summary 41
Further Reading 42

Chapter D The Mach System


D.1 History of the Mach System 1
D.2 Design Principles 3
D.3 System Components 4
D.4 Process Management 7
D.5 Interprocess Communication 13
D.6 Memory Management 18
D.7 Programmer Interface 23
D.8 Summary 24
Further Reading 25

Credits 963 
Index 965
Part One

Overview
An operating system acts as an intermediary between the user of a com-
puter and the computer hardware. The purpose of an operating system
is to provide an environment in which a user can execute programs in a
convenient and efficient manner.
An operating system is software that manages the computer hard-
ware. The hardware must provide appropriate mechanisms to ensure the
correct operation of the computer system and to prevent programs from
interfering with the proper operation of the system.
Internally, operating systems vary greatly in their makeup, since they
are organized along many different lines. The design of a new operating
system is a major task, and it is important that the goals of the system be
well defined before the design begins.
Because an operating system is large and complex, it must be cre-
ated piece by piece. Each of these pieces should be a well-delineated
portion of the system, with carefully defined inputs, outputs, and func-
tions.
CHAPTER 1: Introduction

An operating system is software that manages a computer’s hardware. It
also provides a basis for application programs and acts as an intermediary
between the computer user and the computer hardware. An amazing aspect
of operating systems is how they vary in accomplishing these tasks in a wide
variety of computing environments. Operating systems are everywhere, from
cars and home appliances that include “Internet of Things” devices, to smart
phones, personal computers, enterprise computers, and cloud computing envi-
ronments.
In order to explore the role of an operating system in a modern computing
environment, it is important first to understand the organization and architec-
ture of computer hardware. This includes the CPU, memory, and I/O devices,
as well as storage. A fundamental responsibility of an operating system is to
allocate these resources to programs.
Because an operating system is large and complex, it must be created
piece by piece. Each of these pieces should be a well-delineated portion of the
system, with carefully defined inputs, outputs, and functions. In this chapter,
we provide a general overview of the major components of a contemporary
computer system as well as the functions provided by the operating system.
Additionally, we cover several topics to help set the stage for the remainder of
the text: data structures used in operating systems, computing environments,
and open-source and free operating systems.

CHAPTER OBJECTIVES

• Describe the general organization of a computer system and the role of
interrupts.
• Describe the components in a modern multiprocessor computer system.
• Illustrate the transition from user mode to kernel mode.
• Discuss how operating systems are used in various computing environ-
ments.
• Provide examples of free and open-source operating systems.


1.1 What Operating Systems Do

We begin our discussion by looking at the operating system’s role in the
overall computer system. A computer system can be divided roughly into four
components: the hardware, the operating system, the application programs,
and a user (Figure 1.1).
The hardware—the central processing unit (CPU), the memory, and the
input/output (I/O) devices—provides the basic computing resources for the
system. The application programs—such as word processors, spreadsheets,
compilers, and web browsers—define the ways in which these resources are
used to solve users’ computing problems. The operating system controls the
hardware and coordinates its use among the various application programs for
the various users.
We can also view a computer system as consisting of hardware, software,
and data. The operating system provides the means for proper use of these
resources in the operation of the computer system. An operating system is
similar to a government. Like a government, it performs no useful function
by itself. It simply provides an environment within which other programs can
do useful work.
To understand more fully the operating system’s role, we next explore
operating systems from two viewpoints: that of the user and that of the system.

1.1.1 User View


The user’s view of the computer varies according to the interface being used.
Many computer users sit with a laptop or in front of a PC consisting of a
monitor, keyboard, and mouse. Such a system is designed for one user to
monopolize its resources. The goal is to maximize the work (or play) that the
user is performing. In this case, the operating system is designed mostly for
ease of use, with some attention paid to performance and security and none
paid to resource utilization —how various hardware and software resources
are shared.

[Figure: layers from top to bottom: user; application programs (compilers, web browsers, development kits, etc.); operating system; computer hardware (CPU, memory, I/O devices, etc.).]
Figure 1.1 Abstract view of the components of a computer system.



Increasingly, many users interact with mobile devices such as smartphones
and tablets—devices that are replacing desktop and laptop computer systems
for some users. These devices are typically connected to networks through
cellular or other wireless technologies. The user interface for mobile computers
generally features a touch screen, where the user interacts with the system by
pressing and swiping fingers across the screen rather than using a physical
keyboard and mouse. Many mobile devices also allow users to interact through
a voice recognition interface, such as Apple’s Siri.
Some computers have little or no user view. For example, embedded com-
puters in home devices and automobiles may have numeric keypads and may
turn indicator lights on or off to show status, but they and their operating sys-
tems and applications are designed primarily to run without user intervention.

1.1.2 System View


From the computer’s point of view, the operating system is the program most
intimately involved with the hardware. In this context, we can view an oper-
ating system as a resource allocator. A computer system has many resources
that may be required to solve a problem: CPU time, memory space, storage
space, I/O devices, and so on. The operating system acts as the manager of these
resources. Facing numerous and possibly conflicting requests for resources, the
operating system must decide how to allocate them to specific programs and
users so that it can operate the computer system efficiently and fairly.
A slightly different view of an operating system emphasizes the need to
control the various I/O devices and user programs. An operating system is a
control program. A control program manages the execution of user programs
to prevent errors and improper use of the computer. It is especially concerned
with the operation and control of I/O devices.

1.1.3 Defining Operating Systems


By now, you can probably see that the term operating system covers many
roles and functions. That is the case, at least in part, because of the myriad
designs and uses of computers. Computers are present within toasters, cars,
ships, spacecraft, homes, and businesses. They are the basis for game machines,
cable TV tuners, and industrial control systems.
To explain this diversity, we can turn to the history of computers. Although
computers have a relatively short history, they have evolved rapidly. Comput-
ing started as an experiment to determine what could be done and quickly
moved to fixed-purpose systems for military uses, such as code breaking and
trajectory plotting, and governmental uses, such as census calculation. Those
early computers evolved into general-purpose, multifunction mainframes, and
that’s when operating systems were born. In the 1960s, Moore’s Law predicted
that the number of transistors on an integrated circuit would double every 18
months, and that prediction has held true. Computers gained in functionality
and shrank in size, leading to a vast number of uses and a vast number and
variety of operating systems. (See Appendix A for more details on the history
of operating systems.)
How, then, can we define what an operating system is? In general, we have
no completely adequate definition of an operating system. Operating systems
exist because they offer a reasonable way to solve the problem of creating
a usable computing system. The fundamental goal of computer systems is
to execute programs and to make solving user problems easier. Computer
hardware is constructed toward this goal. Since bare hardware alone is not
particularly easy to use, application programs are developed. These programs
require certain common operations, such as those controlling the I/O devices.
The common functions of controlling and allocating resources are then brought
together into one piece of software: the operating system.
In addition, we have no universally accepted definition of what is part of
the operating system. A simple viewpoint is that it includes everything a ven-
dor ships when you order “the operating system.” The features included, how-
ever, vary greatly across systems. Some systems take up less than a megabyte
of space and lack even a full-screen editor, whereas others require gigabytes
of space and are based entirely on graphical windowing systems. A more com-
mon definition, and the one that we usually follow, is that the operating system
is the one program running at all times on the computer—usually called the
kernel. Along with the kernel, there are two other types of programs: system
programs, which are associated with the operating system but are not neces-
sarily part of the kernel, and application programs, which include all programs
not associated with the operation of the system.
The matter of what constitutes an operating system became increasingly
important as personal computers became more widespread and operating sys-
tems grew increasingly sophisticated. In 1998, the United States Department of
Justice filed suit against Microsoft, in essence claiming that Microsoft included
too much functionality in its operating systems and thus prevented application
vendors from competing. (For example, a web browser was an integral part of
Microsoft’s operating systems.) As a result, Microsoft was found guilty of using
its operating-system monopoly to limit competition.
Today, however, if we look at operating systems for mobile devices, we
see that once again the number of features constituting the operating system
is increasing. Mobile operating systems often include not only a core kernel
but also middleware—a set of software frameworks that provide additional
services to application developers. For example, each of the two most promi-
nent mobile operating systems—Apple’s iOS and Google’s Android—features

WHY STUDY OPERATING SYSTEMS?

Although there are many practitioners of computer science, only a small per-
centage of them will be involved in the creation or modification of an operat-
ing system. Why, then, study operating systems and how they work? Simply
because, as almost all code runs on top of an operating system, knowledge
of how operating systems work is crucial to proper, efficient, effective, and
secure programming. Understanding the fundamentals of operating systems,
how they drive computer hardware, and what they provide to applications is
not only essential to those who program them but also highly useful to those
who write programs on them and use them.

a core kernel along with middleware that supports databases, multimedia, and
graphics (to name only a few).
In summary, for our purposes, the operating system includes the always-
running kernel, middleware frameworks that ease application development
and provide features, and system programs that aid in managing the system
while it is running. Most of this text is concerned with the kernel of general-
purpose operating systems, but other components are discussed as needed to
fully explain operating system design and operation.

1.2 Computer-System Organization

A modern general-purpose computer system consists of one or more CPUs and
a number of device controllers connected through a common bus that provides
access between components and shared memory (Figure 1.2). Each device
controller is in charge of a specific type of device (for example, a disk drive,
audio device, or graphics display). Depending on the controller, more than one
device may be attached. For instance, one system USB port can connect to a
USB hub, to which several devices can connect. A device controller maintains
some local buffer storage and a set of special-purpose registers. The device
controller is responsible for moving the data between the peripheral devices
that it controls and its local buffer storage.
Typically, operating systems have a device driver for each device con-
troller. This device driver understands the device controller and provides the
rest of the operating system with a uniform interface to the device. The CPU and
the device controllers can execute in parallel, competing for memory cycles. To
ensure orderly access to the shared memory, a memory controller synchronizes
access to the memory.
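The uniform-interface idea can be sketched in C as a table of function pointers that each driver fills in. This is only an illustrative sketch, not an API from this text: the struct and function names are invented here, loosely echoing how real kernels organize their drivers.

```c
/* Sketch of a uniform device interface: every driver supplies the same
   table of operations, so the rest of the OS can call any device the
   same way. All names here are hypothetical. */
#include <stddef.h>

struct device_ops {
    int (*read)(void *buf, size_t len);        /* returns bytes read, or -1 */
    int (*write)(const void *buf, size_t len); /* returns bytes written, or -1 */
};

/* A toy "keyboard" driver that always delivers the letter 'a'. */
static int kbd_read(void *buf, size_t len) {
    if (len == 0)
        return 0;
    ((char *)buf)[0] = 'a';
    return 1;                      /* one byte transferred */
}

static int kbd_write(const void *buf, size_t len) {
    (void)buf; (void)len;
    return -1;                     /* keyboards are input-only */
}

static const struct device_ops kbd_ops = { kbd_read, kbd_write };

/* The kernel's I/O layer calls through the table without knowing
   which device controller is behind it. */
static int device_read(const struct device_ops *dev, void *buf, size_t len) {
    return dev->read(buf, len);
}
```

With this arrangement, the same `device_read()` call works whether a keyboard or a disk driver filled in the table; only the operations table differs per device.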
In the following subsections, we describe some basics of how such a system
operates, focusing on three key aspects of the system. We start with interrupts,
which alert the CPU to events that require attention. We then discuss storage
structure and I/O structure.

[Figure: CPU, disk controller, USB controller, and graphics adapter attached to a common system bus along with memory; peripherals include disks, mouse, keyboard, printer, and monitor.]
Figure 1.2 A typical PC computer system.



1.2.1 Interrupts
Consider a typical computer operation: a program performing I/O. To start an
I/O operation, the device driver loads the appropriate registers in the device
controller. The device controller, in turn, examines the contents of these reg-
isters to determine what action to take (such as “read a character from the
keyboard”). The controller starts the transfer of data from the device to its local
buffer. Once the transfer of data is complete, the device controller informs the
device driver that it has finished its operation. The device driver then gives
control to other parts of the operating system, possibly returning the data or a
pointer to the data if the operation was a read. For other operations, the device
driver returns status information such as “write completed successfully” or
“device busy”. But how does the controller inform the device driver that it has
finished its operation? This is accomplished via an interrupt.
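The request/completion handshake just described can be sketched in C. The register layout, command codes, and names below are invented for illustration; a real controller defines its own registers, and the driver would access them through memory-mapped I/O or port instructions.

```c
/* Hedged sketch of the driver side of one I/O request: the driver loads
   the controller's registers to start a transfer, and the completion
   interrupt later hands back status and data. Layout is hypothetical. */
#include <stdint.h>

struct dev_regs {                 /* imagined controller registers */
    uint32_t command;             /* what to do (e.g., CMD_READ) */
    uint32_t status;              /* controller sets this when done */
    uint32_t data;                /* one-word local buffer */
};

enum { CMD_READ = 1, STATUS_DONE = 1 };

/* Start an I/O: load the appropriate registers in the controller.
   The controller then works independently of the CPU. */
void start_read(volatile struct dev_regs *regs) {
    regs->command = CMD_READ;
}

/* Called from the interrupt handler once the controller signals
   completion: collect the result or report "device busy". */
int on_completion(volatile struct dev_regs *regs, uint32_t *out) {
    if (regs->status != STATUS_DONE)
        return -1;                /* not finished: status unavailable */
    *out = regs->data;
    return 0;
}
```

The key design point mirrors the text: between `start_read()` and `on_completion()` the CPU is free to run other code, because the controller, not the CPU, moves the data into its local buffer.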

1.2.1.1 Overview
Hardware may trigger an interrupt at any time by sending a signal to the
CPU, usually by way of the system bus. (There may be many buses within
a computer system, but the system bus is the main communications path
between the major components.) Interrupts are used for many other purposes
as well and are a key part of how operating systems and hardware interact.
When the CPU is interrupted, it stops what it is doing and immediately
transfers execution to a fixed location. The fixed location usually contains
the starting address where the service routine for the interrupt is located.
The interrupt service routine executes; on completion, the CPU resumes the
interrupted computation. A timeline of this operation is shown in Figure 1.3.
Interrupts are an important part of a computer architecture. Each computer
design has its own interrupt mechanism, but several functions are common.
The interrupt must transfer control to the appropriate interrupt service routine.
The straightforward method for managing this transfer would be to invoke
a generic routine to examine the interrupt information. The routine, in turn,

Figure 1.3 Interrupt timeline for a single program doing output.


would call the interrupt-specific handler. However, interrupts must be handled
quickly, as they occur very frequently. A table of pointers to interrupt routines
can be used instead to provide the necessary speed. The interrupt routine
is called indirectly through the table, with no intermediate routine needed.
Generally, the table of pointers is stored in low memory (the first hundred or so
locations). These locations hold the addresses of the interrupt service routines
for the various devices. This array, or interrupt vector, of addresses is then
indexed by a unique number, given with the interrupt request, to provide the
address of the interrupt service routine for the interrupting device. Operating
systems as different as Windows and UNIX dispatch interrupts in this manner.
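The table of pointers can be sketched in C as an array of handler function pointers indexed by the interrupt number. This is only an illustrative user-level model (the names, the `register_isr`/`dispatch` helpers, and the sample keyboard handler are invented for the sketch); a real vector lives in low memory and its entries are reached by hardware, not by an ordinary function call:

```c
#include <stddef.h>

#define VECTOR_SIZE 256            /* illustrative: the 0-255 range of Figure 1.5 */

typedef void (*isr_t)(void);       /* an interrupt service routine */

isr_t interrupt_vector[VECTOR_SIZE];  /* the table of pointers, indexed by number */

int keyboard_count = 0;            /* demo state touched by a sample handler */

void keyboard_isr(void) {          /* a sample device-specific service routine */
    keyboard_count++;
}

/* Install a handler for a given interrupt number; returns 0 on success. */
int register_isr(unsigned num, isr_t handler) {
    if (num >= VECTOR_SIZE || handler == NULL)
        return -1;
    interrupt_vector[num] = handler;
    return 0;
}

/* Dispatch indirectly through the table, with no intermediate routine that
 * examines all possible sources. Returns -1 if no handler is installed. */
int dispatch(unsigned num) {
    if (num >= VECTOR_SIZE || interrupt_vector[num] == NULL)
        return -1;
    interrupt_vector[num]();       /* indirect call through the vector */
    return 0;
}
```

The key property is the indexed indirect call: the cost of dispatch does not grow with the number of devices.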
The interrupt architecture must also save the state information of whatever
was interrupted, so that it can restore this information after servicing the
interrupt. If the interrupt routine needs to modify the processor state —for
instance, by modifying register values—it must explicitly save the current state
and then restore that state before returning. After the interrupt is serviced, the
saved return address is loaded into the program counter, and the interrupted
computation resumes as though the interrupt had not occurred.

1.2.1.2 Implementation
The basic interrupt mechanism works as follows. The CPU hardware has a
wire called the interrupt-request line that the CPU senses after executing every
instruction. When the CPU detects that a controller has asserted a signal on
the interrupt-request line, it reads the interrupt number and jumps to the
interrupt-handler routine by using that interrupt number as an index into
the interrupt vector. It then starts execution at the address associated with
that index. The interrupt handler saves any state it will be changing during
its operation, determines the cause of the interrupt, performs the necessary
processing, performs a state restore, and executes a return from interrupt
instruction to return the CPU to the execution state prior to the interrupt. We
say that the device controller raises an interrupt by asserting a signal on the
interrupt request line, the CPU catches the interrupt and dispatches it to the
interrupt handler, and the handler clears the interrupt by servicing the device.
Figure 1.4 summarizes the interrupt-driven I/O cycle.
The basic interrupt mechanism just described enables the CPU to respond to
an asynchronous event, as when a device controller becomes ready for service.
In a modern operating system, however, we need more sophisticated interrupt-
handling features.

1. We need the ability to defer interrupt handling during critical processing.


2. We need an efficient way to dispatch to the proper interrupt handler for
a device.
3. We need multilevel interrupts, so that the operating system can distin-
guish between high- and low-priority interrupts and can respond with
the appropriate degree of urgency.

In modern computer hardware, these three features are provided by the CPU
and the interrupt-controller hardware.

[Figure: numbered steps between CPU and I/O controller — (1) the device driver
initiates I/O; (2) the controller initiates the transfer; (3) the CPU, while
executing, checks for interrupts between instructions; (4) input ready, output
complete, or an error generates an interrupt signal; (5) the CPU, receiving the
interrupt, transfers control to the interrupt handler; (6) the interrupt handler
processes the data and returns from the interrupt; (7) the CPU resumes
processing of the interrupted task.]

Figure 1.4 Interrupt-driven I/O cycle.

Most CPUs have two interrupt request lines. One is the nonmaskable
interrupt, which is reserved for events such as unrecoverable memory errors.
The second interrupt line is maskable: it can be turned off by the CPU before
the execution of critical instruction sequences that must not be interrupted. The
maskable interrupt is used by device controllers to request service.
Recall that the purpose of a vectored interrupt mechanism is to reduce the
need for a single interrupt handler to search all possible sources of interrupts
to determine which one needs service. In practice, however, computers have
more devices (and, hence, interrupt handlers) than they have address elements
in the interrupt vector. A common way to solve this problem is to use interrupt
chaining, in which each element in the interrupt vector points to the head of
a list of interrupt handlers. When an interrupt is raised, the handlers on the
corresponding list are called one by one, until one is found that can service
the request. This structure is a compromise between the overhead of a huge
interrupt table and the inefficiency of dispatching to a single interrupt handler.
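Interrupt chaining can be sketched in the same style: each vector entry heads a list of handlers, and dispatch walks the list until some handler claims the interrupt. The two "devices" below, with their pending flags, are simulated stand-ins invented for this sketch, not real hardware:

```c
#include <stddef.h>

#define NVEC 16                      /* illustrative vector size */

typedef int (*chained_isr_t)(void);  /* returns 1 if it serviced the interrupt */

struct isr_node {
    chained_isr_t handler;
    struct isr_node *next;
};

struct isr_node *chain[NVEC];        /* each vector entry heads a handler list */

/* Two illustrative devices sharing one vector number: each checks its own
 * (simulated) pending flag and claims the interrupt only if it is theirs. */
int disk_pending = 0, net_pending = 0;
int disk_serviced = 0, net_serviced = 0;

int disk_isr(void) {
    if (!disk_pending) return 0;     /* not our interrupt: pass it along */
    disk_pending = 0; disk_serviced++; return 1;
}
int net_isr(void) {
    if (!net_pending) return 0;
    net_pending = 0; net_serviced++; return 1;
}

struct isr_node disk_node = { disk_isr, NULL };
struct isr_node net_node  = { net_isr,  NULL };

/* Prepend a handler to the chain for vector `num` (no allocation needed). */
void chain_isr(unsigned num, struct isr_node *node) {
    node->next = chain[num];
    chain[num] = node;
}

/* Call the handlers on the list one by one until one services the request.
 * Returns 1 if some handler claimed the interrupt, 0 otherwise. */
int dispatch_chained(unsigned num) {
    for (struct isr_node *n = chain[num]; n != NULL; n = n->next)
        if (n->handler())
            return 1;
    return 0;
}
```

This captures the compromise in the text: the table stays small, and only the handlers sharing one vector entry are searched.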
Figure 1.5 illustrates the design of the interrupt vector for Intel processors.
The events from 0 to 31, which are nonmaskable, are used to signal various
error conditions. The events from 32 to 255, which are maskable, are used for
purposes such as device-generated interrupts.
vector number description

0 divide error
1 debug exception
2 null interrupt
3 breakpoint
4 INTO-detected overflow
5 bound range exception
6 invalid opcode
7 device not available
8 double fault
9 coprocessor segment overrun (reserved)
10 invalid task state segment
11 segment not present
12 stack fault
13 general protection
14 page fault
15 (Intel reserved, do not use)
16 floating-point error
17 alignment check
18 machine check
19–31 (Intel reserved, do not use)
32–255 maskable interrupts

Figure 1.5 Intel processor event-vector table.

The interrupt mechanism also implements a system of interrupt priority
levels. These levels enable the CPU to defer the handling of low-priority
interrupts without masking all interrupts and make it possible for a
high-priority interrupt to preempt the execution of a low-priority interrupt.
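The deferral behavior can be shown with a toy model (the level count and helper names are invented for illustration): interrupts above the CPU's current priority level are serviced at once, while lower-priority ones stay pending until the level drops:

```c
#define NLEVELS 8                  /* illustrative number of priority levels */

int cpu_level = 0;                 /* interrupts at or below this level are deferred */
int pending[NLEVELS];              /* deferred interrupts, counted per level */
int serviced[NLEVELS];             /* serviced interrupts, counted per level */

/* An interrupt is handled immediately only if its priority is higher than
 * the CPU's current level; otherwise it stays pending (deferred, not lost). */
void raise_irq(int level) {
    if (level > cpu_level)
        serviced[level]++;         /* preempts lower-priority work */
    else
        pending[level]++;          /* deferred until the level drops */
}

/* Lowering the CPU level delivers deferred interrupts that are now
 * above the new level, highest priority first. */
void set_cpu_level(int level) {
    cpu_level = level;
    for (int l = NLEVELS - 1; l > cpu_level; l--) {
        serviced[l] += pending[l];
        pending[l] = 0;
    }
}
```

Note that masking by level is selective: a high-priority interrupt still gets through while lower ones wait.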
In summary, interrupts are used throughout modern operating systems to
handle asynchronous events (and for other purposes we will discuss through-
out the text). Device controllers and hardware faults raise interrupts. To enable
the most urgent work to be done first, modern computers use a system of
interrupt priorities. Because interrupts are used so heavily for time-sensitive
processing, efficient interrupt handling is required for good system perfor-
mance.

1.2.2 Storage Structure


The CPU can load instructions only from memory, so any programs must
first be loaded into memory to run. General-purpose computers run most
of their programs from rewritable memory, called main memory (also called
random-access memory, or RAM). Main memory commonly is implemented in
a semiconductor technology called dynamic random-access memory (DRAM).
Computers use other forms of memory as well. For example, the first pro-
gram to run on computer power-on is a bootstrap program, which then loads
the operating system. Since RAM is volatile—loses its content when power
is turned off or otherwise lost—we cannot trust it to hold the bootstrap pro-
gram. Instead, for this and some other purposes, the computer uses electri-
cally erasable programmable read-only memory (EEPROM) and other forms of
firmware—storage that is infrequently written to and is nonvolatile. EEPROM

STORAGE DEFINITIONS AND NOTATION

The basic unit of computer storage is the bit. A bit can contain one of two
values, 0 and 1. All other storage in a computer is based on collections of bits.
Given enough bits, it is amazing how many things a computer can represent:
numbers, letters, images, movies, sounds, documents, and programs, to name
a few. A byte is 8 bits, and on most computers it is the smallest convenient
chunk of storage. For example, most computers don’t have an instruction to
move a bit but do have one to move a byte. A less common term is word,
which is a given computer architecture’s native unit of data. A word is made
up of one or more bytes. For example, a computer that has 64-bit registers and
64-bit memory addressing typically has 64-bit (8-byte) words. A computer
executes many operations in its native word size rather than a byte at a time.
Computer storage, along with most computer throughput, is generally
measured and manipulated in bytes and collections of bytes. A kilobyte, or
KB, is 1,024 bytes; a megabyte, or MB, is 1,024² bytes; a gigabyte, or GB, is
1,024³ bytes; a terabyte, or TB, is 1,024⁴ bytes; and a petabyte, or PB, is 1,024⁵
bytes. Computer manufacturers often round off these numbers and say that
a megabyte is 1 million bytes and a gigabyte is 1 billion bytes. Networking
measurements are an exception to this general rule; they are given in bits
(because networks move data a bit at a time).
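These powers-of-1,024 definitions, and the gap between them and the manufacturers' rounded decimal figures, can be checked with a few shifts:

```c
#include <stdint.h>

/* Binary storage units: each step up is a factor of 1,024 (2^10). */
const uint64_t KB = UINT64_C(1) << 10;   /* 1,024 bytes   */
const uint64_t MB = UINT64_C(1) << 20;   /* 1,024^2 bytes */
const uint64_t GB = UINT64_C(1) << 30;   /* 1,024^3 bytes */
const uint64_t TB = UINT64_C(1) << 40;   /* 1,024^4 bytes */

/* How far the manufacturers' "1 GB = 1 billion bytes" rounding is
 * from the binary definition: 1,073,741,824 - 1,000,000,000 bytes. */
uint64_t gb_rounding_error(void) {
    return GB - UINT64_C(1000000000);
}
```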

can be changed but cannot be changed frequently. In addition, it is low speed,
and so it contains mostly static programs and data that aren't frequently used.
For example, the iPhone uses EEPROM to store serial numbers and hardware
information about the device.
All forms of memory provide an array of bytes. Each byte has its own
address. Interaction is achieved through a sequence of load or store instruc-
tions to specific memory addresses. The load instruction moves a byte or word
from main memory to an internal register within the CPU, whereas the store
instruction moves the content of a register to main memory. Aside from explicit
loads and stores, the CPU automatically loads instructions from main memory
for execution from the location stored in the program counter.
A typical instruction–execution cycle, as executed on a system with a von
Neumann architecture, first fetches an instruction from memory and stores
that instruction in the instruction register. The instruction is then decoded
and may cause operands to be fetched from memory and stored in some
internal register. After the instruction on the operands has been executed, the
result may be stored back in memory. Notice that the memory unit sees only
a stream of memory addresses. It does not know how they are generated (by
the instruction counter, indexing, indirection, literal addresses, or some other
means) or what they are for (instructions or data). Accordingly, we can ignore
how a memory address is generated by a program. We are interested only in
the sequence of memory addresses generated by the running program.
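The fetch-decode-execute cycle can be illustrated with a toy von Neumann machine in C, where code and data share one memory array and the memory "sees" only a stream of addresses. The instruction format and opcodes here are invented for the sketch:

```c
#include <stdint.h>

/* A toy von Neumann machine: instructions and data share one memory.
 * Illustrative instruction format: opcode in the high byte,
 * operand address in the low byte. */
enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

uint16_t mem[256];     /* single memory for both code and data */
uint16_t acc;          /* accumulator register */
uint8_t  pc;           /* program counter */

void run(void) {
    for (;;) {
        uint16_t ir = mem[pc++];      /* fetch into the instruction register */
        uint8_t op   = ir >> 8;       /* decode */
        uint8_t addr = ir & 0xFF;
        switch (op) {                 /* execute */
        case LOAD:  acc = mem[addr];      break;
        case ADD:   acc += mem[addr];     break;
        case STORE: mem[addr] = acc;      break;
        case HALT:  return;
        }
    }
}
```

From the memory's point of view, fetching `mem[pc]` and loading `mem[addr]` are indistinguishable: both are just addresses on the bus.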
Ideally, we want the programs and data to reside in main memory per-
manently. This arrangement usually is not possible on most systems for two
reasons:

1. Main memory is usually too small to store all needed programs and data
permanently.
2. Main memory, as mentioned, is volatile —it loses its contents when power
is turned off or otherwise lost.
Thus, most computer systems provide secondary storage as an extension of
main memory. The main requirement for secondary storage is that it be able to
hold large quantities of data permanently.
The most common secondary-storage devices are hard-disk drives (HDDs)
and nonvolatile memory (NVM) devices, which provide storage for both
programs and data. Most programs (system and application) are stored in
secondary storage until they are loaded into memory. Many programs then use
secondary storage as both the source and the destination of their processing.
Secondary storage is also much slower than main memory. Hence, the proper
management of secondary storage is of central importance to a computer sys-
tem, as we discuss in Chapter 11.
In a larger sense, however, the storage structure that we have described
—consisting of registers, main memory, and secondary storage—is only one
of many possible storage system designs. Other possible components include
cache memory, CD-ROM or blu-ray, magnetic tapes, and so on. Those that are
slow enough and large enough that they are used only for special purposes
—to store backup copies of material stored on other devices, for example —
are called tertiary storage. Each storage system provides the basic functions
of storing a datum and holding that datum until it is retrieved at a later time.
The main differences among the various storage systems lie in speed, size, and
volatility.
The wide variety of storage systems can be organized in a hierarchy (Figure
1.6) according to storage capacity and access time. As a general rule, there is a

[Figure: storage-device hierarchy, arranged by storage capacity and access
time — smaller and faster at the top, larger and slower at the bottom:
registers, cache, and main memory (volatile, primary storage); nonvolatile
memory and hard-disk drives (nonvolatile, secondary storage); optical disks
and magnetic tapes (tertiary storage).]

Figure 1.6 Storage-device hierarchy.



trade-off between size and speed, with smaller and faster memory closer to the
CPU. As shown in the figure, in addition to differing in speed and capacity, the
various storage systems are either volatile or nonvolatile. Volatile storage, as
mentioned earlier, loses its contents when the power to the device is removed,
so data must be written to nonvolatile storage for safekeeping.
The top four levels of memory in the figure are constructed using semi-
conductor memory, which consists of semiconductor-based electronic circuits.
NVM devices, at the fourth level, have several variants but in general are faster
than hard disks. The most common form of NVM device is flash memory, which
is popular in mobile devices such as smartphones and tablets. Increasingly,
flash memory is being used for long-term storage on laptops, desktops, and
servers as well.
Since storage plays an important role in operating-system structure, we
will refer to it frequently in the text. In general, we will use the following
terminology:

• Volatile storage will be referred to simply as memory. If we need to empha-
size a particular type of storage device (for example, a register), we will do
so explicitly.
• Nonvolatile storage retains its contents when power is lost. It will be
referred to as NVS. The vast majority of the time we spend on NVS will
be on secondary storage. This type of storage can be classified into two
distinct types:
◦ Mechanical. A few examples of such storage systems are HDDs, optical
disks, holographic storage, and magnetic tape. If we need to emphasize
a particular type of mechanical storage device (for example, magnetic
tape), we will do so explicitly.
◦ Electrical. A few examples of such storage systems are flash memory,
FRAM, NRAM, and SSD. Electrical storage will be referred to as NVM. If
we need to emphasize a particular type of electrical storage device (for
example, SSD), we will do so explicitly.
Mechanical storage is generally larger and less expensive per byte than
electrical storage. Conversely, electrical storage is typically costly, smaller,
and faster than mechanical storage.

The design of a complete storage system must balance all the factors just
discussed: it must use only as much expensive memory as necessary while
providing as much inexpensive, nonvolatile storage as possible. Caches can
be installed to improve performance where a large disparity in access time or
transfer rate exists between two components.

1.2.3 I/O Structure


A large portion of operating system code is dedicated to managing I/O, both
because of its importance to the reliability and performance of a system and
because of the varying nature of the devices.
Recall from the beginning of this section that a general-purpose computer
system consists of multiple devices, all of which exchange data via a common

[Figure: N CPUs (each with a cache, running an instruction-execution cycle as
a thread of execution moving instructions and data), memory, and M devices,
all connected by a common bus; devices issue I/O requests and interrupts, and
DMA moves data directly between devices and memory.]

Figure 1.7 How a modern computer system works.

bus. The form of interrupt-driven I/O described in Section 1.2.1 is fine for
moving small amounts of data but can produce high overhead when used for
bulk data movement such as NVS I/O. To solve this problem, direct memory
access (DMA) is used. After setting up buffers, pointers, and counters for the
I/O device, the device controller transfers an entire block of data directly to
or from the device and main memory, with no intervention by the CPU. Only
one interrupt is generated per block, to tell the device driver that the operation
has completed, rather than the one interrupt per byte generated for low-speed
devices. While the device controller is performing these operations, the CPU is
available to accomplish other work.
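The saving from DMA can be put in rough numbers: one interrupt per transferred byte versus one per block (a simplified count that ignores setup and completion details):

```c
#include <stdint.h>

/* Interrupt-driven I/O raises roughly one interrupt per byte transferred;
 * DMA raises one per block. Toy counts for a transfer of `bytes` bytes. */
uint64_t interrupts_per_byte(uint64_t bytes) {
    return bytes;                                   /* one per byte */
}

uint64_t interrupts_dma(uint64_t bytes, uint64_t block_size) {
    return (bytes + block_size - 1) / block_size;   /* one per block */
}
```

For a 4 KB block size, a 4 KB transfer drops from 4,096 interrupts to a single one, which is why DMA matters for bulk data movement.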
Some high-end systems use a switch rather than a bus architecture. On these
systems, multiple components can talk to other components concurrently,
rather than competing for cycles on a shared bus. In this case, DMA is even
more effective. Figure 1.7 shows the interplay of all components of a computer
system.

1.3 Computer-System Architecture

In Section 1.2, we introduced the general structure of a typical computer sys-
tem. A computer system can be organized in a number of different ways,
which we can categorize roughly according to the number of general-purpose
processors used.

1.3.1 Single-Processor Systems


Many years ago, most computer systems used a single processor containing
one CPU with a single processing core. The core is the component that exe-
cutes instructions and registers for storing data locally. The one main CPU with
its core is capable of executing a general-purpose instruction set, including
instructions from processes. These systems have other special-purpose proces-
sors as well. They may come in the form of device-specific processors, such as
disk, keyboard, and graphics controllers.
All of these special-purpose processors run a limited instruction set and
do not run processes. Sometimes, they are managed by the operating system,
in that the operating system sends them information about their next task and
monitors their status. For example, a disk-controller microprocessor receives
a sequence of requests from the main CPU core and implements its own disk
queue and scheduling algorithm. This arrangement relieves the main CPU of
the overhead of disk scheduling. PCs contain a microprocessor in the keyboard
to convert the keystrokes into codes to be sent to the CPU. In other systems or
circumstances, special-purpose processors are low-level components built into
the hardware. The operating system cannot communicate with these proces-
sors; they do their jobs autonomously. The use of special-purpose microproces-
sors is common and does not turn a single-processor system into a multiproces-
sor. If there is only one general-purpose CPU with a single processing core, then
the system is a single-processor system. According to this definition, however,
very few contemporary computer systems are single-processor systems.

1.3.2 Multiprocessor Systems


On modern computers, from mobile devices to servers, multiprocessor sys-
tems now dominate the landscape of computing. Traditionally, such systems
have two (or more) processors, each with a single-core CPU. The proces-
sors share the computer bus and sometimes the clock, memory, and periph-
eral devices. The primary advantage of multiprocessor systems is increased
throughput. That is, by increasing the number of processors, we expect to get
more work done in less time. The speed-up ratio with N processors is not N,
however; it is less than N. When multiple processors cooperate on a task, a cer-
tain amount of overhead is incurred in keeping all the parts working correctly.
This overhead, plus contention for shared resources, lowers the expected gain
from additional processors.
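One common way to make "less than N" concrete is an Amdahl-style estimate (a simplification that folds all coordination overhead and contention into a fixed serial fraction of the work):

```c
/* Amdahl's-law style estimate: if a fraction `serial` of the work cannot
 * be parallelized, N processors yield less than an N-fold speedup. */
double speedup(double serial, int n) {
    return 1.0 / (serial + (1.0 - serial) / n);
}
```

Even a small serial fraction caps the gain: with 5 percent serial work, 100 processors give well under a 20-fold speedup.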
The most common multiprocessor systems use symmetric multiprocess-
ing (SMP), in which each peer CPU processor performs all tasks, including
operating-system functions and user processes. Figure 1.8 illustrates a typical
SMP architecture with two processors, each with its own CPU. Notice that each
CPU processor has its own set of registers, as well as a private —or local—
cache. However, all processors share physical memory over the system bus.
The benefit of this model is that many processes can run simultaneously
— N processes can run if there are N CPUs—without causing performance
to deteriorate significantly. However, since the CPUs are separate, one may
be sitting idle while another is overloaded, resulting in inefficiencies. These
inefficiencies can be avoided if the processors share certain data structures. A
multiprocessor system of this form will allow processes and resources—such
as memory—to be shared dynamically among the various processors and can
lower the workload variance among the processors. Such a system must be
written carefully, as we shall see in Chapter 5 and Chapter 6.
The definition of multiprocessor has evolved over time and now includes
multicore systems, in which multiple computing cores reside on a single chip.
Multicore systems can be more efficient than multiple chips with single cores
because on-chip communication is faster than between-chip communication.

Figure 1.8 Symmetric multiprocessing architecture.

In addition, one chip with multiple cores uses significantly less power than
multiple single-core chips, an important issue for mobile devices as well as
laptops.
In Figure 1.9, we show a dual-core design with two cores on the same pro-
cessor chip. In this design, each core has its own register set, as well as its own
local cache, often known as a level 1, or L1, cache. Notice, too, that a level 2 (L2)
cache is local to the chip but is shared by the two processing cores. Most archi-
tectures adopt this approach, combining local and shared caches, where local,
lower-level caches are generally smaller and faster than higher-level shared
caches.

Figure 1.9 A dual-core design with two cores on the same chip.

DEFINITIONS OF COMPUTER SYSTEM COMPONENTS

• CPU — The hardware that executes instructions.
• Processor — A physical chip that contains one or more CPUs.
• Core — The basic computation unit of the CPU.
• Multicore — Including multiple computing cores on the same CPU.
• Multiprocessor — Including multiple processors.

Although virtually all systems are now multicore, we use the general term
CPU when referring to a single computational unit of a computer system and
core as well as multicore when specifically referring to one or more cores on
a CPU.

Aside from architectural considerations, such as cache, memory, and
bus contention, a multicore processor with N cores appears to the operating sys-
tem as N standard CPUs. This characteristic puts pressure on operating-system
designers—and application programmers—to make efficient use of these pro-
cessing cores, an issue we pursue in Chapter 4. Virtually all modern operating
systems—including Windows, macOS, and Linux, as well as Android and iOS
mobile systems—support multicore SMP systems.
Adding additional CPUs to a multiprocessor system will increase comput-
ing power; however, as suggested earlier, the concept does not scale very well,
and once we add too many CPUs, contention for the system bus becomes a
bottleneck and performance begins to degrade. An alternative approach is
instead to provide each CPU (or group of CPUs) with its own local memory
that is accessed via a small, fast local bus. The CPUs are connected by a shared
system interconnect, so that all CPUs share one physical address space. This
approach—known as non-uniform memory access, or NUMA —is illustrated
in Figure 1.10. The advantage is that, when a CPU accesses its local memory,
not only is it fast, but there is also no contention over the system interconnect.
Thus, NUMA systems can scale more effectively as more processors are added.
A potential drawback with a NUMA system is increased latency when a CPU
must access remote memory across the system interconnect, creating a possible
performance penalty. In other words, for example, CPU0 cannot access the local
memory of CPU3 as quickly as it can access its own local memory, slowing down
performance. Operating systems can minimize this NUMA penalty through
careful CPU scheduling and memory management, as discussed in Section 5.5.2
and Section 10.5.4. Because NUMA systems can scale to accommodate a large
number of processors, they are becoming increasingly popular on servers as
well as high-performance computing systems.
Finally, blade servers are systems in which multiple processor boards, I/O
boards, and networking boards are placed in the same chassis. The differ-
ence between these and traditional multiprocessor systems is that each blade-
processor board boots independently and runs its own operating system. Some
blade-server boards are multiprocessor as well, which blurs the lines between

[Figure: four CPUs (CPU 0 through CPU 3), each with its own local memory
(memory0 through memory3), joined by a shared system interconnect.]

Figure 1.10 NUMA multiprocessing architecture.

types of computers. In essence, these servers consist of multiple independent
multiprocessor systems.

1.3.3 Clustered Systems


Another type of multiprocessor system is a clustered system, which gath-
ers together multiple CPUs. Clustered systems differ from the multiprocessor
systems described in Section 1.3.2 in that they are composed of two or more
individual systems—or nodes—joined together; each node is typically a mul-
ticore system. Such systems are considered loosely coupled. We should note
that the definition of clustered is not concrete; many commercial and open-
source packages wrestle to define what a clustered system is and why one
form is better than another. The generally accepted definition is that clustered
computers share storage and are closely linked via a local-area network (LAN),
as described in Chapter 19, or a faster interconnect, such as InfiniBand.
Clustering is usually used to provide high-availability service—that is,
service that will continue even if one or more systems in the cluster fail.
Generally, we obtain high availability by adding a level of redundancy in the
system. A layer of cluster software runs on the cluster nodes. Each node can
monitor one or more of the others (over the network). If the monitored machine
fails, the monitoring machine can take ownership of its storage and restart the
applications that were running on the failed machine. The users and clients of
the applications see only a brief interruption of service.
High availability provides increased reliability, which is crucial in many
applications. The ability to continue providing service proportional to the level
of surviving hardware is called graceful degradation. Some systems go beyond
graceful degradation and are called fault tolerant, because they can suffer a
failure of any single component and still continue operation. Fault tolerance
requires a mechanism to allow the failure to be detected, diagnosed, and, if
possible, corrected.
Clustering can be structured asymmetrically or symmetrically. In asym-
metric clustering, one machine is in hot-standby mode while the other is run-
ning the applications. The hot-standby host machine does nothing but monitor
the active server. If that server fails, the hot-standby host becomes the active

PC MOTHERBOARD

Consider the desktop PC motherboard with a processor socket shown below:

This board is a fully functioning computer, once its slots are populated.
It consists of a processor socket containing a CPU, DRAM sockets, PCIe bus
slots, and I/O connectors of various types. Even the lowest-cost general-
purpose CPU contains multiple cores. Some motherboards contain multiple
processor sockets. More advanced computers allow more than one system
board, creating NUMA systems.

server. In symmetric clustering, two or more hosts are running applications
and are monitoring each other. This structure is obviously more efficient, as it
uses all of the available hardware. However, it does require that more than one
application be available to run.
Since a cluster consists of several computer systems connected via a net-
work, clusters can also be used to provide high-performance computing envi-
ronments. Such systems can supply significantly greater computational power
than single-processor or even SMP systems because they can run an application
concurrently on all computers in the cluster. The application must have been
written specifically to take advantage of the cluster, however. This involves a
technique known as parallelization, which divides a program into separate
components that run in parallel on individual cores in a computer or comput-
ers in a cluster. Typically, these applications are designed so that once each
computing node in the cluster has solved its portion of the problem, the results
from all the nodes are combined into a final solution.
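The divide-then-combine pattern can be sketched sequentially in C: each "node" computes a partial result over its chunk, and the results are combined into a final answer. In a real cluster the chunks would of course run concurrently on separate machines:

```c
#include <stddef.h>

/* Partial result for one "node": the sum of data[lo..hi). */
long partial_sum(const int *data, size_t lo, size_t hi) {
    long s = 0;
    for (size_t i = lo; i < hi; i++)
        s += data[i];
    return s;
}

/* Parallelization sketch: split the input among `nodes` workers,
 * then combine the per-node results into a final solution. */
long cluster_sum(const int *data, size_t n, size_t nodes) {
    long total = 0;
    size_t chunk = (n + nodes - 1) / nodes;        /* ceiling division */
    for (size_t node = 0; node < nodes; node++) {  /* one chunk per node */
        size_t lo = node * chunk;
        size_t hi = lo + chunk < n ? lo + chunk : n;
        if (lo < n)
            total += partial_sum(data, lo, hi);    /* combine the results */
    }
    return total;
}
```

The same structure underlies frameworks such as MapReduce: independent partial computations followed by a combining step.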
Other forms of clusters include parallel clusters and clustering over a
wide-area network (WAN) (as described in Chapter 19). Parallel clusters allow
multiple hosts to access the same data on shared storage. Because most oper-

[Figure: several computers, each with its own interconnect, attached to a
shared storage-area network.]

Figure 1.11 General structure of a clustered system.

ating systems lack support for simultaneous data access by multiple hosts,
parallel clusters usually require the use of special versions of software and
special releases of applications. For example, Oracle Real Application Cluster
is a version of Oracle’s database that has been designed to run on a parallel
cluster. Each machine runs Oracle, and a layer of software tracks access to the
shared disk. Each machine has full access to all data in the database. To provide
this shared access, the system must also supply access control and locking to
ensure that no conflicting operations occur. This function, commonly known
as a distributed lock manager (DLM), is included in some cluster technology.
Cluster technology is changing rapidly. Some cluster products support
thousands of systems in a cluster, as well as clustered nodes that are separated
by miles. Many of these improvements are made possible by storage-area
networks (SANs), as described in Section 11.7.4, which allow many systems
to attach to a pool of storage. If the applications and their data are stored on
the SAN, then the cluster software can assign the application to run on any
host that is attached to the SAN. If the host fails, then any other host can take
over. In a database cluster, dozens of hosts can share the same database, greatly
increasing performance and reliability. Figure 1.11 depicts the general structure
of a clustered system.

1.4 Operating-System Operations

Now that we have discussed basic information about computer-system organi-
zation and architecture, we are ready to talk about operating systems. An oper-
ating system provides the environment within which programs are executed.
Internally, operating systems vary greatly, since they are organized along many
different lines. There are, however, many commonalities, which we consider in
this section.
For a computer to start running—for instance, when it is powered up
or rebooted —it needs to have an initial program to run. As noted earlier,
this initial program, or bootstrap program, tends to be simple. Typically, it is
stored within the computer hardware in firmware. It initializes all aspects of
the system, from CPU registers to device controllers to memory contents. The
bootstrap program must know how to load the operating system and how to

HADOOP

Hadoop is an open-source software framework that is used for distributed
processing of large data sets (known as big data) in a clustered system con-
taining simple, low-cost hardware components. Hadoop is designed to scale
from a single system to a cluster containing thousands of computing nodes.
Tasks are assigned to a node in the cluster, and Hadoop arranges communica-
tion between nodes to manage parallel computations to process and coalesce
results. Hadoop also detects and manages failures in nodes, providing an
efficient and highly reliable distributed computing service.
Hadoop is organized around the following three components:

1. A distributed file system that manages data and files across distributed computing nodes.
2. The YARN (“Yet Another Resource Negotiator”) framework, which manages
resources within the cluster as well as scheduling tasks on nodes in the
cluster.
3. The MapReduce system, which allows parallel processing of data across
nodes in the cluster.

Hadoop is designed to run on Linux systems, and Hadoop applications can be written using several programming languages, including scripting languages such as PHP, Perl, and Python. Java is a popular choice for developing Hadoop applications, as Hadoop has several Java libraries that support MapReduce. More information on MapReduce and Hadoop can be found at https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html and https://hadoop.apache.org.
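The MapReduce model described in the box above can be sketched in a few lines of plain Python. This is an illustration of the programming model only, not Hadoop's actual Java API: each input record is mapped to (key, value) pairs, the pairs are grouped by key (the "shuffle"), and each group is reduced to a result. In a real cluster, Hadoop runs these phases in parallel across many nodes.

```python
# Minimal single-process illustration of the MapReduce model,
# using the classic word-count example.
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values emitted under the same key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a final result.
    return {word: sum(counts) for word, counts in groups.items()}

data = ["the cat sat", "the cat ran"]
counts = reduce_phase(shuffle(map_phase(data)))
print(counts)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```

Because the map and reduce functions are independent of where their inputs live, the framework is free to assign them to any node in the cluster, which is exactly what makes the model scale.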

start executing that system. To accomplish this goal, the bootstrap program
must locate the operating-system kernel and load it into memory.
Once the kernel is loaded and executing, it can start providing services to
the system and its users. Some services are provided outside of the kernel by
system programs that are loaded into memory at boot time to become system
daemons, which run the entire time the kernel is running. On Linux, the first
system program is “systemd,” and it starts many other daemons. Once this
phase is complete, the system is fully booted, and the system waits for some
event to occur.
If there are no processes to execute, no I/O devices to service, and no users
to whom to respond, an operating system will sit quietly, waiting for something
to happen. Events are almost always signaled by the occurrence of an interrupt.
In Section 1.2.1 we described hardware interrupts. Another form of interrupt is
a trap (or an exception), which is a software-generated interrupt caused either
by an error (for example, division by zero or invalid memory access) or by
a specific request from a user program that an operating-system service be
performed by executing a special operation called a system call.
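Both forms of kernel entry can be glimpsed from a high-level language. The sketch below is an analogy, not a literal demonstration: Python detects integer division by zero with a software check rather than a hardware trap, but the control transfer it models — an error interrupting normal flow and being handled — is the same shape. The second part uses `os.write`, a thin wrapper around the `write()` system call, to show a deliberate request for operating-system service.

```python
# Two ways a running program ends up asking the kernel for help,
# sketched from Python.
import os

# 1. An error condition. On real hardware, dividing by zero traps to
#    the OS, which may deliver the error back to the process; Python
#    surfaces such errors as exceptions (here via a software check).
try:
    1 / 0
except ZeroDivisionError:
    result = "error handled"

# 2. A deliberate request for service. os.write is a thin wrapper
#    around the write() system call: the kernel performs the I/O on
#    behalf of the process. File descriptor 1 is standard output.
n = os.write(1, b"hello from a system call\n")
```

The return value `n` is the number of bytes the kernel reports having written, exactly as the underlying system call defines it.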