OPERATING
SYSTEM
CONCEPTS
TENTH EDITION

ABRAHAM SILBERSCHATZ
Yale University
GREG GAGNE
Westminster College
Publisher Laurie Rosatone
Editorial Director Don Fowley
Development Editor Ryann Dannelly
Freelance Developmental Editor Chris Nelson/Factotum
Executive Marketing Manager Glenn Wilson
Senior Content Manager Valerie Zaborski
Senior Production Editor Ken Santor
Media Specialist Ashley Patterson
Editorial Assistant Anna Pham
Cover Designer Tom Nery
Cover art © metha189/Shutterstock
This book was set in Palatino by the author using LaTeX and printed and bound by LSC Kendallville.
The cover was printed by LSC Kendallville.
Copyright © 2018, 2013, 2012, 2008 John Wiley & Sons, Inc. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by
any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted
under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written
permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the
Copyright Clearance Center, Inc. 222 Rosewood Drive, Danvers, MA 01923, (978)750-8400, fax
(978)750-4470. Requests to the Publisher for permission should be addressed to the Permissions
Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030 (201)748-6011, fax (201)748-
6008, E-Mail: [email protected].
Evaluation copies are provided to qualified academics and professionals for review purposes only, for use
in their courses during the next academic year. These copies are licensed and may not be sold or
transferred to a third party. Upon completion of the review period, please return the evaluation copy to
Wiley. Return instructions and a free-of-charge return shipping label are available at
www.wiley.com/go/evalreturn. Outside of the United States, please contact your local representative.
The inside back cover will contain printing identification and country of origin if omitted from this page. In
addition, if the ISBN on the back cover differs from the ISBN on this page, the one on the back cover is
correct.
Enhanced ePub ISBN 978-1-119-32091-3
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
To my children, Lemor, Sivan, and Aaron
and my Nicolette
Avi Silberschatz
To my wife, Carla,
and my children, Gwen, Owen, and Maddie
Peter Baer Galvin
To my wife, Pat,
and our sons, Tom and Jay
Greg Gagne
Preface
Operating systems are an essential part of any computer system. Similarly, a
course on operating systems is an essential part of any computer science edu-
cation. This field is undergoing rapid change, as computers are now prevalent
in virtually every arena of day-to-day life—from embedded devices in auto-
mobiles through the most sophisticated planning tools for governments and
multinational firms. Yet the fundamental concepts remain fairly clear, and it is
on these that we base this book.
We wrote this book as a text for an introductory course in operating sys-
tems at the junior or senior undergraduate level or at the first-year graduate
level. We hope that practitioners will also find it useful. It provides a clear
description of the concepts that underlie operating systems. As prerequisites,
we assume that the reader is familiar with basic data structures, computer
organization, and a high-level language, such as C or Java. The hardware topics
required for an understanding of operating systems are covered in Chapter 1.
In that chapter, we also include an overview of the fundamental data structures
that are prevalent in most operating systems. For code examples, we use pre-
dominantly C, as well as a significant amount of Java, but the reader can still
understand the algorithms without a thorough knowledge of these languages.
Concepts are presented using intuitive descriptions. Important theoretical
results are covered, but formal proofs are largely omitted. The bibliographical
notes at the end of each chapter contain pointers to research papers in which
results were first presented and proved, as well as references to recent material
for further reading. In place of proofs, figures and examples are used to suggest
why we should expect the result in question to be true.
The fundamental concepts and algorithms covered in the book are often
based on those used in both open-source and commercial operating systems.
Our aim is to present these concepts and algorithms in a general setting that
is not tied to one particular operating system. However, we present a large
number of examples that pertain to the most popular and the most innovative
operating systems, including Linux, Microsoft Windows, Apple macOS (the
original name, OS X, was changed in 2016 to match the naming scheme of other
Apple products), and Solaris. We also include examples of both Android and
iOS, currently the two dominant mobile operating systems.
The organization of the text reflects our many years of teaching courses
on operating systems. Consideration was also given to the feedback provided
by the reviewers of the text, along with the many comments and suggestions
we received from readers of our previous editions and from our current and
former students. This Tenth Edition also reflects most of the curriculum guide-
lines in the operating-systems area in Computer Science Curricula 2013, the most
recent curriculum guidelines for undergraduate degree programs in computer
science published by the IEEE Computer Society and the Association for Com-
puting Machinery (ACM).
Book Material
The book consists of 21 chapters and 4 appendices. Each chapter and appendix
contains the text, as well as several enhancements.
A hard copy of the text is available in book stores and online. That version has
the same text chapters as the electronic version. It does not, however, include
the appendices, the regular exercises, the solutions to the practice exercises,
the programming problems, the programming projects, and some of the other
enhancements found in this ePub electronic book.
• Memory management. Chapters 9 and 10 deal with the management of
main memory during the execution of a process. To improve both the
utilization of the CPU and the speed of its response to its users, the
computer must keep several processes in memory. There are many different
memory-management schemes, reflecting various approaches to memory
management, and the effectiveness of a particular algorithm depends on
the situation.
• Storage management. Chapters 11 and 12 describe how mass storage and
I/O are handled in a modern computer system. The I/O devices that attach
to a computer vary widely, and the operating system needs to provide a
wide range of functionality to applications to allow them to control all
aspects of these devices. We discuss system I/O in depth, including I/O
system design, interfaces, and internal system structures and functions.
In many ways, I/O devices are the slowest major components of the com-
puter. Because they represent a performance bottleneck, we also examine
performance issues associated with I/O devices.
• File systems. Chapters 13 through 15 discuss how file systems are handled
in a modern computer system. File systems provide the mechanism for on-
line storage of and access to both data and programs. We describe the clas-
sic internal algorithms and structures of storage management and provide
a firm practical understanding of the algorithms used—their properties,
advantages, and disadvantages.
• Security and protection. Chapters 16 and 17 discuss the mechanisms nec-
essary for the security and protection of computer systems. The processes
in an operating system must be protected from one another’s activities.
To provide such protection, we must ensure that only processes that have
gained proper authorization from the operating system can operate on
the files, memory, CPU, and other resources of the system. Protection is
a mechanism for controlling the access of programs, processes, or users
to computer-system resources. This mechanism must provide a means
of specifying the controls to be imposed, as well as a means of enforce-
ment. Security protects the integrity of the information stored in the system
(both data and code), as well as the physical resources of the system, from
unauthorized access, malicious destruction or alteration, and accidental
introduction of inconsistency.
• Advanced topics. Chapters 18 and 19 discuss virtual machines and
networks/distributed systems. Chapter 18 provides an overview of
virtual machines and their relationship to contemporary operating
systems. Included is a general description of the hardware and software
techniques that make virtualization possible. Chapter 19 provides an
overview of computer networks and distributed systems, with a focus on
the Internet and TCP/IP.
• Case studies. Chapters 20 and 21 present detailed case studies of two real
operating systems—Linux and Windows 10.
• Appendices. Appendix A discusses several old influential operating sys-
tems that are no longer in use. Appendices B through D cover in great
detail three older operating systems—Windows 7, BSD, and Mach.
Programming Environments
The text provides several example programs written in C and Java. These
programs are intended to run in the following programming environments:
• POSIX. POSIX (which stands for Portable Operating System Interface) repre-
sents a set of standards implemented primarily for UNIX-based operat-
ing systems. Although Windows systems can also run certain POSIX pro-
grams, our coverage of POSIX focuses on Linux and UNIX systems. POSIX-
compliant systems must implement the POSIX core standard (POSIX.1);
Linux and macOS are examples of POSIX-compliant systems. POSIX also
defines several extensions to the standards, including real-time extensions
(POSIX.1b) and an extension for a threads library (POSIX.1c, better known
as Pthreads). We provide several programming examples written in C
illustrating the POSIX base API, as well as Pthreads and the extensions for
real-time programming. These example programs were tested on Linux 4.4
and macOS 10.11 systems using the gcc compiler. (A brief Pthreads sketch
appears after this list.)
• Java. Java is a widely used programming language with a rich API and
built-in language support for concurrent and parallel programming. Java
programs run on any operating system supporting a Java virtual machine
(or JVM). We illustrate various operating-system and networking concepts
with Java programs tested using Version 1.8 of the Java Development Kit
(JDK).
• Windows systems. The primary programming environment for Windows
systems is the Windows API, which provides a comprehensive set of func-
tions for managing processes, threads, memory, and peripheral devices.
We supply a modest number of C programs illustrating the use of this API.
Programs were tested on a system running Windows 10.
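
To make the Pthreads environment described in the POSIX bullet above concrete, here is a minimal sketch of a C program that creates a single thread and waits for it to finish. It is illustrative only: the worker function and its argument are invented for this example, and error handling is kept to a minimum.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical worker: sums the integers 1..n passed in by the creating thread. */
static void *runner(void *param) {
    int n = *(int *)param;
    long sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    printf("sum of 1..%d = %ld\n", n, sum);
    pthread_exit(NULL);                 /* terminate this thread */
}

int main(void) {
    pthread_t tid;                      /* identifier of the new thread */
    int n = 10;

    /* Create the thread with default attributes; it begins execution in runner(). */
    if (pthread_create(&tid, NULL, runner, &n) != 0) {
        perror("pthread_create");
        return EXIT_FAILURE;
    }

    pthread_join(tid, NULL);            /* wait for the thread to terminate */
    return 0;
}
```

On a Linux system, such a program would typically be compiled with gcc -pthread.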
Major Changes
The Tenth Edition update encompasses much more material than previous
updates, in terms of both content and new supporting material. Next, we
provide a brief outline of the major content changes in each chapter:
• Chapter 2: Operating-System Structures includes new coverage of the
system boot process with a focus on GRUB for Linux
systems. New coverage of the Windows subsystem for Linux is included
as well. We have added new sections on linkers and loaders, and we now
discuss why applications are often operating-system specific. Finally, we
have added a discussion of the BCC debugging toolset.
• Chapter 3: Processes simplifies the discussion of scheduling so that it
now includes only CPU scheduling issues. New coverage describes the
memory layout of a C program, the Android process hierarchy, Mach
message passing, and Android RPCs. We have also replaced coverage of
the traditional UNIX/Linux init process with coverage of systemd.
• Chapter 4: Threads and Concurrency (previously Threads) increases the
coverage of support for concurrent and parallel programming at the API
and library level. We have revised the section on Java threads so that it
now includes futures and have updated the coverage of Apple’s Grand
Central Dispatch so that it now includes Swift. New sections discuss fork-
join parallelism using the fork-join framework in Java, as well as Intel
thread building blocks.
• Chapter 5: CPU Scheduling (previously Chapter 6) revises the coverage of
multilevel queue and multicore processing scheduling. We have integrated
coverage of NUMA-aware scheduling issues throughout, including how
this scheduling affects load balancing. We also discuss related modifica-
tions to the Linux CFS scheduler. New coverage combines discussions of
round-robin and priority scheduling, heterogeneous multiprocessing, and
Windows 10 scheduling.
• Chapter 6: Synchronization Tools (previously part of Chapter 5, Process
Synchronization) focuses on various tools for synchronizing processes.
Significant new coverage discusses architectural issues such as instruction
reordering and delayed writes to buffers. The chapter also introduces lock-
free algorithms using compare-and-swap (CAS) instructions. No specific
APIs are presented; rather, the chapter provides an introduction to race
conditions and general tools that can be used to prevent data races. Details
include new coverage of memory models, memory barriers, and liveness
issues.
• Chapter 7: Synchronization Examples (previously part of Chapter 5,
Process Synchronization) introduces classical synchronization problems
and discusses specific API support for designing solutions that solve
these problems. The chapter includes new coverage of POSIX named and
unnamed semaphores, as well as condition variables. A new section on
Java synchronization is included as well.
• Chapter 8: Deadlocks (previously Chapter 7) provides minor updates,
including a new section on livelock and a discussion of deadlock as an
example of a liveness hazard. The chapter includes new coverage of the
Linux lockdep and the BCC deadlock detector tools, as well as coverage
of Java deadlock detection using thread dumps.
• Chapter 9: Main Memory (previously Chapter 8) includes several revi-
sions that bring the chapter up to date with respect to memory manage-
refers to the Bell–LaPadula model and explores the ARM model of Trust-
Zones and Secure Monitor Calls. Coverage of the need-to-know principle
has been expanded, as has coverage of mandatory access control. Subsec-
tions on Linux capabilities, Darwin entitlements, security integrity protec-
tion, system-call filtering, sandboxing, and code signing have been added.
Coverage of run-time-based enforcement in Java has also been added,
including the stack inspection technique.
• Chapter 18: Virtual Machines (previously Chapter 16) includes added
details about hardware assistance technologies. Also expanded is the
topic of application containment, now including containers, zones, docker,
and Kubernetes. A new section discusses ongoing virtualization research,
including unikernels, library operating systems, partitioning hypervisors,
and separation hypervisors.
• Chapter 19, Networks and Distributed Systems (previously Chapter 17)
has been substantially updated and now combines coverage of computer
networks and distributed systems. The material has been revised to bring
it up to date with respect to contemporary computer networks and dis-
tributed systems. The TCP/IP model receives added emphasis, and a dis-
cussion of cloud storage has been added. The section on network topolo-
gies has been removed. Coverage of name resolution has been expanded
and a Java example added. The chapter also includes new coverage of dis-
tributed file systems, including MapReduce on top of Google file system,
Hadoop, GPFS, and Lustre.
• Chapter 20: The Linux System (previously Chapter 18) has been updated
to cover the Linux 4.i kernel.
• Chapter 21: The Windows 10 System is a new chapter that covers the
internals of Windows 10.
• Appendix A: Influential Operating Systems has been updated to include
material from chapters that are no longer covered in the text.
Supporting Website
• Errata
• Bibliography
Notes to Instructors
On the website for this text, we provide several sample syllabi that suggest var-
ious approaches for using the text in both introductory and advanced courses.
Notes to Students
We encourage you to take advantage of the practice exercises that appear at the
end of each chapter. We also encourage you to read through the study guide,
which was prepared by one of our students. Finally, for students who are unfa-
miliar with UNIX and Linux systems, we recommend that you download and
install the Linux virtual machine that we include on the supporting website.
Not only will this provide you with a new computing experience, but the open-
source nature of Linux will allow you to easily examine the inner details of this
popular operating system. We wish you the very best of luck in your study of
operating systems!
Contacting Us
We have endeavored to eliminate typos, bugs, and the like from the text. But,
as in new releases of software, bugs almost surely remain. An up-to-date errata
list is accessible from the book’s website. We would be grateful if you would
notify us of any errors or omissions in the book that are not on the current list
of errata.
We would be glad to receive suggestions on improvements to the book.
We also welcome any contributions to the book website that could be of use
to other readers, such as programming exercises, project suggestions, on-line
labs and tutorials, and teaching tips. E-mail should be addressed to
[email protected].
Acknowledgments
Many people have helped us with this Tenth Edition, as well as with the
previous nine editions from which it is derived.
Tenth Edition
• Rick Farrow provided expert advice as a technical editor.
• Jonathan Levin helped out with coverage of mobile systems, protection,
and security.
• Alex Ionescu updated the previous Windows 7 chapter to provide Chapter
21: Windows 10.
• Sarah Diesburg revised Chapter 19: Networks and Distributed Systems.
• Brendan Gregg provided guidance on the BCC toolset.
• Richard Stallman (RMS) supplied feedback on the description of free and
open-source software.
• Robert Love provided updates to Chapter 20: The Linux System.
• Michael Shapiro helped with storage and I/O technology details.
• Richard West provided insight on areas of virtualization research.
• Clay Breshears helped with coverage of Intel thread-building blocks.
• Gerry Howser gave feedback on motivating the study of operating systems
and also tried out new material in his class.
• Judi Paige helped with generating figures and presentation of slides.
• Jay Gagne and Audra Rissmeyer prepared new artwork for this edition.
• Owen Galvin provided technical editing for Chapter 11 and Chapter 12.
• Mark Wogahn has made sure that the software to produce this book (LaTeX
and fonts) works properly.
• Ranjan Kumar Meher rewrote some of the LaTeX software used in the pro-
duction of this new text.
Previous Editions
• First three editions. This book is derived from the previous editions, the
first three of which were coauthored by James Peterson.
• General contributions. Others who helped us with previous editions
include Hamid Arabnia, Rida Bazzi, Randy Bentson, David Black, Joseph
Boykin, Jeff Brumfield, Gael Buckley, Roy Campbell, P. C. Capon, John
Carpenter, Gil Carrick, Thomas Casavant, Bart Childs, Ajoy Kumar Datta,
Joe Deck, Sudarshan K. Dhall, Thomas Doeppner, Caleb Drake, M. Rasit
Eskicioğlu, Hans Flack, Robert Fowler, G. Scott Graham, Richard Guy,
Max Hailperin, Rebecca Hartman, Wayne Hathaway, Christopher Haynes,
Don Heller, Bruce Hillyer, Mark Holliday, Dean Hougen, Michael Huang,
Ahmed Kamel, Morty Kewstel, Richard Kieburtz, Carol Kroll, Morty
Kwestel, Thomas LeBlanc, John Leggett, Jerrold Leichter, Ted Leung, Gary
Lippman, Carolyn Miller, Michael Molloy, Euripides Montagne, Yoichi
Muraoka, Jim M. Ng, Banu Özden, Ed Posnak, Boris Putanec, Charles
Book Production
The Executive Editor was Don Fowley. The Senior Production Editor was Ken
Santor. The Freelance Developmental Editor was Chris Nelson. The Assistant
Developmental Editor was Ryann Dannelly. The cover designer was Tom Nery.
The copyeditor was Beverly Peavler. The freelance proofreader was Katrina
Avery. The freelance indexer was WordCo, Inc. The Aptara LaTeX team con-
sisted of Neeraj Saxena and Lav kush.
Personal Notes
Avi would like to acknowledge Valerie for her love, patience, and support
during the revision of this book.
Peter would like to thank his wife Carla and his children, Gwen, Owen,
and Maddie.
Greg would like to acknowledge the continued support of his family: his
wife Pat and sons Thomas and Jay.
Chapter 8 Deadlocks
8.1 System Model
8.2 Deadlock in Multithreaded Applications
8.3 Deadlock Characterization
8.4 Methods for Handling Deadlocks
8.5 Deadlock Prevention
8.6 Deadlock Avoidance
8.7 Deadlock Detection
8.8 Recovery from Deadlock
8.9 Summary
Practice Exercises
Further Reading
Chapter 17 Protection
17.1 Goals of Protection
17.2 Principles of Protection
17.3 Protection Rings
17.4 Domain of Protection
17.5 Access Matrix
17.6 Implementation of the Access Matrix
17.7 Revocation of Access Rights
17.8 Role-Based Access Control
17.9 Mandatory Access Control (MAC)
17.10 Capability-Based Systems
17.11 Other Protection Improvement Methods
17.12 Language-Based Protection
17.13 Summary
Further Reading
Chapter 21 Windows 10
21.1 History
21.2 Design Principles
21.3 System Components
21.4 Terminal Services and Fast User Switching
21.5 File System
21.6 Networking
21.7 Programmer Interface
21.8 Summary
Practice Exercises
Further Reading
Appendix B Windows 7
B.1 History
B.2 Design Principles
B.3 System Components
B.4 Terminal Services and Fast User Switching
B.5 File System
B.6 Networking
B.7 Programmer Interface
B.8 Summary
Practice Exercises
Further Reading
Credits
Index
Part One
Overview
An operating system acts as an intermediary between the user of a com-
puter and the computer hardware. The purpose of an operating system
is to provide an environment in which a user can execute programs in a
convenient and efficient manner.
An operating system is software that manages the computer hard-
ware. The hardware must provide appropriate mechanisms to ensure the
correct operation of the computer system and to prevent programs from
interfering with the proper operation of the system.
Internally, operating systems vary greatly in their makeup, since they
are organized along many different lines. The design of a new operating
system is a major task, and it is important that the goals of the system be
well defined before the design begins.
Because an operating system is large and complex, it must be cre-
ated piece by piece. Each of these pieces should be a well-delineated
portion of the system, with carefully defined inputs, outputs, and func-
tions.
CHAPTER 1
Introduction
CHAPTER OBJECTIVES
[Figure: abstract view of the components of a computer system. From top to bottom: the user, application programs (compilers, web browsers, development kits, etc.), the operating system, and the computer hardware (CPU, memory, I/O devices, etc.)]
Operating systems exist because they offer a reasonable way to solve the problem of creating
a usable computing system. The fundamental goal of computer systems is
to execute programs and to make solving user problems easier. Computer
hardware is constructed toward this goal. Since bare hardware alone is not
particularly easy to use, application programs are developed. These programs
require certain common operations, such as those controlling the I/O devices.
The common functions of controlling and allocating resources are then brought
together into one piece of software: the operating system.
In addition, we have no universally accepted definition of what is part of
the operating system. A simple viewpoint is that it includes everything a ven-
dor ships when you order “the operating system.” The features included, how-
ever, vary greatly across systems. Some systems take up less than a megabyte
of space and lack even a full-screen editor, whereas others require gigabytes
of space and are based entirely on graphical windowing systems. A more com-
mon definition, and the one that we usually follow, is that the operating system
is the one program running at all times on the computer—usually called the
kernel. Along with the kernel, there are two other types of programs: system
programs, which are associated with the operating system but are not neces-
sarily part of the kernel, and application programs, which include all programs
not associated with the operation of the system.
The matter of what constitutes an operating system became increasingly
important as personal computers became more widespread and operating sys-
tems grew increasingly sophisticated. In 1998, the United States Department of
Justice filed suit against Microsoft, in essence claiming that Microsoft included
too much functionality in its operating systems and thus prevented application
vendors from competing. (For example, a web browser was an integral part of
Microsoft’s operating systems.) As a result, Microsoft was found guilty of using
its operating-system monopoly to limit competition.
Today, however, if we look at operating systems for mobile devices, we
see that once again the number of features constituting the operating system
is increasing. Mobile operating systems often include not only a core kernel
but also middleware—a set of software frameworks that provide additional
services to application developers. For example, each of the two most promi-
nent mobile operating systems—Apple’s iOS and Google’s Android —features
Although there are many practitioners of computer science, only a small per-
centage of them will be involved in the creation or modification of an operat-
ing system. Why, then, study operating systems and how they work? Simply
because, as almost all code runs on top of an operating system, knowledge
of how operating systems work is crucial to proper, efficient, effective, and
secure programming. Understanding the fundamentals of operating systems,
how they drive computer hardware, and what they provide to applications is
not only essential to those who program them but also highly useful to those
who write programs on them and use them.
1.2 Computer-System Organization 7
a core kernel along with middleware that supports databases, multimedia, and
graphics (to name only a few).
In summary, for our purposes, the operating system includes the always-
running kernel, middleware frameworks that ease application development
and provide features, and system programs that aid in managing the system
while it is running. Most of this text is concerned with the kernel of general-
purpose operating systems, but other components are discussed as needed to
fully explain operating system design and operation.
[Figure: a typical PC computer system. The CPU, memory, disk controller, USB controller, and graphics adapter are connected by the system bus.]
1.2.1 Interrupts
Consider a typical computer operation: a program performing I/O. To start an
I/O operation, the device driver loads the appropriate registers in the device
controller. The device controller, in turn, examines the contents of these reg-
isters to determine what action to take (such as “read a character from the
keyboard”). The controller starts the transfer of data from the device to its local
buffer. Once the transfer of data is complete, the device controller informs the
device driver that it has finished its operation. The device driver then gives
control to other parts of the operating system, possibly returning the data or a
pointer to the data if the operation was a read. For other operations, the device
driver returns status information such as “write completed successfully” or
“device busy”. But how does the controller inform the device driver that it has
finished its operation? This is accomplished via an interrupt.
1.2.1.1 Overview
Hardware may trigger an interrupt at any time by sending a signal to the
CPU, usually by way of the system bus. (There may be many buses within
a computer system, but the system bus is the main communications path
between the major components.) Interrupts are used for many other purposes
as well and are a key part of how operating systems and hardware interact.
When the CPU is interrupted, it stops what it is doing and immediately
transfers execution to a fixed location. The fixed location usually contains
the starting address where the service routine for the interrupt is located.
The interrupt service routine executes; on completion, the CPU resumes the
interrupted computation. A timeline of this operation is shown in Figure 1.3.
Interrupts are an important part of a computer architecture. Each computer
design has its own interrupt mechanism, but several functions are common.
The interrupt must transfer control to the appropriate interrupt service routine.
The straightforward method for managing this transfer would be to invoke
a generic routine to examine the interrupt information. The routine, in turn,
would call the interrupt-specific handler.
1.2.1.2 Implementation
The basic interrupt mechanism works as follows. The CPU hardware has a
wire called the interrupt-request line that the CPU senses after executing every
instruction. When the CPU detects that a controller has asserted a signal on
the interrupt-request line, it reads the interrupt number and jumps to the
interrupt-handler routine by using that interrupt number as an index into
the interrupt vector. It then starts execution at the address associated with
that index. The interrupt handler saves any state it will be changing during
its operation, determines the cause of the interrupt, performs the necessary
processing, performs a state restore, and executes a return from interrupt
instruction to return the CPU to the execution state prior to the interrupt. We
say that the device controller raises an interrupt by asserting a signal on the
interrupt request line, the CPU catches the interrupt and dispatches it to the
interrupt handler, and the handler clears the interrupt by servicing the device.
Figure 1.4 summarizes the interrupt-driven I/O cycle.
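
The dispatch step just described can be pictured as a table of function pointers indexed by the interrupt number. The following C program is only a schematic, user-level sketch (the vector size, the handler, and the dispatch function are invented for illustration; a real kernel performs these steps in architecture-specific code, saving and restoring CPU state around the handler), but it shows the essential indexing idea.

```c
#include <stdio.h>

#define NVECTORS 256

/* Each entry in the interrupt vector points to a service routine. */
typedef void (*interrupt_handler_t)(void);
static interrupt_handler_t interrupt_vector[NVECTORS];

/* A hypothetical handler for a keyboard controller installed at vector 33. */
static void keyboard_handler(void) {
    printf("keyboard handler: servicing the device\n");
}

/* Simulate the CPU's response to a signal on the interrupt-request line:
   use the interrupt number as an index into the vector and call the handler. */
static void dispatch_interrupt(unsigned int irq) {
    if (irq < NVECTORS && interrupt_vector[irq] != NULL)
        interrupt_vector[irq]();
    else
        printf("unexpected interrupt %u\n", irq);
}

int main(void) {
    interrupt_vector[33] = keyboard_handler;   /* install the handler        */
    dispatch_interrupt(33);                    /* device raises interrupt 33 */
    dispatch_interrupt(34);                    /* no handler installed       */
    return 0;
}
```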
The basic interrupt mechanism just described enables the CPU to respond to
an asynchronous event, as when a device controller becomes ready for service.
In a modern operating system, however, we need more sophisticated interrupt-
handling features:
1. The ability to defer interrupt handling during critical processing.
2. An efficient way to dispatch to the proper interrupt handler for a device.
3. Multilevel interrupts, so that the operating system can distinguish between
high- and low-priority interrupts and can respond with the appropriate
degree of urgency.
In modern computer hardware, these three features are provided by the CPU
and the interrupt-controller hardware.
[Figure 1.4: the interrupt-driven I/O cycle. The interrupt handler processes the data and returns from the interrupt, and the CPU resumes processing of the interrupted task.]
Most CPUs have two interrupt request lines. One is the nonmaskable
interrupt, which is reserved for events such as unrecoverable memory errors.
The second interrupt line is maskable: it can be turned off by the CPU before
the execution of critical instruction sequences that must not be interrupted. The
maskable interrupt is used by device controllers to request service.
Recall that the purpose of a vectored interrupt mechanism is to reduce the
need for a single interrupt handler to search all possible sources of interrupts
to determine which one needs service. In practice, however, computers have
more devices (and, hence, interrupt handlers) than they have address elements
in the interrupt vector. A common way to solve this problem is to use interrupt
chaining, in which each element in the interrupt vector points to the head of
a list of interrupt handlers. When an interrupt is raised, the handlers on the
corresponding list are called one by one, until one is found that can service
the request. This structure is a compromise between the overhead of a huge
interrupt table and the inefficiency of dispatching to a single interrupt handler.
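
Interrupt chaining can be sketched in the same user-level style: each vector entry heads a linked list of handlers, and each handler reports whether its device was actually the source of the interrupt. The names and structures below are invented for illustration; production kernels implement the same idea (for example, shared interrupt lines in Linux) with their own data structures.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NVECTORS 256

/* A chained handler returns true if its device raised the interrupt. */
typedef bool (*chained_handler_t)(void);

struct handler_node {
    chained_handler_t handler;
    struct handler_node *next;
};

/* Each vector entry points to the head of a list of handlers. */
static struct handler_node *chain[NVECTORS];

/* Walk the list for this vector until some handler services the request. */
static void dispatch_chained(unsigned int irq) {
    if (irq >= NVECTORS)
        return;
    for (struct handler_node *n = chain[irq]; n != NULL; n = n->next)
        if (n->handler())
            return;                            /* serviced; stop searching */
    printf("spurious interrupt %u\n", irq);
}

/* Two hypothetical devices sharing vector 40. */
static bool disk_handler(void) { printf("disk: not mine\n"); return false; }
static bool net_handler(void)  { printf("network: serviced\n"); return true; }

int main(void) {
    static struct handler_node net  = { net_handler, NULL };
    static struct handler_node disk = { disk_handler, &net };
    chain[40] = &disk;                 /* disk is checked first, then network */
    dispatch_chained(40);
    return 0;
}
```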
Figure 1.5 illustrates the design of the interrupt vector for Intel processors.
The events from 0 to 31, which are nonmaskable, are used to signal various
error conditions. The events from 32 to 255, which are maskable, are used for
purposes such as device-generated interrupts.
[Figure 1.5: the interrupt vector (event-vector table) for Intel processors]
0 divide error
1 debug exception
2 null interrupt
3 breakpoint
4 INTO-detected overflow
5 bound range exception
6 invalid opcode
7 device not available
8 double fault
9 coprocessor segment overrun (reserved)
10 invalid task state segment
11 segment not present
12 stack fault
13 general protection
14 page fault
15 (Intel reserved, do not use)
16 floating-point error
17 alignment check
18 machine check
19–31 (Intel reserved, do not use)
32–255 maskable interrupts

The interrupt mechanism also implements a system of interrupt priority
levels. These levels enable the CPU to defer the handling of low-priority
interrupts without masking all interrupts and make it possible for a
high-priority interrupt to preempt the execution of a low-priority interrupt.
In summary, interrupts are used throughout modern operating systems to
handle asynchronous events (and for other purposes we will discuss through-
out the text). Device controllers and hardware faults raise interrupts. To enable
the most urgent work to be done first, modern computers use a system of
interrupt priorities. Because interrupts are used so heavily for time-sensitive
processing, efficient interrupt handling is required for good system perfor-
mance.
The basic unit of computer storage is the bit. A bit can contain one of two
values, 0 and 1. All other storage in a computer is based on collections of bits.
Given enough bits, it is amazing how many things a computer can represent:
numbers, letters, images, movies, sounds, documents, and programs, to name
a few. A byte is 8 bits, and on most computers it is the smallest convenient
chunk of storage. For example, most computers don’t have an instruction to
move a bit but do have one to move a byte. A less common term is word,
which is a given computer architecture’s native unit of data. A word is made
up of one or more bytes. For example, a computer that has 64-bit registers and
64-bit memory addressing typically has 64-bit (8-byte) words. A computer
executes many operations in its native word size rather than a byte at a time.
Computer storage, along with most computer throughput, is generally
measured and manipulated in bytes and collections of bytes. A kilobyte, or
KB, is 1,024 bytes; a megabyte, or MB, is 1,024² bytes; a gigabyte, or GB, is
1,024³ bytes; a terabyte, or TB, is 1,024⁴ bytes; and a petabyte, or PB, is 1,024⁵
bytes. Computer manufacturers often round off these numbers and say that
a megabyte is 1 million bytes and a gigabyte is 1 billion bytes. Networking
measurements are an exception to this general rule; they are given in bits
(because networks move data a bit at a time).
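
As a quick check of this arithmetic, the short C program below computes the power-of-1,024 sizes and contrasts a binary gigabyte (1,024³ bytes) with the rounded decimal gigabyte (10⁹ bytes) that manufacturers often quote; the difference is roughly 7 percent.

```c
#include <stdio.h>

int main(void) {
    unsigned long long kb = 1024ULL;       /* kilobyte: 1,024 bytes   */
    unsigned long long mb = kb * 1024;     /* megabyte: 1,024^2 bytes */
    unsigned long long gb = mb * 1024;     /* gigabyte: 1,024^3 bytes */
    unsigned long long tb = gb * 1024;     /* terabyte: 1,024^4 bytes */

    printf("KB = %llu bytes\n", kb);
    printf("MB = %llu bytes\n", mb);
    printf("GB = %llu bytes\n", gb);
    printf("TB = %llu bytes\n", tb);

    /* A "1 GB" device quoted in decimal units holds only 10^9 bytes. */
    unsigned long long decimal_gb = 1000000000ULL;
    printf("a binary GB exceeds a decimal GB by %.1f%%\n",
           100.0 * ((double)gb - (double)decimal_gb) / (double)decimal_gb);
    return 0;
}
```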
1. Main memory is usually too small to store all needed programs and data
permanently.
2. Main memory, as mentioned, is volatile —it loses its contents when power
is turned off or otherwise lost.
Thus, most computer systems provide secondary storage as an extension of
main memory. The main requirement for secondary storage is that it be able to
hold large quantities of data permanently.
The most common secondary-storage devices are hard-disk drives (HDDs)
and nonvolatile memory (NVM) devices, which provide storage for both
programs and data. Most programs (system and application) are stored in
secondary storage until they are loaded into memory. Many programs then use
secondary storage as both the source and the destination of their processing.
Secondary storage is also much slower than main memory. Hence, the proper
management of secondary storage is of central importance to a computer sys-
tem, as we discuss in Chapter 11.
In a larger sense, however, the storage structure that we have described
—consisting of registers, main memory, and secondary storage—is only one
of many possible storage system designs. Other possible components include
cache memory, CD-ROM or blu-ray, magnetic tapes, and so on. Those that are
slow enough and large enough that they are used only for special purposes
—to store backup copies of material stored on other devices, for example —
are called tertiary storage. Each storage system provides the basic functions
of storing a datum and holding that datum until it is retrieved at a later time.
The main differences among the various storage systems lie in speed, size, and
volatility.
The wide variety of storage systems can be organized in a hierarchy (Figure
1.6) according to storage capacity and access time. As a general rule, there is a
trade-off between size and speed, with smaller and faster memory closer to the
CPU. As shown in the figure, in addition to differing in speed and capacity, the
various storage systems are either volatile or nonvolatile. Volatile storage, as
mentioned earlier, loses its contents when the power to the device is removed,
so data must be written to nonvolatile storage for safekeeping.

[Figure 1.6: the storage-device hierarchy. Primary storage (volatile): cache and main memory. Secondary storage (nonvolatile): nonvolatile memory and hard-disk drives. Tertiary storage: optical disks and magnetic tape. Devices become slower and larger moving down the hierarchy.]
The top four levels of memory in the figure are constructed using semi-
conductor memory, which consists of semiconductor-based electronic circuits.
NVM devices, at the fourth level, have several variants but in general are faster
than hard disks. The most common form of NVM device is flash memory, which
is popular in mobile devices such as smartphones and tablets. Increasingly,
flash memory is being used for long-term storage on laptops, desktops, and
servers as well.
Since storage plays an important role in operating-system structure, we
will refer to it frequently in the text. In general, we will use the following
terminology:
The design of a complete storage system must balance all the factors just
discussed: it must use only as much expensive memory as necessary while
providing as much inexpensive, nonvolatile storage as possible. Caches can
be installed to improve performance where a large disparity in access time or
transfer rate exists between two components.
[Figure 1.7: how a modern computer system works. One or more CPUs execute the instruction-execution cycle, moving instructions and data through caches; threads of execution issue I/O requests; devices raise interrupts; and DMA transfers data directly between devices and memory.]
bus. The form of interrupt-driven I/O described in Section 1.2.1 is fine for
moving small amounts of data but can produce high overhead when used for
bulk data movement such as NVS I/O. To solve this problem, direct memory
access (DMA) is used. After setting up buffers, pointers, and counters for the
I/O device, the device controller transfers an entire block of data directly to
or from the device and main memory, with no intervention by the CPU. Only
one interrupt is generated per block, to tell the device driver that the operation
has completed, rather than the one interrupt per byte generated for low-speed
devices. While the device controller is performing these operations, the CPU is
available to accomplish other work.
Some high-end systems use a switch rather than a bus architecture. On these
systems, multiple components can talk to other components concurrently,
rather than competing for cycles on a shared bus. In this case, DMA is even
more effective. Figure 1.7 shows the interplay of all components of a computer
system.
sors as well. They may come in the form of device-specific processors, such as
disk, keyboard, and graphics controllers.
All of these special-purpose processors run a limited instruction set and
do not run processes. Sometimes, they are managed by the operating system,
in that the operating system sends them information about their next task and
monitors their status. For example, a disk-controller microprocessor receives
a sequence of requests from the main CPU core and implements its own disk
queue and scheduling algorithm. This arrangement relieves the main CPU of
the overhead of disk scheduling. PCs contain a microprocessor in the keyboard
to convert the keystrokes into codes to be sent to the CPU. In other systems or
circumstances, special-purpose processors are low-level components built into
the hardware. The operating system cannot communicate with these proces-
sors; they do their jobs autonomously. The use of special-purpose microproces-
sors is common and does not turn a single-processor system into a multiproces-
sor. If there is only one general-purpose CPU with a single processing core, then
the system is a single-processor system. According to this definition, however,
very few contemporary computer systems are single-processor systems.
In addition, one chip with multiple cores uses significantly less power than
multiple single-core chips, an important issue for mobile devices as well as
laptops.
In Figure 1.9, we show a dual-core design with two cores on the same pro-
cessor chip. In this design, each core has its own register set, as well as its own
local cache, often known as a level 1, or L1, cache. Notice, too, that a level 2 (L2)
cache is local to the chip but is shared by the two processing cores. Most archi-
tectures adopt this approach, combining local and shared caches, where local,
lower-level caches are generally smaller and faster than higher-level shared caches.
Figure 1.9 A dual-core design with two cores on the same chip.
Although virtually all systems are now multicore, we use the general term
CPU when referring to a single computational unit of a computer system and
core as well as multicore when specifically referring to one or more cores on
a CPU.
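
As a small, concrete illustration of this terminology, a user program on a POSIX system can ask how many cores (logical processors) are currently online. The sketch below assumes a Linux or macOS system, where the _SC_NPROCESSORS_ONLN name for sysconf() is available as a common extension.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Number of processing cores (logical processors) currently online. */
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    if (cores < 1) {
        perror("sysconf");
        return 1;
    }
    printf("this system has %ld core(s) online\n", cores);
    return 0;
}
```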
[Figure: NUMA multiprocessing architecture. CPU 0 through CPU 3 each have a local memory (memory0 through memory3), and the CPUs communicate over a shared interconnect.]
PC MOTHERBOARD
This board is a fully functioning computer, once its slots are populated.
It consists of a processor socket containing a CPU, DRAM sockets, PCIe bus
slots, and I/O connectors of various types. Even the lowest-cost general-
purpose CPU contains multiple cores. Some motherboards contain multiple
processor sockets. More advanced computers allow more than one system
board, creating NUMA systems.
[Figure 1.11: general structure of a clustered system. Several computers, each with a local interconnect, share a storage-area network.]
ating systems lack support for simultaneous data access by multiple hosts,
parallel clusters usually require the use of special versions of software and
special releases of applications. For example, Oracle Real Application Cluster
is a version of Oracle’s database that has been designed to run on a parallel
cluster. Each machine runs Oracle, and a layer of software tracks access to the
shared disk. Each machine has full access to all data in the database. To provide
this shared access, the system must also supply access control and locking to
ensure that no conflicting operations occur. This function, commonly known
as a distributed lock manager (DLM), is included in some cluster technology.
Cluster technology is changing rapidly. Some cluster products support
thousands of systems in a cluster, as well as clustered nodes that are separated
by miles. Many of these improvements are made possible by storage-area
networks (SANs), as described in Section 11.7.4, which allow many systems
to attach to a pool of storage. If the applications and their data are stored on
the SAN, then the cluster software can assign the application to run on any
host that is attached to the SAN. If the host fails, then any other host can take
over. In a database cluster, dozens of hosts can share the same database, greatly
increasing performance and reliability. Figure 1.11 depicts the general structure
of a clustered system.
HADOOP
1. A distributed file system that manages data and files across distributed com-
puting nodes.
2. The YARN (“Yet Another Resource Negotiator”) framework, which manages
resources within the cluster as well as scheduling tasks on nodes in the
cluster.
3. The MapReduce system, which allows parallel processing of data across
nodes in the cluster.
start executing that system. To accomplish this goal, the bootstrap program
must locate the operating-system kernel and load it into memory.
Once the kernel is loaded and executing, it can start providing services to
the system and its users. Some services are provided outside of the kernel by
system programs that are loaded into memory at boot time to become system
daemons, which run the entire time the kernel is running. On Linux, the first
system program is “systemd,” and it starts many other daemons. Once this
phase is complete, the system is fully booted, and the system waits for some
event to occur.
If there are no processes to execute, no I/O devices to service, and no users
to whom to respond, an operating system will sit quietly, waiting for something
to happen. Events are almost always signaled by the occurrence of an interrupt.
In Section 1.2.1 we described hardware interrupts. Another form of interrupt is
a trap (or an exception), which is a software-generated interrupt caused either
by an error (for example, division by zero or invalid memory access) or by
a specific request from a user program that an operating-system service be
performed by executing a special operation called a system call.
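
To make the notion of a system call concrete, here is a minimal sketch of a user program on a POSIX system requesting an operating-system service. The write() system call asks the kernel to transfer bytes to file descriptor 1 (standard output); the trap into the kernel occurs inside the C library's wrapper for the call.

```c
#include <string.h>    /* strlen() */
#include <unistd.h>    /* write(), STDOUT_FILENO */

int main(void) {
    const char *msg = "hello from a system call\n";

    /* write() traps into the kernel, which performs the I/O on the
       program's behalf and then returns a result to user mode. */
    ssize_t n = write(STDOUT_FILENO, msg, strlen(msg));

    return (n == (ssize_t)strlen(msg)) ? 0 : 1;
}
```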