Operating Systems
A program that acts as an intermediary between a user of a computer and the computer hardware.

Operating system goals:
Execute user programs and make solving user problems easier.
Make the computer system convenient to use.
Use the computer hardware in an efficient manner.

What Operating Systems Do
An operating system provides an environment within which other programs can do useful work.
Two viewpoints to understand the role of operating systems: the user view and the system view.

The bootstrap program is loaded at power-up or reboot.
Typically stored in ROM or EEPROM, known as firmware.
Initializes all aspects of the system.
Loads the operating-system kernel and starts execution.

Computer-System Operation
I/O devices and the CPU can execute concurrently.
Each device controller oversees a particular device type.
Each device controller has a local buffer.
The CPU moves data from/to main memory to/from the local buffers.
I/O is from the device to the local buffer of the controller.
The device controller informs the CPU that it has finished its operation by causing an interrupt.

Storage systems are organized in a hierarchy by:
Speed
Cost
Volatility
Caching – copying information into a faster storage system; main memory can be viewed as a cache for secondary storage.

A device driver for each device controller manages I/O:
Provides a uniform interface between the controller and the kernel.
The controller informs the driver via an interrupt.
The driver returns control to the OS.

Direct Memory Access Structure
Used for high-speed I/O devices able to transmit information at close to memory speeds.
The device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention.
Only one interrupt is generated per block, rather than one interrupt per byte.
The basic unit of computer storage is the bit. A bit can contain one of two values, 0 and 1. All other storage in a computer is based on collections of bits. Given enough bits, it is amazing how many things a computer can represent: numbers, letters, images, movies, sounds, documents, and programs, to name a few. A byte is eight bits, and on most computers it is the smallest convenient chunk of storage. For example, most computers don't have an instruction to move a bit but do have one to move a byte. A less common term is word, which is a given computer architecture's native unit of data. A word is made up of one or more bytes. For example, a computer that has 64-bit registers and 64-bit memory addressing typically has 64-bit (8-byte) words. A computer executes many operations in its native word size rather than a byte at a time.

Computer storage, along with most computer throughput, is generally measured and manipulated in bytes and collections of bytes.
A kilobyte, or KB, is 1,024 bytes.
A megabyte, or MB, is 1,024² bytes.
A gigabyte, or GB, is 1,024³ bytes.
A terabyte, or TB, is 1,024⁴ bytes.
A petabyte, or PB, is 1,024⁵ bytes.
Computer manufacturers often round off these numbers and say that a megabyte is 1 million bytes and a gigabyte is 1 billion bytes. Networking measurements are an exception to this general rule; they are given in bits (because networks move data a bit at a time).

I/O Structure
To start I/O, the device driver loads registers within the device controller.

A von Neumann architecture

Computer-System Architecture
Most systems use a single general-purpose processor.
Most systems have special-purpose processors as well.
Multiprocessor systems are growing in use and importance.
Also known as parallel systems or tightly coupled systems.
Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance
Two types:
1. Asymmetric Multiprocessing – each processor is assigned a specific task.
2. Symmetric Multiprocessing – each processor performs all tasks.

Symmetric Multiprocessing Architecture
Multi-chip and multicore.
Systems containing all chips.
Chassis containing multiple separate systems.

Operating-System Operations
A trap (or exception) is a software-generated interrupt caused either by a software error (e.g., division by zero) or by a request for an operating-system service.
Other process problems include an infinite loop, and processes modifying each other or the operating system.
Dual-mode operation allows the OS to distinguish between operating-system code and user-defined code:
User mode and kernel mode.
A mode bit is provided by the hardware.
Provides the ability to distinguish when the system is running user code or kernel code.
Some instructions are designated as privileged, and are only executable in kernel mode.
A system call changes the mode to kernel; the return from the call resets it to user.
Increasingly, CPUs support multi-mode operations.

Memory Management
All (or part) of the data that is needed by a program must be in memory.
Memory management determines what is in memory and when, optimizing CPU utilization and the computer's response to users.
Memory management activities:
Keeping track of which parts of memory are currently being used and by whom.
Deciding which processes (or parts of processes) and data to move into and out of memory.
Allocating and deallocating memory space as needed.

Storage Management
The OS provides a uniform, logical view of information storage.
It abstracts physical properties to a logical storage unit – the file.

File-System Management
Files are usually organized into directories.
Access control on most systems determines who can access what.
OS activities include:
Creating and deleting files and directories.
Primitives to manipulate files and directories.
Mapping files onto secondary storage.
Backing up files onto stable (non-volatile) storage media.

Mass-Storage Management
Usually, disks are used to store data that does not fit in main memory or data that must be kept for a "long" period.
Proper management is of central importance.
The entire speed of computer operation hinges on the disk subsystem and its algorithms.
OS activities:
Free-space management
Storage allocation
Disk scheduling
Some storage need not be fast:
Tertiary storage includes optical storage and magnetic tape.
Varies between WORM (write-once, read-many-times) and RW (read-write).

Caching
An important principle, performed at many levels in a computer (in hardware, the operating system, and software).
Information in use is copied temporarily from slower to faster storage.
Faster storage (the cache) is checked first to determine if the information is there:
If it is, the information is used directly from the cache (fast).
If not, the data is copied to the cache and used there.
The cache is smaller than the storage being cached.
Cache management is an important design problem: cache size and replacement policy.
A multiprocessor environment must provide cache coherency in hardware such that all CPUs have the most recent value in their cache.

Performance of Various Levels of Storage

Protection and Security
Protection – any mechanism for controlling access of processes or users to resources defined by the OS.
Security – defence of the system against internal and external attacks.
Huge range, including denial-of-service, worms, viruses, identity theft, theft of service.
Systems generally first distinguish among users, to determine who can do what:
User identities (user IDs, security IDs) include a name and an associated number, one per user.
The user ID is then associated with all files and processes of that user to determine access control.
A group identifier (group ID) allows a set of users to be defined and controls managed; it is then also associated with each process and file.
Privilege escalation allows a user to change to an effective ID with more rights.

Chapter 2: Operating-System Structures

Operating System Services
Operating systems provide an environment for execution of programs and services to programs and users.
One set of operating-system services provides functions that are helpful to the user:
User interface – Almost all operating systems have a user interface (UI).
Varies between Command-Line (CLI), Graphical User Interface (GUI), and Batch.
Voice commands.
Program execution – The system must be able to load a program into memory and to run that program, and to end execution, either normally or abnormally (indicating an error).
I/O operations – A running program may require I/O, which may involve a file or an I/O device.
File-system manipulation – The file system is of particular interest. Programs need to read and write files and directories; create and delete them; search them; list file information; and manage permissions.
Communications – Processes may exchange information, on the same computer or between computers over a network.
Note that the system-call names used throughout this text are generic.
System call sequence to copy the contents of one file to another file.
Types of system calls include process control and information maintenance.
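To make the file-copy sequence above concrete, here is a minimal C sketch that issues the underlying POSIX system calls directly (open(), read(), write(), close()); the file names and buffer size are arbitrary choices for the example, not part of the original notes.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Acquire the input file and create the output file. */
    int in  = open("input.txt", O_RDONLY);
    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    /* Loop: read a block from the source, write it to the destination. */
    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, (size_t)n);

    /* Close both files and terminate normally. */
    close(in);
    close(out);
    return 0;
}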
System programs include background services and application programs.
Application programs:
Don't pertain to the system.
Run by users.
Not typically considered part of the OS.
Launched by command line, mouse click, or finger poke.

Standard C Library Example
C program invoking the printf() library call, which calls the write() system call.
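A minimal sketch of the example just described: the program calls the standard library's printf(), and the C library in turn issues the write() system call on the program's behalf.

#include <stdio.h>

int main(void) {
    /* printf() is a library call; inside the C library it ends up
       invoking the write() system call on file descriptor 1
       (standard output) to hand the bytes to the kernel. */
    printf("Greetings\n");
    return 0;
}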
Program loading and execution – Absolute loaders, relocatable loaders, linkage editors, and overlay loaders; debugging systems for higher-level and machine language.
Communications – Provide the mechanism for creating virtual connections among processes, users, and computer systems.

Operating System Structure
A general-purpose OS is a very large program.
There are various ways to structure one:
o Simple structure – MS-DOS
o More complex – UNIX
o Layered – an abstraction
o Microkernel – Mach
o Modules – Solaris
o Hybrid – Mac OS X

Simple Structure – MS-DOS

Layered Approach
The operating system is divided into several layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses the functions (operations) and services of only lower-level layers.

Microkernel System Structure
Moves as much as possible from the kernel into user space.
Mach is an example of a microkernel.
Communication between user modules uses message passing.
Benefits:
Easier to extend a microkernel.
Easier to port the operating system to new architectures.
More reliable (less code is running in kernel mode).
More secure.

Hybrid Systems
Most modern operating systems are actually not one pure model.
Hybrid systems combine multiple approaches to address performance, security, and usability needs.
Linux and Solaris kernels are in kernel address space, so monolithic, plus modular for dynamic loading of functionality.
Windows is mostly monolithic, plus microkernel for different subsystem personalities.
Apple Mac OS X is hybrid and layered: the Aqua UI plus the Cocoa programming environment sit above a kernel consisting of the Mach microkernel and BSD Unix parts, plus the I/O Kit and dynamically loadable modules (called kernel extensions).

Mac OS X Structure
System Boot
When power is initialized on a system, execution starts at a fixed memory location.
Firmware ROM is used to hold the initial boot code.
The operating system must be made available to the hardware so the hardware can start it:
A small piece of code – the bootstrap loader, stored in ROM or EEPROM – locates the kernel, loads it into memory, and starts it.
Sometimes a two-step process is used, where a boot block at a fixed location is loaded by ROM code, and it in turn loads the bootstrap loader from disk.
A common bootstrap loader, GRUB, allows selection of the kernel from multiple disks, versions, and kernel options.
The kernel loads and the system is then running.

Chapter 3: Processes

Process Concept
Process – a program in execution; process execution must progress in sequential fashion.
A process has multiple parts.
Process states include:
ready: The process is waiting to be assigned to a processor.
terminated: The process has finished execution.

Process Control Block (PCB)
Information associated with each process (also called task control block):
Process state – running, waiting, etc.
Program counter – location of the instruction to execute next.
CPU registers – contents of all process-centric registers.
CPU scheduling information – priorities, scheduling queue pointers.
Memory-management information – memory allocated to the process.
Accounting information – CPU used, clock time elapsed since start, time limits.
I/O status information – I/O devices allocated to the process, list of open files.

CPU Switch From Process to Process
Maximize CPU use; quickly switch processes onto the CPU for time sharing.

Schedulers
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU.
Sometimes the only scheduler in a system.
The short-term scheduler is invoked frequently (milliseconds), so it must be fast.
Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
The long-term scheduler is invoked infrequently (seconds, minutes), so it may be slow.
The long-term scheduler controls the degree of multiprogramming.

Process Creation
A Tree of Processes in Linux
Address space:
The child is a duplicate of the parent.
The child has a program loaded into it.
UNIX examples (see the sketch below):
The fork() system call creates a new process.
The exec() system call is used after a fork() to replace the process' memory space with a new program.
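A minimal sketch of the fork()/exec() pattern described above: the parent creates a child with fork(), the child replaces its memory image with a new program via exec (here /bin/ls, an arbitrary choice for the example), and the parent waits for the child to finish.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* create a new process */
    if (pid < 0) {               /* error occurred */
        fprintf(stderr, "fork failed\n");
        return 1;
    } else if (pid == 0) {       /* child process */
        execlp("/bin/ls", "ls", NULL);   /* replace memory image */
    } else {                     /* parent process */
        wait(NULL);              /* wait for the child to complete */
        printf("child complete\n");
    }
    return 0;
}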
Context Switch
When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process
via a context switch
The more complex the OS and the PCB, the longer the context switch.
The time is dependent on hardware support.

Process Termination
A process executes its last statement and then asks the operating system to delete it using the exit() system call:
Returns status data from the child to the parent (via wait()).
The process' resources are deallocated by the operating system.
A parent may terminate the execution of child processes using the abort() system call. Some reasons for doing so:
The child has exceeded its allocated resources.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its parent terminates (cascading termination).

Interprocess Communication
Processes within a system may be independent or cooperating.
A cooperating process can affect or be affected by other processes, including sharing data.
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need interprocess communication (IPC).
Two models of IPC:
Shared memory
Message passing

Communications Models

Normally, the operating system prevents one process from accessing another process's memory.
Shared memory requires that two or more processes agree to remove this restriction.
Communication is under the control of the processes, not the operating system.
A major issue is to provide a mechanism that will allow the user processes to synchronize their actions when they access shared memory.

Producer-Consumer Problem
A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
An unbounded buffer places no practical limit on the size of the buffer.
A bounded buffer assumes that there is a fixed buffer size.

Shared data:

#define BUFFER_SIZE 10

typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

The solution is correct, but can only use BUFFER_SIZE-1 elements.

Bounded-Buffer – Producer

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded-Buffer – Consumer

item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

Message Passing Systems
A mechanism for processes to communicate and to synchronize their actions.
Message system – processes communicate with each other without resorting to shared variables.
The IPC facility provides two operations:
send(message)
receive(message)
The message size is either fixed or variable.
If processes P and Q wish to communicate, they need to:
Establish a communication link between them.
Exchange messages via send/receive.
Implementation issues:
Properties of the communication link
Operations
Benefits
Direct communication:
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q

Shared memory and message passing can also be used in client-server systems.
In addition, client-server systems use three other strategies.
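As an illustration of the send/receive model above, here is a minimal sketch that uses a POSIX pipe as the communication link between a parent and a child process: write() plays the role of send(message) and read() plays the role of receive(message). The message text is an arbitrary choice for the example.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1)             /* create the communication link */
        return 1;

    if (fork() == 0) {              /* child: the sender */
        close(fd[0]);
        const char *msg = "hello from the child";
        write(fd[1], msg, strlen(msg) + 1);   /* send(message) */
        close(fd[1]);
    } else {                        /* parent: the receiver */
        close(fd[1]);
        read(fd[0], buf, sizeof buf);         /* receive(message) */
        printf("received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
    }
    return 0;
}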
Chapter 4: Threads

Multicore Programming
Types of parallelism

Multithreading models include:
One-to-One
Many-to-Many Model
Two-level Model

Java Threads
Java threads are managed by the JVM.
Typically implemented using the threads model provided by the underlying OS.
Java threads may be created by:
o Extending the Thread class
o Implementing the Runnable interface

Java Multithreaded Program #1

Implicit Threading
Growing in popularity as the number of threads increases; program correctness is more difficult with explicit threads.
Creation and management of threads is done by compilers and run-time libraries rather than by programmers.
Approaches include (an OpenMP sketch follows this list):
Thread Pools
OpenMP
Grand Central Dispatch
Intel Threading Building Blocks
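As a small illustration of implicit threading with OpenMP (one of the approaches listed above), the following C sketch parallelizes a loop; the compiler and run-time create and manage the threads, and the programmer only marks the parallel region. Compile with an OpenMP-capable compiler (e.g., gcc -fopenmp).

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* The iterations of this loop are divided among a team of
       threads created by the OpenMP run-time. */
    #pragma omp parallel for
    for (int i = 0; i < 8; i++)
        printf("iteration %d handled by thread %d\n", i, omp_get_thread_num());
    return 0;
}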
Thread Pools
Advantages:
Threading Issues
Invoking thread cancellation requests cancellation, but the actual cancellation depends on the thread state.
If the thread has cancellation disabled, cancellation remains pending until the thread enables it.
The default type is deferred:
o Cancellation only occurs when the thread reaches a cancellation point, i.e. pthread_testcancel().
o Then the cleanup handler is invoked.
On Linux systems, thread cancellation is handled through signals.

Chapter 5: Process Synchronization

Background
Race Condition

Critical Section Problem
Consider a system of n processes {p0, p1, …, pn-1}.
Each process has a critical section segment of code:
o The process may be changing common variables, updating a table, writing a file, etc.
o When one process is in its critical section, no other process may be in its critical section.
The critical-section problem is to design a protocol to solve this.
Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then the remainder section.

Critical Section
General structure of process Pi:

do {
    entry section
        critical section
    exit section
        remainder section
} while (true);
test_and_set Instruction
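A sketch of the usual definition of test_and_set and of a simple lock built from it, assuming a shared boolean variable lock initialized to false; the instruction is executed atomically by the hardware.

boolean test_and_set(boolean *target) {
    boolean rv = *target;   /* remember the old value */
    *target = true;         /* set the target to true */
    return rv;              /* both steps happen as one atomic action */
}

Solution using test_and_set:

do {
    while (test_and_set(&lock))
        ; /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);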
Mutex Locks
Previous solutions are complicated and generally inaccessible to application programmers.
OS designers build software tools to solve the critical-section problem; the simplest is the mutex lock.
Protect a critical section by first calling acquire() on a lock and then calling release() on the lock.
o A Boolean variable indicates whether the lock is available or not.
Calls to acquire() and release() must be atomic.
o Usually implemented via hardware atomic instructions.
But this solution requires busy waiting.
o This lock is therefore called a spinlock.

acquire() and release()

acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

release() {
    available = true;
}

Semaphore
A synchronization tool that provides more sophisticated ways (than mutex locks) for processes to synchronize their activities.
Semaphore S – an integer variable.
Can only be accessed via two indivisible (atomic) operations:
o wait() and signal()

Definition of wait():

wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}

Definition of signal():

signal(S) {
    S++;
}

Counting semaphore – the integer value can range over an unrestricted domain.
Binary semaphore – the integer value can range only between 0 and 1.
o Same as a mutex lock.
Semaphores can solve various synchronization problems.
Consider P1 and P2 that require S1 to happen before S2.
Create a semaphore "synch" initialized to 0.
P1:
    S1;
    signal(synch);
P2:
    wait(synch);
    S2;

Semaphore Implementation with no Busy Waiting
Overcoming the busy waiting: each semaphore has an associated waiting queue.

typedef struct {
    int value;
    struct process *list;   /* waiting queue */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
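A minimal runnable counterpart of the S1-before-S2 example above, using POSIX semaphores and two pthreads; the thread bodies simply print, and the semaphore synch starts at 0 so the second thread blocks until the first signals.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t synch;                 /* initialized to 0 in main() */

static void *p1(void *arg) {
    printf("S1\n");                 /* the statement S1 */
    sem_post(&synch);               /* signal(synch) */
    return NULL;
}

static void *p2(void *arg) {
    sem_wait(&synch);               /* wait(synch) */
    printf("S2\n");                 /* S2 runs only after S1 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);         /* shared between threads, value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}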
Priority Inversion – a scheduling problem in which a lower-priority process holds a lock needed by a higher-priority process.
Solved via the priority-inheritance protocol.

Classical Problems of Synchronization
Classical problems are used to test newly proposed synchronization schemes.

Bounded-Buffer Problem – in the semaphore solution the producer and consumer each run a loop of the form do { ... } while (true); the consumer's loop begins with:

    wait(full);
    wait(mutex);
    ...

Readers-Writers Problem
Problem – allow multiple readers to read at the same time; only one single writer can access the shared data at the same time.
Several variations of how readers and writers are considered – all involve some form of priorities.
Shared Data

Writer (fragment):
    ...
    /* writing is performed */
    ...

Reader (fragment):
    ...
    if (read_count == 0)
    ...
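A sketch of the classic semaphore solution that the fragments above come from, assuming the usual shared variables: a semaphore rw_mutex (initialized to 1) that a writer holds exclusively, a semaphore mutex (initialized to 1) protecting read_count, and an integer read_count (initialized to 0).

/* Writer process */
do {
    wait(rw_mutex);
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
} while (true);

/* Reader process */
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);     /* first reader locks out writers */
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);   /* last reader lets writers in */
    signal(mutex);
} while (true);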
Dining-Philosophers Problem – in the semaphore solution, the structure of philosopher i begins:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
} while (true);

Monitor Solution to Dining Philosophers
Each philosopher invokes the pickup() and putdown() operations in the following sequence:

DiningPhilosophers.pickup(i);
    EAT
DiningPhilosophers.putdown(i);

monitor ResourceAllocator

void release() {
    busy = FALSE;
    x.signal();
}

initialization code() {
    busy = FALSE;
}

A process requests and releases the resource as follows:

R.acquire(t);
...
R.release();
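A sketch of the complete ResourceAllocator monitor, assuming the standard single-resource-allocation form: a boolean busy, a condition variable x, and an acquire() that uses a priority wait, where the argument t is the maximum time the process plans to use the resource.

monitor ResourceAllocator {
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);   /* priority wait: smallest time value is resumed first */
        busy = TRUE;
    }

    void release() {
        busy = FALSE;
        x.signal();
    }

    initialization code() {
        busy = FALSE;
    }
}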