OS-LecturePPT Unit 1 AY2021-22 TechngVrsn

 Operating Systems

References:

1) Operating System Concepts, Abraham Silberschatz, Peter B.
Galvin and Greg Gagne, 8th Edition, John Wiley, 2008.

NPTEL Video Lectures:

1) https://www.cse.iitb.ac.in/~mythili/os/

2) nptel.ac.in/courses/106106144/

3) https://nptel.ac.in/courses/106108101
Unit 1

CO1:
Acquire basic knowledge about different functions,
structures and design features of contemporary operating
systems, to aid in applying and solving complex
engineering problems.
 What is an Operating System?
 A program that acts as an intermediary between
user Programs and the computer hardware

 OS is a resource allocator
• Manages all resources: CPU, memory, and
I/O devices (disk, network card, mouse, keyboard etc.)
• Decides between conflicting requests for efficient and
fair resource use

 OS is a control program
•Controls execution of programs, to prevent errors and
improper use of the computer
Operating system goals:
• To provide Usable Computing Environment: The
fundamental goal of Computer System is to
execute user programs and make solving user
problems easier.
• Convenience, Abstraction of hardware
resources for user programs
• Efficient use of computer hardware
 Why bother with an OS?
• Application benefits
• programming simplicity
• see high-level abstractions (files) instead of low-level
hardware details (device registers)
• abstractions are reusable across many programs
• portability (across machine configurations or architectures)
• device independence: 3com card or Intel card?
• User benefits
• safety
• program “sees” its own virtual machine, thinks it “owns”
the computer
• OS protects programs from each other
• OS fairly multiplexes resources across programs
• efficiency (cost and speed)
• share one computer across many users
• concurrent execution of multiple programs
© 2012 Gribble, Lazowska, Levy, Zahorjan
 Computer System Structure
• Computer system can be divided into four components:
• Hardware – provides basic computing resources
• CPU, memory, I/O devices
• Operating system
• Controls and coordinates use of hardware among
various user application programs
• Application programs – solve the computing problems
of the users
• Word processors, compilers, web browsers,
database systems, video games
• Users
• People, machines, other computers
 Abstract View of Components of Computer
 Virtual Machine Abstraction

Application
(Virtual Machine Interface)
Operating System
(Physical Machine Interface)
Hardware

 Virtual is a word commonly used to mean something that


is created by software to 'appear' as something real.

 Virtualization is a process that allows for more efficient


utilization of physical computer hardware and is the
foundation of cloud computing.
 Operating System Functions and Services

1) User Interface
Command-Line (CLI), Graphical user interface (GUI)
2) Program execution (Process execution)
•Creating and deleting both user and system processes
•Suspending and resuming processes
•Providing mechanisms for process synchronization
•Providing mechanisms for process communication
3) I/O operations - A running program may require
I/O, which may involve a file or an I/O device

4) File-system manipulation (File Management):


•Creating and deleting files
• Creating and deleting directories to organize files
• Supporting primitives for manipulating files and
directories
• Mapping files onto mass storage
5) Communications:
•Processes may exchange information,
- on the same computer or
- between computers over a network
•Communications may be
- via shared memory or
- through message passing
6) Error detection – OS needs to be constantly aware of
possible errors
• May occur in the CPU and memory hardware, in I/O
devices, in user program
• For each type of error, OS should take the appropriate
action
• Debugging facilities
7) Resource allocation –
•Multiple processes running concurrently, resources
must be allocated to each of them
• Many types of resources - CPU cycles, main
memory, file storage, I/O devices.
7a) Memory management
• Keeping track of which parts of memory are
currently being used and which process is using
them
• Allocating and deallocating memory space as
needed
• Deciding which processes (or parts of processes)
and data to move into and out of memory
7b) Mass storage management (Disk storage management)
• Free-space management
• Storage allocation
• Disk scheduling
• Partitioning
• Protection
7c) I/O Management:
I/O subsystem responsible for
• buffering (storing data temporarily while it is being transferred),
• caching (storing parts of data in faster storage for
performance),
• spooling (the overlapping of output of one job with input of
other jobs)
• Drivers for specific hardware devices
8) Protection & Security
•Protection – any mechanism for controlling access of
processes or users to resources defined by the OS.
•Protection counters threats posed by users

•Security – defense of the system against external


attacks. Ex. attacks: denial-of-service, worms, viruses
• Security counters threats of interference or illegal
use posed by intruders.

9) Accounting - To keep track of which users use


how much and what kinds of computer resources
 A View of Operating System Services
 Classes of Operating Systems (Overview of
Operating Systems)
 Distributed Systems

• Collection of separate, possibly heterogeneous, systems


networked together
• Network is a communications path, TCP/IP most
common
• Local Area Network (LAN)
• Wide Area Network (WAN)
• Metropolitan Area Network (MAN)
• Personal Area Network (PAN)
Advantages of distributed systems:
• Resources Sharing
• Computation speed up – load sharing
• Reliability
• Communication – Message passing
 Types of Distributed Operating Systems

• Network Operating Systems

• Distributed Operating Systems


 Network-Operating Systems

• Users are aware of multiplicity of machines. Access


to resources of various machines is done explicitly by:

• Remote logging into the appropriate remote


machine (telnet, ssh)

• Remote Desktop (Microsoft Windows)

• Transferring data from remote machines to local


machines, via the File Transfer Protocol (FTP)
mechanism
 Distributed-Operating Systems

• Users not aware of multiplicity of machines

• Access to remote resources similar to access to


local resources
 Special-Purpose Systems

• Real-time embedded systems


• Multimedia systems
• Handheld systems
 Real Time Embedded System: Characteristics

- Embedded computers are found in car engines, robots,


microwave ovens, medical imaging systems, systems
controlling scientific experiments, industrial control
systems, weapon systems.
- OS analyses the sensor data and adjusts controls
- These Embedded Systems run Real Time OS
- OS provides limited features
- Little or no user interface
 Real-Time Operating Systems
• In real-time applications, users need computer to
perform some actions in a timely manner
• To control activities in an external system

• Well-defined fixed-time constraints.

• Real-Time systems may be either hard or soft real-


time.
• A hard real-time system meets response requirements
(time requirement) under all conditions

• A soft real-time system makes best effort to meet


response requirement (time).

- Cannot guarantee that it will be able to meet


time requirement

- Limited utility in industrial control and robotics

- Useful in multimedia applications


 Multimedia Systems: What is Multimedia?

• Multimedia data includes


- audio and video clips (i.e., MP3 and MPEG files)
- live webcasts

• Multimedia data may be delivered to

- desktop PC’s
- handheld devices (PDAs, smart phones)
 Media Delivery

• Multimedia data is stored in the file system like other


ordinary data
• However, multimedia data must be accessed with
specific timing requirements
• For example, Multimedia video data must be delivered
at a rate which guarantees 24-30 frames/second
• Continuous-media data is data with specific rate
requirements
 Streaming

• Streaming is delivering a multimedia file from a server


to a client; typically the delivery occurs over a network
connection.
• There are two different types of streaming:
1. Progressive download - the client begins playback of
the multimedia file as it is delivered. The file is
ultimately stored on the client computer.
2. Real-time streaming - the multimedia file is delivered
to - but not stored on - the client’s computer.
 Real-time Streaming

• There are two types of real-time streaming:

1. Live streaming - used to deliver a live event while


it is occurring
2. On-demand streaming - used to deliver media
streams such as movies, archived lectures, etc.
The events are not delivered in real-time.
 Multimedia Operating Systems Issues

• The operating system must guarantee the specific


data rate and timing requirements of continuous
media

• Such requirements are known as Quality-of-Service


(QoS) guarantees
 Requirement of Multimedia Operating Systems

• There are three levels of QoS

1. Best-effort service - the system makes a best effort


with no QoS guarantees

2. Soft QoS - allows different traffic streams to be


prioritized, however no QoS guarantees are made

3. Hard QoS - the QoS requirements are guaranteed


 Parameters Defining QoS

• Throughput - the total amount of work completed


during a specific time interval

• Delay - the elapsed time from when a request is first


submitted to when the desired result is produced

• Jitter - the delays that occur during playback of a


stream

• Reliability - how errors are handled during


transmission and processing of continuous media
 Multimedia OS: CPU Scheduling

• Multimedia systems require hard realtime scheduling to


ensure critical tasks will be serviced within timing
deadlines

• Most hard realtime CPU scheduling algorithms assign


realtime processes static priorities that do not change
over time
 Handheld Systems: Characteristics

• Personal Digital Assistants (PDAs)


• Cellular telephones
• Issues:
• Limited memory, typically between 1 MB and 1 GB
• OS should manage memory efficiently
• Slow processors. Faster processor requires larger
battery and requires frequent recharging and takes
more space.
• OS should not tax the processor
• Small display screens: Limits output options. Use web
clipping for web page display.
 System Calls

• System calls provide an interface to the services


made available by an operating system.

• Application developers design programs according


to an application programming interface (API). The
API specifies a set of functions that are available to
an application programmer.
• Three most common APIs are Win32 API for Windows,
POSIX API for POSIX-based systems (including virtually
all versions of UNIX, Linux, and Mac OS X), and Java API
for the Java virtual machine (JVM).

Advantages of using API


- Portability
- Actual system call names are OS-dependent and difficult
to work with, whereas function names in an API are generic.

• Ex: Win32 function CreateProcess() actually calls


NTCreateProcess() system call in the Windows kernel.
System call interface:

The system call interface intercepts function calls in the


API and invokes necessary system calls within the OS.

Typically, a number associated with each system call

System-call interface maintains a table indexed


according to these numbers

The system call interface invokes the intended system


call in OS kernel and returns status of the system call
and any return values
API – System Call – OS Relationship
 System Call Parameter Passing

• Three general methods used to pass parameters to the OS


1) Pass the parameters in registers
• Simplest method
• In some cases there are more parameters than registers,
so registers may not be sufficient
2) Parameters stored in a block, or table, in memory, and
address of block passed as a parameter in a register
3) Parameters placed, or pushed, onto the stack by the
program and popped off the stack by the operating
system
•Block and stack methods do not limit the number or
length of parameters being passed
 Parameter Passing via Table
 Types of System Calls
- Process control
• create process, terminate process
• end, abort
• load, execute
• get process attributes, set process attributes
• wait for time
• wait event, signal event
• allocate and free memory
- File management
• create file, delete file
• open, close file
• read, write, reposition
• get and set file attributes
- Device management
• request device, release device
• read, write, reposition
• get device attributes, set device attributes
• logically attach or detach devices
- Information maintenance
• get time or date, set time or date
• get system data, set system data
• get and set process, file, or device attributes

- Communications
• create, delete communication connection
• send, receive messages
• transfer status information
• attach and detach remote devices
- Protection
• Control access to resources
• Get and set permissions
• Allow and deny user access
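The categories above map onto concrete calls in any OS. As an illustrative sketch (not part of the slides), Python's os and subprocess modules wrap the underlying UNIX system calls:

```python
import os
import subprocess

# Process control: subprocess.run wraps fork()/execve()/wait() on UNIX.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)

# Information maintenance: os.getpid wraps the getpid() system call.
pid = os.getpid()

# File management: open()/write()/read()/unlink() system calls underneath.
with open("demo.txt", "w") as f:
    f.write("data")
with open("demo.txt") as f:
    contents = f.read()
os.remove("demo.txt")
```

Each high-level call eventually crosses the system-call interface described on the previous slides.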
 Examples of Windows and Unix System Calls
 System Programs
• Provide a convenient environment for program
development and execution
• File management - Create, delete, copy, rename,
print, dump, list, and generally manipulate files and
directories
• Status information
• Some ask the system for info - date, time, amount
of available memory, disk space, number of users
• Provide detailed performance, logging, and
debugging information
• Some systems support a registry, which is used
to store and retrieve configuration information
• File modification
• Text editors to create and modify files
• Programming-language support - Compilers,
assemblers, debuggers and interpreters
• Program loading and execution- Absolute loaders,
relocatable loaders, and overlay-loaders
• Communications - Provide the mechanism for
creating virtual connections among processes,
users, and computer systems
• Allow users to send messages to one another’s
screens, browse web pages, send electronic-mail
messages, log in remotely, transfer files from one
machine to another
 Operating System Design and Implementation

• Start the design by defining goals


• Affected by choice of hardware
• User goals and System goals
• User goals – operating system should be
convenient to use, easy to learn, reliable, safe, and
fast
• System goals – operating system should be easy
to design, implement, and maintain, as well as
flexible, reliable, error-free, and efficient
 Operating System Structures

• Simple structure – MS-DOS

• Traditional UNIX structure

• Layered structure

• Microkernel (Mach) structure

• Modular structure
1. Simple Structure -- MS-DOS

• Not divided into modules

• MS-DOS does not have a well-defined structure

• Its interfaces and levels of functionality are not


well separated

• Vulnerable to malicious programs

• MS-DOS was limited by hardware of its era


(Intel 8088).
2. Traditional UNIX System Structure
3. Layered Approach

Characteristics:
•OS is divided into layers
•The bottom layer (layer 0), is the
hardware; the highest (layer N) is
the user interface.
• With modularity, layers are
selected such that each uses
functions (operations) and services
of only lower-level layers
•Layer M can be invoked by layer
M+1
•Layer M can invoke layer M-1
Advantages:
-Testing & debugging is easy
-Design and implementation are simplified

Difficulties (Disadvantages):
-Designing layers and identifying objects in those layers

-Tend to be less efficient. Function call should pass


through layers.
4. Microkernel System Structure

• Moves as many non-essential components as possible from
the kernel into user space
• The result is a smaller kernel
• Mach example of microkernel (Developed at CMU)

• Communication is provided using message passing


between client program and various services running
in user space.
• Benefits:
• Easier to extend a microkernel
• Easier to port the operating system to new
architectures
• More reliable (less code is running in kernel mode)
• More secure
• Detriments:
• Due to overhead of communication facility,
performance decreases
• Provide minimal process and memory management
 Microkernel System Structure
5. Modular structure
Characteristics:
•OS design applies object-oriented programming concepts to
create a modular kernel
•Kernel has a set of core components

•Kernel core component links additional services during


boot time or run time. Such a strategy uses dynamically
loadable modules.

Ex: Modern implementations of Unix, such as Solaris,


Linux, and Mac OS X
 Example: Solaris Modular Structure:
Organisation showing core kernel with 7 types of
loadable kernel modules
6. Hybrid Systems

• Most modern operating systems are not one pure model

• Hybrid combines multiple approaches to address


performance, security, usability needs

• Linux and Solaris kernels in kernel address space, so


monolithic, plus modular for dynamic loading of
functionality

• Windows mostly monolithic, plus microkernel for


different subsystem personalities
Unit I
Process Management
 Process Concept
• Process – a program in execution
• Multiple parts
• The program code, also called text section
• Current activity including program counter, processor
registers
• Stack containing temporary data
• Function parameters, return addresses, local
variables
• Data section containing global variables
• Heap containing memory dynamically allocated
during run time
 Process Concept (Cont.)

• Program is passive entity stored on disk (executable


file); process is active
• Program becomes process when an executable file
is loaded into memory

• Execution of program started via GUI mouse clicks,


command line entry of its name, etc.

• One program can be several processes


• Consider multiple users executing the same
program
 Process in Memory
 Memory Layout of a C Program
 Process State

• As a process executes, it changes state


• New: The process is being created
• Running: Instructions are being executed
• Waiting: The process is waiting for some event to
occur
• Ready: The process is waiting to be assigned to a
processor
• Terminated: The process has finished execution
 Diagram of Process State
 Process Control Block (PCB)
Information associated with each process(also called
task control block)
• Process state – running, waiting, etc.
• Program counter – location of instruction
to next execute
• CPU registers – contents of all process-
centric registers
• CPU scheduling information- priorities,
scheduling queue pointers
• Memory-management information –
memory allocated to the process
• Accounting information – CPU used, clock
time elapsed since start, time limits
• I/O status information – I/O devices
allocated to process, list of open files
 Process Scheduling

• Process scheduler selects among available processes


for next execution on CPU
• Goal -- Maximize CPU use, quickly switch processes
onto CPU
• Maintains scheduling queues of processes
• Ready queue – set of all processes residing in main
memory, ready and waiting to execute
• Wait queues – set of processes waiting for an event
(i.e., I/O)
• Processes migrate among the various queues
 Ready and Wait Queues
 Representation of Process Scheduling
(Queueing diagram )
 CPU Switch From Process to Process
 Context Switch

• When CPU switches to another process, the system must

Step 1): Save the state of process currently under execution


into its PCB

Step 2): Load saved state from the scheduled process PCB
The above two steps constitute context switching

• Context-switch time is pure overhead; the system does no


useful work while switching
CPU Scheduling
 Schedulers
• Short-term scheduler (or CPU scheduler) – selects which
process should be executed next and allocates CPU
• Short-term scheduler is invoked frequently (milliseconds)
⇒ must be fast
• Long-term scheduler (or job scheduler) – selects which
processes should be brought into the ready queue
• Long-term scheduler is invoked infrequently (seconds,
minutes) ⇒ may be slow
• The long-term scheduler controls the degree of
multiprogramming
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
• CPU-bound process – spends more time doing computations;
few very long CPU bursts
• Long-term scheduler strives for good process mix
 Addition of Medium Term Scheduling
 Medium-term scheduler can be added if degree of
multiple programming needs to be decreased
 Remove process from memory, store on disk, bring
back in from disk to continue execution: swapping
 CPU Scheduler

• The CPU scheduler selects from among the processes


in ready queue, and allocates a CPU to one of them
• Queue may be ordered in various ways

• CPU scheduling decisions may take place when a


process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4.Terminates
 Preemptive and Nonpreemptive Scheduling
• When scheduling takes place only under circumstances
1 and 4, the scheduling scheme is nonpreemptive.

• Otherwise, it is preemptive.

• Under Nonpreemptive scheduling, once the CPU has


been allocated to a process, the process keeps the
CPU until it releases it either by terminating or by
requesting I/O.

What is preemptive scheduling ?

• Virtually all modern operating systems including


Windows, MacOS, Linux, and UNIX use preemptive
scheduling algorithms.
 Dispatcher

• Dispatcher module gives control of


the CPU to the process selected by
the CPU scheduler; this involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in
the user program to restart that
program
• Dispatch latency – time it takes for
the dispatcher to stop one process
and start another running
 Scheduling Criteria

• CPU utilization – keep the CPU as busy as possible

• Throughput – # of processes that complete their


execution per time unit
• Turnaround time – amount of time to execute a
particular process (TAT= Wt+ CPU Burst Time+ I/O time)
• Waiting time – amount of time a process has been
waiting in the ready queue
• Response time – amount of time it takes from when
a request was submitted until the first response is
produced.
 Scheduling Algorithm Optimization Criteria

• Maximise CPU utilization

• Maximise throughput

• Minimise turnaround time

• Minimise waiting time

• Minimise response time


CPU Scheduling Algorithms (Short term scheduling
algorithms)

• FCFS Scheduling algorithm


• Shortest job first (SJF) algorithm
• Priority scheduling algorithm
• Round robin scheduling algorithm
• Multilevel Queue scheduling algorithm
• Multilevel feedback queue scheduling algorithm
 First- Come, First-Served (FCFS) Scheduling
(Non-preemptive scheduling)
Process CPU Burst Time (ms) Arrival Time
P1 24 0
P2 3 0
P3 3 0
• Suppose that the processes arrive in the order: P1 , P2 , P3

The Gantt Chart for the schedule is:

P1 P2 P3
0 24 27 30

Waiting times: P1 = 0; P2 = 24; P3 = 27

Average waiting time: (0 + 24 + 27)/3 = 17


Process  CPU Burst time (ms)  Arrival time (AT)  Completion time (CT)  TAT = CT-AT  WT = TAT-Burst
P1       24                   0                  24                    24           0
P2       3                    0                  27                    27           24
P3       3                    0                  30                    30           27

Average waiting time (Avg WT)= (0+24+27)/3 = 17 ms

Average turn around time (Avg TAT)= (24+27+30)/3 = 27 ms
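The FCFS computation above can be sketched in a few lines of Python (an illustration, not from the slides); waiting time falls out as TAT minus burst:

```python
def fcfs(procs):
    """procs: list of (name, burst, arrival), served in the given order.
    Returns {name: (completion, turnaround, waiting)}."""
    time, out = 0, {}
    for name, burst, arrival in procs:
        time += burst                         # completion time (CT)
        tat = time - arrival                  # TAT = CT - AT
        out[name] = (time, tat, tat - burst)  # WT = TAT - burst
    return out

stats = fcfs([("P1", 24, 0), ("P2", 3, 0), ("P3", 3, 0)])
avg_wt = sum(w for _, _, w in stats.values()) / len(stats)  # 17.0 ms
```

Feeding the processes in the order P2, P3, P1 instead reproduces the convoy-effect comparison on the next slide.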
Convoy effect - short processes behind long process leads to
convoy effect.
Consider one CPU-bound and many I/O-bound processes
(Ex: 24, 3, 3)
Suppose that the processes arrive in the order: P2 , P3 , P1

Process  CPU Burst Time (ms)  Arrival Time
P2       3                    0
P3       3                    0
P1       24                   0

• The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

Waiting time for P1 = 6; P2 = 0; P3 = 3


Average waiting time: (6 + 0 + 3)/3 = 3

Much better than previous case (24, 3, 3). Why ?


 Shortest-Job-First (SJF) Scheduling
(Non-preemptive or preemptive scheduling)

Preemptive version is called as shortest-remaining-time-


first

• Associate with each process the length of its next


CPU burst

• Use these lengths to schedule the process with the


shortest CPU burst

• SJF is optimal – gives minimum average waiting time


for a given set of processes

• The difficulty is, knowing the length of the next CPU


burst
 Example of SJF (Non-preemptive)
Process  Burst Time (ms)  Arrival Time
P1       6                0
P2       8                0
P3       7                0
P4       3                0

• SJF scheduling Gantt Chart

P4 P1 P3 P2
0 3 9 16 24

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
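For processes that are all available at time 0, non-preemptive SJF is just FCFS over the burst-sorted order; a minimal Python sketch (illustrative, not from the slides):

```python
def sjf(procs):
    """Non-preemptive SJF, all processes available at time 0.
    procs: list of (name, burst). Returns {name: waiting_time}."""
    time, wait = 0, {}
    for name, burst in sorted(procs, key=lambda p: p[1]):  # shortest first
        wait[name] = time       # waits for all shorter jobs to finish
        time += burst
    return wait

w = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
avg = sum(w.values()) / len(w)  # (3 + 16 + 9 + 0) / 4 = 7.0
```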


 Determining Length of Next CPU Burst

• Can only estimate the length – should be similar to the


previous one
• Then pick process with shortest predicted next CPU burst
• Can be done by using the length of previous CPU bursts,
using exponential averaging

1. tₙ = actual length of nth CPU burst
2. τₙ₊₁ = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τₙ₊₁ = α·tₙ + (1 − α)·τₙ

• Commonly, α set to ½
 Prediction of the Length of the Next CPU Burst
 Examples of Exponential Averaging
• α = 0
• τₙ₊₁ = τₙ
• Recent history does not count
• α = 1
• τₙ₊₁ = tₙ
• Only the actual last CPU burst counts
• If we expand the formula, we get:
τₙ₊₁ = α·tₙ + (1 − α)·α·tₙ₋₁ + … + (1 − α)ʲ·α·tₙ₋ⱼ + … + (1 − α)ⁿ⁺¹·τ₀

• Since both α and (1 − α) are less than or equal to 1, each
successive term has less weight than its predecessor
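The recurrence τₙ₊₁ = α·tₙ + (1 − α)·τₙ is easy to compute directly. A small sketch; the values τ₀ = 10 and the observed bursts are illustrative assumptions, not from the slides:

```python
def predict(alpha, tau0, bursts):
    """Exponential averaging: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n.
    Returns the sequence of predictions, starting with the initial guess."""
    tau, preds = tau0, [tau0]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
        preds.append(tau)
    return preds

# alpha = 1/2, initial guess tau0 = 10, observed bursts 6 then 4:
preds = predict(0.5, 10, [6, 4])  # [10, 8.0, 6.0]
```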
 Example of Shortest-remaining-time-first
(SJF_Preemptive scheduling)

• Now we add the concepts of varying arrival times and preemption to


the analysis
Process  Arrival Time  Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3
0 1 5 10 17 26

Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 =
6.5 ms
Process  CPU Burst time (ms)  Arrival time (AT)  Completion time (CT)  TAT = CT-AT  WT = TAT-Burst
P1       8                    0                  17                    17           9
P2       4                    1                  5                     4            0
P3       9                    2                  26                    24           15
P4       5                    3                  10                    7            2

Average waiting time (Avg WT)= (9+0+15+2)/4 = 6.5 ms

Average turn around time (Avg TAT)= (17+4+24+7)/4 = 13 ms
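The preemptive schedule above can be reproduced by a tick-by-tick simulation that always runs the ready process with the least remaining time (an illustrative sketch, not from the slides):

```python
def srtf(procs):
    """Shortest-remaining-time-first, simulated in 1 ms ticks.
    procs: list of (name, burst, arrival). Returns {name: (CT, TAT, WT)}."""
    burst = {n: b for n, b, a in procs}
    arrival = {n: a for n, b, a in procs}
    remaining = dict(burst)
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:               # CPU idle until the next arrival
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # least remaining time
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:       # finished: record CT, TAT, WT
            del remaining[n]
            tat = time - arrival[n]
            done[n] = (time, tat, tat - burst[n])
    return done

stats = srtf([("P1", 8, 0), ("P2", 4, 1), ("P3", 9, 2), ("P4", 5, 3)])
avg_wt = sum(w for _, _, w in stats.values()) / len(stats)  # 6.5 ms
```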
 Priority Scheduling (Non-preemptive or preemptive)

• A priority number (integer) is associated with each process

• The CPU is allocated to the process with the highest priority


(smallest integer ⇒ highest priority)
• Preemptive
• Nonpreemptive

• Problem: Starvation – low-priority processes may never
execute

• Solution: Aging – as time progresses, increase the priority
of the process
 Example of Priority Scheduling (Non-Preemptive)

Process  Burst Time (ms)  Priority  Arrival Time


P1 10 3 0
P2 1 1 0
P3 2 4 0
P4 1 5 0
P5 5 2 0

• Priority scheduling Gantt Chart:

P2  P5  P1  P3  P4
0  1    6      16   18  19

Average waiting time = 8.2 ms


Process  CPU Burst time (ms)  Priority  AT  CT  TAT = CT-AT  WT = TAT-Burst
P1       10                   3         0   16  16           6
P2       1                    1         0   1   1            0
P3       2                    4         0   18  18           16
P4       1                    5         0   19  19           18
P5       5                    2         0   6   6            1

Average waiting time (Avg WT)= (6+0+16+18+1)/5=8.2

Average turn around time (Avg TAT)= (16+1+18+19+6)/5 = 12 ms
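With all arrivals at 0, non-preemptive priority scheduling simply serves processes in priority order; a short Python sketch (illustrative, not from the slides):

```python
def priority_np(procs):
    """Non-preemptive priority scheduling, all processes arrive at 0.
    procs: list of (name, burst, priority); smaller number = higher priority.
    Returns {name: waiting_time}."""
    time, wait = 0, {}
    for name, burst, prio in sorted(procs, key=lambda p: p[2]):
        wait[name] = time       # waits for all higher-priority jobs
        time += burst
    return wait

w = priority_np([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                 ("P4", 1, 5), ("P5", 5, 2)])
avg = sum(w.values()) / len(w)  # (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms
```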
 Round Robin (RR) (Preemptive)

• Each process gets a small unit of CPU time (time


quantum q), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and added
to the end of the ready queue.
• Timer interrupts after every quantum to schedule next
process
• Performance
• If q is large ⇒ it becomes FIFO (FCFS)
• If q is small ⇒ context-switch overhead becomes too high
 Example of RR with Time Quantum = 4

Process Burst Time


P1 24
P2 3
P3 3
The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

Typically, higher average turnaround than SJF, but better response


• q should be large compared to context switch time
• q usually 10ms to 100ms, context switch < 10 usec
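The RR schedule above can be reproduced with a simple ready queue; a sketch in Python (illustrative, not from the slides):

```python
from collections import deque

def round_robin(procs, q):
    """Round robin with time quantum q; all processes arrive at 0.
    procs: list of (name, burst). Returns {name: completion_time}."""
    queue = deque(procs)
    time, done = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)            # run for at most one quantum
        time += run
        if rem > run:
            queue.append((name, rem - run))  # preempted: back of the queue
        else:
            done[name] = time
    return done

ct = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4)
# P2 completes at 7, P3 at 10, P1 at 30, matching the Gantt chart
```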
 Time Quantum and Context Switch Time
 Multilevel Queue
• Ready queue is partitioned into separate queues, eg:
• foreground (interactive)
• background (batch)
• Processes do not move between queues
• Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
• Scheduling must be done between the queues:
 Priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
Or
 Time slice – each queue gets a certain amount of CPU
time which it can schedule amongst its processes; i.e.,
80% to foreground in RR
• 20% to background in FCFS
 Multilevel Queue
• With priority scheduling, have separate queues for each
priority.
• Schedule the process in the highest-priority queue!
 Multilevel Queue Scheduling
 Multilevel Feedback Queue

• A process can move between the various queues


• Example with Three Queues:
• Q0 – RR with time quantum 8
milliseconds
• Q1 – RR time quantum 16
milliseconds
• Q2 – FCFS

• Scheduling
• A new job enters queue Q0 which is
served FCFS
• When it gains CPU, job receives 8
milliseconds
• If it does not finish in 8
milliseconds, job is moved to
queue Q1
• At Q1 job is again served FCFS and
receives 16 additional milliseconds
• If it still does not complete, it is
preempted and moved to queue
Q2
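The three-queue example can be simulated with demotion on quantum expiry; a sketch under the stated quanta, assuming all jobs arrive at time 0 and a running job is not preempted mid-quantum (illustrative, not from the slides):

```python
from collections import deque

def mlfq(procs, quanta=(8, 16)):
    """Three-level feedback queue: Q0 (RR q=8), Q1 (RR q=16), Q2 (FCFS).
    procs: list of (name, burst), all arriving at 0.
    Returns {name: completion_time}."""
    queues = [deque(procs), deque(), deque()]
    time, done = 0, {}
    while any(queues):
        level = next(i for i, qu in enumerate(queues) if qu)  # highest nonempty
        name, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)  # Q2 runs FCFS
        time += run
        if rem > run:
            queues[level + 1].append((name, rem - run))  # demote on expiry
        else:
            done[name] = time
    return done

# A (5 ms) finishes in Q0; B (30 ms) uses 8 ms in Q0, 16 in Q1, the rest in Q2
done = mlfq([("A", 5), ("B", 30)])
```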
 Threads

• So far, process has a single thread of execution


• Consider having multiple program counters per
process
• Multiple locations can execute at once
• Multiple threads of control
• Must then have storage for thread details, multiple
program counters in PCB
 Motivation
• Most modern applications are multithreaded
• Multiple tasks within the application can be implemented by
separate threads
• Update display
• Fetch data
• Spell checking
• Answer a network request
• Process creation is heavy-weight while thread creation is
light-weight
• Can simplify code, increase efficiency
• Kernels are generally multithreaded
 Single and Multithreaded Processes
 Benefits

• Responsiveness – may allow continued execution if part


of process is blocked, especially important for user
interfaces

• Resource Sharing – threads share resources of process,


easier than shared memory or message passing

• Economy – cheaper than process creation, thread


switching lower overhead than context switching

• Scalability – process can take advantage of multicore


architectures
 User Threads and Kernel Threads
• User threads - management done by user-level threads
library
• Three primary thread libraries:
• POSIX Pthreads
• Windows threads
• Java threads
• Kernel threads - Supported by the Kernel
• Examples – virtually all general-purpose operating systems,
including:
• Windows
• Linux
• Mac OS X
• iOS
• Android
 User and Kernel Threads
 Multithreading Models

• Many-to-One
• One-to-One
• Many-to-Many
 Many-to-One
• Many user-level threads mapped to single kernel thread

• Multiple threads may not run in parallel on multicore system


because only one may be in kernel at a time

• Few systems currently use this model

• Examples:

• Solaris Green Threads


• GNU Portable Threads
 One-to-One
• Each user-level thread maps to kernel thread
• Creating a user-level thread creates a kernel thread
• More concurrency than many-to-one

• Examples
• Windows
• Linux
 Many-to-Many Model
• Allows many user level threads to be mapped to many
kernel threads
• Allows the operating system to create a sufficient number of
kernel threads

• Otherwise not very common


 Two-level Model

• Similar to M:M, except that it allows a user thread to be


bound to kernel thread
 Thread Libraries
• A thread library provides the programmer with an API for
creating and managing threads
• Two primary ways of implementing:
• Library entirely in user space
• Kernel-level library supported by the OS
 Pthreads
• May be provided as either a user-level or a kernel-level library
• A POSIX standard (IEEE 1003.1c) API for thread creation
and synchronization
• Specification, not implementation
• The API specifies the behavior of the thread library; the
implementation is up to the developers of the library
• Common in UNIX operating systems (Linux & Mac OS X)
 Java Threads
• Java threads are managed by the JVM
• Java threads may be created by:
• Extending the Thread class
• Implementing the Runnable interface
• Standard practice is to implement the Runnable interface
 Thread Scheduling
• When threads are supported, it is threads that are scheduled,
not processes
• A thread is the basic unit of CPU utilization
 Process-contention scope (PCS): scheduling competition is
within the process
• Typically done via a priority set by the programmer
 System-contention scope (SCS): a kernel thread is scheduled
onto an available CPU in competition with all threads in the
system
 Operating System Examples
• Linux scheduling
• Windows scheduling
 Linux Scheduling
• Preemptive, priority based
• Two priority ranges:
• Real-time tasks range from 0 to 99
• Other tasks range from 100 to 140
• Higher priority gets a larger time quantum
• Two priority arrays (active, expired)
• Tasks indexed by priority
• When the active array is empty, the arrays are exchanged
 Linux Scheduling (Cont.)
• Real-time scheduling according to POSIX.1b
• Real-time tasks have static priorities
• All other tasks have dynamic priorities based on the nice
value, adjusted by up to ±5
• The interactivity of a task determines the adjustment
• More interactive -> larger negative adjustment (higher priority)
• Priority is recalculated when a task expires
• This exchanging of arrays implements the adjusted priorities
 Priorities and Time-slice length
 List of Tasks Indexed According to Priorities
 Windows XP
 Windows Scheduling
• Windows uses priority-based preemptive scheduling
• The dispatcher is the scheduler
• A thread runs until it (1) blocks, (2) uses its time slice, or (3)
is preempted by a higher-priority thread
• Real-time threads can preempt non-real-time threads
• 32-level priority scheme
• Variable class is 1-15, real-time class is 16-31
• Priority 0 is the memory-management thread
• Queue for each priority
• If no runnable thread exists, the idle thread runs
 Windows Priority Classes
• The Win32 API identifies several priority classes to which a
process can belong:
• REALTIME_PRIORITY_CLASS
• HIGH_PRIORITY_CLASS
• ABOVE_NORMAL_PRIORITY_CLASS
• NORMAL_PRIORITY_CLASS
• BELOW_NORMAL_PRIORITY_CLASS
• IDLE_PRIORITY_CLASS
• All are variable except REALTIME
• A thread within a given priority class also has a relative priority:
• TIME_CRITICAL
• HIGHEST
• ABOVE_NORMAL
• NORMAL
• BELOW_NORMAL
• LOWEST
• IDLE
• Priority class and relative priority combine to give a
numeric priority
• Base priority is NORMAL within the class
• If the quantum expires, priority is lowered, but never below
the base
• If a wait occurs, priority is boosted depending on what was
waited for
• The foreground window is given a 3x priority boost
 Windows XP Priorities
 Windows XP: Kernel — Scheduling
• The dispatcher uses a 32-level priority scheme to
determine the order of thread execution.
• Priorities are divided into two classes:
• The real-time class contains threads with
priorities ranging from 16 to 31
• The variable class contains threads having
priorities from 0 to 15
 Windows XP: Kernel — Scheduling (Cont.)
• Scheduling can occur when a thread enters the ready
or wait state, when a thread terminates, or when an
application changes a thread’s priority
• Real-time threads are given preferential access to
the CPU, but XP does not guarantee that a real-time
thread will start to execute within any particular time
limit
• This is known as soft real-time
 Linux: Design Principles
• Linux is a multiuser, multitasking system with a full set of
UNIX-compatible tools
• Its file system adheres to traditional UNIX semantics, and it
fully implements the standard UNIX networking model
• Main design goals are speed, efficiency, and standardization
• Linux is designed to be compliant with the relevant POSIX
documents
• The Linux programming interface adheres to SVR4 UNIX
semantics, rather than to BSD behavior
 Components of a Linux System
 Windows XP: Design Principles
• Extensibility — layered architecture
• The executive, which runs in protected mode, provides the
basic system services
• On top of the executive, several server subsystems operate
in user mode
• Modular structure allows additional environmental
subsystems to be added without affecting the executive
• Portability — XP can be moved from one hardware architecture
to another with relatively few changes
• Written in C and C++
• Processor-dependent code is isolated in a dynamic link
library (DLL) called the “hardware abstraction layer” (HAL)
 Design Principles (Cont.)
• Reliability — XP uses hardware protection for virtual
memory, and software protection mechanisms for operating
system resources
• Compatibility — applications that follow the IEEE 1003.1
(POSIX) standard can be compiled to run on XP without
changing the source code
• Performance — XP subsystems can communicate with one
another via high-performance message passing
• Preemption of low-priority threads enables the system to
respond quickly to external events
• Designed for symmetric multiprocessing
• International support — supports different locales via the
national language support (NLS) API
 Depiction of XP Architecture