
Chapter 6: CPU Scheduling

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 6: CPU Scheduling
 Basic Concepts

 Scheduling Criteria

 Scheduling Algorithms

 Thread Scheduling

 Multiple-Processor Scheduling

 Real-Time CPU Scheduling

 Operating Systems Examples
Objectives
 CPU scheduling is the basis for multi-programmed operating systems

 Process Scheduling
 By switching among processes (see Chap-3)
– Increases productivity of the computer

 Thread Scheduling
 By switching among kernel threads (see Chap-4)

 To describe various CPU-scheduling algorithms

 To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

 To examine the scheduling algorithms of several operating systems
Basic Concepts
 Objective of multiprogramming: achieve maximum CPU utilization
 CPU always running a process
 No idle CPU

 CPU–I/O Burst Cycle:
 Process execution consists of a cycle of CPU execution and I/O wait
 When a process is waiting, assign the CPU to another process
 See Process Scheduler in Chap-3

 Each CPU burst is followed by an I/O burst
Histogram of CPU-burst Times

Diagram of Process State

CPU Scheduler
 Short-term scheduler selects from among the processes in ready
queue and allocates the CPU to one of them
 Queue may be ordered in various ways: FIFO, LIFO, Random,
Priority, … etc

 CPU scheduling decisions may take place when a process:


1. Switches from running to waiting state; ex: as result of I/O request
or wait()
2. Switches from running to ready state; ex: when an interrupt
occurs
3. Switches from waiting to ready; ex: at completion of I/O
4. Terminates

 Scheduling taking place only under circumstances 1 and 4 is nonpreemptive
 No choice in terms of scheduling; a new process must be selected for the CPU
 Process keeps the CPU until it either terminates or switches to the waiting state
Representation of Process Scheduling
 Queueing diagram represents queues, resources,
flows


Dispatcher
 Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

 Switching context

 Switching to user mode

 Jumping to the proper location in the user program to restart that program

 Dispatch latency – time it takes for the dispatcher to stop one process and start another running

 Dispatcher is invoked during every process switch; hence it should be as fast as possible
Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Ranges from 40% to 90% in a real system; i.e., from lightly to heavily loaded
 Scheduling algorithm optimization criteria: Max CPU utilization
 Throughput – # of processes that complete their execution per time
unit
 Ranges from 10 processes/second to 1 process/hour
 Scheduling algorithm optimization criteria: Max throughput
 Turnaround time – amount of time to execute a particular process
 Sum of times spent in job pool + ready queue + CPU execution +
doing I/O
 Scheduling algorithm optimization criteria: Min turnaround time
 Waiting time – amount of time a process has been waiting in the
ready queue
 Sum of times spent waiting in the ready queue
 Scheduling algorithm optimization criteria: Min waiting time
 Response time – amount of time it takes from when a request was submitted until the first response is produced, not the final output
 Scheduling algorithm optimization criteria: Min response time
First-Come First-Served (FCFS) Scheduling Algorithm

Process Burst Time

P1 24
P2 3
P3 3

 Suppose that the processes arrive in the order: P1 , P2 , P3

 The Gantt Chart for the schedule is: [includes start and finish time of
each process]

P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27

 Average waiting time: (0 + 24 + 27)/3 = 17

 The simplest scheduling algorithm, but usually yields a long average waiting time
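To make the arithmetic above concrete, here is a minimal sketch (not from the slides) that computes FCFS waiting times for any arrival order, assuming all processes are available at time 0:

```python
# Illustrative sketch: FCFS waiting times when all processes arrive at time 0.

def fcfs_waiting_times(bursts):
    """Return per-process waiting times under FCFS for a given arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier arrivals finish
        clock += burst
    return waits

order1 = [24, 3, 3]           # P1, P2, P3 -> waits [0, 24, 27], average 17
order2 = [3, 3, 24]           # P2, P3, P1 -> waits [0, 3, 6],   average 3
for order in (order1, order2):
    waits = fcfs_waiting_times(order)
    print(waits, sum(waits) / len(waits))
```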
FCFS Scheduling
Suppose that the processes arrive in the order: P2, P3, P1
 The Gantt chart for the schedule is:
P2 P3 P1

0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3; substantial reduction in
waiting time
 Much better than previous case
 Convoy effect - short process behind long process;
 CPU-bound holds CPU while I/O-bounds wait in ready queue for the CPU
 I/O devices are idle until the CPU is released… then the CPU is idle… and this repeats
 Consider one CPU-bound and many I/O-bound processes
 I/O-bounds spend most of the time waiting for CPU-bound to release
CPU
Shortest-Job-First (SJF) Scheduling Algorithm

 Associate with each process the length of its next CPU burst

 Use these lengths to schedule the process with the shortest time

 Always assign the CPU to the process that has the smallest next CPU
burst

 FCFS breaks the tie if two processes have the same next-CPU-burst length

 Better term: Shortest-Next-CPU-Burst-Scheduling (SNCB) algorithm

 But most books use SJF

 SJF is optimal – gives minimum average waiting time for a given set of
processes

 The difficulty is knowing the length of the next CPU request

 Could ask users to estimate their processes’ time limits; for job
scheduling
Example of SJF
Process  Burst Time
P1 6
P2 8
P3 7
P4 3
 SJF scheduling
chart
P4 P1 P3 P2
0 3 9 16 24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7

 With FCFS the average would be 10.25 time units with this order: P1 P2 P3 P4

 Moving a short process before a long one decreases its waiting time
more than it increases the long process’s waiting time. Thus, decrease
of average waiting time
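A small sketch (not part of the slides) of the same calculation: with all processes available at time 0, nonpreemptive SJF simply serves the bursts in ascending order:

```python
# Illustrative sketch: nonpreemptive SJF with all processes available at time 0.

def sjf_average_wait(bursts):
    waits, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = clock       # shortest remaining job runs next
        clock += burst
    return waits, sum(waits.values()) / len(waits)

waits, avg = sjf_average_wait({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(avg)     # 7.0
```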
Determining Length of Next CPU Burst
 SJF algorithm cannot be implemented at the level of short-term CPU scheduling
 No way to know the exact length of a process's next CPU burst
 Estimate it using the lengths of past bursts (e.g., next = average of all past bursts)
 Then pick the process with the shortest predicted next CPU burst

 Exponential averaging: next = weighted average of (past estimate + past actual)

1. tn = actual length of the nth CPU burst
2. τn+1 = predicted value for the next CPU burst
3. α, where 0 ≤ α ≤ 1
4. Define: τn+1 = α tn + (1 − α) τn

 Relative weight of recent history (tn) vs. past history (τn): α = ½ is usual, but any value in [0, 1] may be used
 Preemptive version is called shortest-remaining-time-first (SRTF)
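The exponential-averaging formula above is easy to express directly; the following sketch (illustrative only, with an assumed initial guess τ0 = 10 and α = 0.5) yields the predicted length of each next CPU burst:

```python
# Illustrative sketch: exponential averaging of CPU-burst lengths,
# tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, with assumed tau0 and alpha.

def predict_bursts(actual_bursts, tau0=10.0, alpha=0.5):
    """Yield the predicted length of each next CPU burst."""
    tau = tau0
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau   # blend newest burst with history
        yield tau

print(list(predict_bursts([6, 4, 6, 4, 13, 13, 13])))
# [8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```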
Prediction of the Length of the Next CPU Burst

Examples of Exponential Averaging
  =0
 n+1 = n
 Recent history does not count

  =1
 n+1 = tn
 Only the actual last CPU burst counts

 If we expand the formula, we get:

n+1 = tn + (1 - )tn-1 + … + (1 - )jtn-j + … + (1 -


)n+10

 Since both  and (1 - ) are less than or equal to 1, each successive


term has less weight than its predecessor
Example of Shortest-Remaining-Time-First
 Now we add the concepts of varying arrival times and preemption to
the analysis
Process  Arrival Time  Burst Time
P1  0  8
P2  1  4
P3  2  9
P4  3  5

 Preemptive SJF Gantt chart:

P1  P2  P4  P1  P3

0   1   5   10  17  26

 Average waiting time = [(10−1) + (1−1) + (17−2) + (5−3)]/4 = 26/4 = 6.5 time units

 Nonpreemptive SJF would result in 7.75 time units
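One way to reproduce the 6.5-time-unit result is a unit-by-unit simulation; this sketch (not from the slides) always runs the ready process with the smallest remaining time:

```python
# Illustrative sketch: shortest-remaining-time-first simulated one time unit at a time.

def srtf_waiting_times(procs):
    """procs: {name: (arrival, burst)}. Returns per-process waiting times."""
    remaining = {p: b for p, (a, b) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= clock]
        if not ready:                                # CPU idle until next arrival
            clock += 1
            continue
        p = min(ready, key=lambda q: remaining[q])   # smallest remaining time wins
        remaining[p] -= 1
        clock += 1
        if remaining[p] == 0:
            finish[p] = clock
            del remaining[p]
    # waiting time = turnaround time - burst time
    return {p: finish[p] - a - b for p, (a, b) in procs.items()}

waits = srtf_waiting_times({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits, sum(waits.values()) / len(waits))   # average 6.5
```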
Priority Scheduling
 A priority number (integer) is associated with each process

 The CPU is allocated to the process with the highest priority

 We assume that smallest integer ≡ highest priority


 Internally defined priority: based on criteria within OS. Ex:
memory needs
 Externally defined priority: based on criteria outside OS. Ex: paid
process

 Preemptive: a running process is preempted by a new higher


priority process
 Nonpreemptive: running process holds CPU until termination or I/O
request

 SJF is priority scheduling: priority is the inverse of predicted next CPU


burst time

 Problem  Starvation – some low priority process may never get the
CPU
Example of Priority Scheduling
Process  Burst Time  Priority
P1  10  3
P2  1   1
P3  2   4
P4  1   5
P5  5   2
 Priority scheduling Gantt chart:

P2  P5  P1  P3  P4

0   1   6   16  18  19

 Average waiting time = 8.2 msec
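For reference, a small sketch (not part of the slides) of nonpreemptive priority scheduling with all processes available at time 0; it reproduces the Gantt order and the 8.2 average above:

```python
# Illustrative sketch: nonpreemptive priority scheduling (smaller number = higher priority).

def priority_schedule(procs):
    """procs: {name: (burst, priority)}. Returns (run order, average waiting time)."""
    order = sorted(procs, key=lambda p: procs[p][1])   # highest priority first
    waits, clock = {}, 0
    for p in order:
        waits[p] = clock
        clock += procs[p][0]
    return order, sum(waits.values()) / len(waits)

order, avg = priority_schedule(
    {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)})
print(order, avg)   # ['P2', 'P5', 'P1', 'P3', 'P4'] 8.2
```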

Round Robin (RR)
 Each process gets a small unit of CPU time (time quantum or time
slice q), usually 10-100 milliseconds. After this time has elapsed,
the process is preempted and added to the end of the ready
queue.

 Ready queue is treated as a circular queue

 If there are n processes in the ready queue and the time quantum
is q, then each process gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits more than (n-
1)×q time units.

 Timer interrupts at every quantum to schedule next process

 Performance depends heavily on the size of the time quantum

 If q is very large  RR scheduling = FCFS scheduling

 If q is very small  too many context switches; q must be large with respect to the context-switch time
 Otherwise the overhead of the context switches becomes too high
Example of RR with Time Quantum = 4
Process  Burst Time
P1  24
P2  3
P3  3
 The Gantt chart is:

P1  P2  P3  P1  P1  P1  P1  P1

0   4   7   10  14  18  22  26  30

 Average waiting time under RR scheduling is


often long
 Typically, higher average turnaround than SJF, but better
response

 q should be large compared to context switch time

 q is usually between 10 ms and 100 ms; context-switch time is typically less than 10 μs
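A compact simulation of the q = 4 example above (illustrative only; processes are assumed to arrive at time 0 in FIFO order and context-switch cost is ignored):

```python
# Illustrative sketch: round robin with a fixed quantum, all arrivals at time 0.

from collections import deque

def rr_waiting_times(bursts, quantum):
    remaining = dict(bursts)
    last_ready = {p: 0 for p in bursts}       # when each process last became ready
    waits = {p: 0 for p in bursts}
    queue, clock = deque(bursts), 0
    while queue:
        p = queue.popleft()
        waits[p] += clock - last_ready[p]      # time just spent in the ready queue
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] > 0:                   # preempted: back to the tail
            last_ready[p] = clock
            queue.append(p)
    return waits

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits, sum(waits.values()) / len(waits))   # P1: 6, P2: 4, P3: 7 -> avg ~5.67
```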
Time Quantum and Context Switch Time

Turnaround Time Varies With The Time Quantum

 80% of CPU bursts should be shorter than q

Multilevel Queue
 Ready queue is partitioned into separate queues; e.g., two queues
containing
 foreground (interactive) processes
 May have externally defined priority over background processes
 background (batch) processes

 Processes are permanently assigned to a given queue; no movement to a different queue

 Each queue has its own scheduling needs and, hence, a different algorithm:
 foreground – RR or …
 background – FCFS or …

 Must schedule among the queues too (not just processes):


 Fixed priority scheduling; (i.e., serve all from foreground then from
background). Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time
Multilevel Queue Scheduling

Multilevel Feedback Queue
 A process can move between the queues; aging can be
implemented this way

 Move heavy foreground CPU-bound process to the background


queue

 Move starving background process to the foreground queue

 Multilevel-feedback-queue scheduler defined by the following


parameters:

 number of queues

 scheduling algorithms for each queue

 method used to determine when to upgrade a process

 method used to determine when to demote a process

 method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
 Three queues:

 Q0 – RR with time quantum 8 milliseconds


 Highest priority. Preempts Q1 and Q2
proc’s
 Q1 – RR time quantum 16 milliseconds
 Medium priority. Preempts processes in
Q2
 Q2 – FCFS
 Lowest priority

 Scheduling

 A new job enters queue Q0 which uses RR-


8
 When it gains CPU, job
receives 8 milliseconds
 If it does not finish in 8 milliseconds,
job is moved to queue Q1

 At Q1 the job receives 16 additional milliseconds; if it still does not complete, it is preempted and moved to queue Q2
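A minimal sketch (not from the slides) of the demotion rule in this three-queue example, using made-up process names and burst lengths and ignoring new arrivals:

```python
# Illustrative sketch: demotion through Q0 (RR-8), Q1 (RR-16), Q2 (FCFS).

from collections import deque

def mlfq_run(bursts, quanta=(8, 16)):
    """Trace (process, queue level, time given) for each service; Q2 is FCFS."""
    queues = [deque(bursts.items()), deque(), deque()]
    trace = []
    for level in range(3):
        while queues[level]:
            name, remaining = queues[level].popleft()
            slice_ = remaining if level == 2 else min(quanta[level], remaining)
            trace.append((name, level, slice_))
            if remaining > slice_:                    # unfinished: demote one level
                queues[level + 1].append((name, remaining - slice_))
    return trace

print(mlfq_run({"A": 5, "B": 30}))
# [('A', 0, 5), ('B', 0, 8), ('B', 1, 16), ('B', 2, 6)]
```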
Thread Scheduling
 Distinction between user-level and kernel-level threads

 When the OS supports threading, kernel threads are scheduled, not processes

 On an OS using the many-to-one or many-to-many mapping model, the thread library schedules user threads to run on an available lightweight process (LWP)
 Known as process-contention scope (PCS) since scheduling


competition is within the process; i.e., among user threads
belonging to the same process

 Which user thread goes to the kernel…?

 Typically done via user thread priorities set by the programmer

 Kernel threads scheduled onto an available CPU use system-contention scope (SCS)
 Competition is among all kernel threads in the system
Real-Time CPU Scheduling
 Can present obvious
challenges

 Soft real-time systems – no guarantee as to when a critical real-time process will be scheduled

 Hard real-time systems – a task must be serviced by its deadline

 Event latency: the amount of time that elapses from when an event occurs to when it is serviced

 Two types of event latencies


affect performance
1. Interrupt latency – time from
arrival of interrupt to start of
routine that services interrupt
2. Dispatch latency – time for the scheduling dispatcher to stop one process and start another
Real-Time CPU Scheduling
[Dispatch Latency]

 Conflict phase of
dispatch latency:

1. Preemption of any
process
running in kernel
mode

2. Release by low-
priority processes of
resources needed by
a high-priority
process

Priority-Based Scheduling
 A real-time OS must respond immediately to a real-time process when it requests the CPU
 Real-time scheduler must support preemptive priority-based
algorithm
 But only guarantees soft real-time functionality
 Hard real-time systems must also provide ability to meet task
deadlines
 Periodic processes require CPU at constant intervals
– Each process has a processing time t, a deadline d, and a period p
– 0≤t≤d≤p
– Rate of periodic task is 1/p
 Admission control: a process announces its requirements; the scheduler then admits the process if it can guarantee completion on time, or rejects the request if it cannot service it by the deadline

Rate-Monotonic Scheduling
 Static priority policy with preemption
 A priority is assigned to a process based on the inverse of its
period p
 Shorter periods = higher priority; Longer periods = lower
priority
 Assumes processing time t is the same for each CPU burst
 Rationale: higher priority to tasks that require the CPU more
often
 Example:
 Process P1 : t1 = 20, d1 = complete CPU burst by start of next period, p1
= 50.
 Process P2 : t2 = 35, d2 = complete CPU burst by start of next period, p2
= 100.
 P1 is assigned a higher priority than P2.
 CPU utilization = ti / pi. Thus total CPU utilization = 20/50 + 35/100
= 75%
Missed Deadlines with Rate-Monotonic Scheduling

 Process P1 : t1 = 25, d1 = complete CPU burst by start of next period, p1 =


50.
 Process P2 : t2 = 35, d2 = complete CPU burst by start of next period, p2 =
80.
 Total CPU utilization = 25/50 + 35/80 = 94%
 P1 is assigned a higher priority than P2.
 P2 has missed the deadline for completion of its CPU burst at time 80
 Reason – bounded CPU utilization; worst case for N processes: B = N(2^(1/N) − 1)
 Bounded at B ≈ 83% for N = 2 processes
 Theory: cannot guarantee that processes can be scheduled to
meet their deadlines if actual CPU utilization > Bound B
 94% > 83% in the example above

Earliest Deadline First Scheduling (EDF)
 Dynamic priority policy with preemption
 Priorities are dynamically assigned according to deadlines:
 Earlier deadline = higher priority; later deadline = lower priority
 New runnable process must announce its deadline requirements to
scheduler
 Scheduler will adjust current priorities accordingly
 Process need not be periodic
 CPU burst time need not be constant

 Process P1 : t1 = 25, d1 = complete CPU burst by start of next period, p1 = 50.


 Process P2 : t2 = 35, d2 = complete CPU burst by start of next period, p2 = 80.
 P1 is initially assigned a higher priority than P2 since d1 = 50 ≤ d2 = 80
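At its core, EDF's decision rule is simply "run the ready task with the earliest absolute deadline"; a minimal sketch (not from the slides):

```python
# Illustrative sketch: EDF picks the ready task whose absolute deadline is earliest.

def edf_pick(ready_tasks):
    """ready_tasks: {name: absolute_deadline}. Returns the task to run next."""
    return min(ready_tasks, key=ready_tasks.get)

# At time 0, P1's first deadline is 50 and P2's is 80, so P1 runs first.
print(edf_pick({"P1": 50, "P2": 80}))   # P1
```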

Proportional Share Scheduling
 T shares of time are allocated among all processes in the system

 An application receives N shares of time where N < T

 This ensures each application will receive N / T of the total processor


time

 Example: three processes A, B and C, with T = 100 shares

 A, B and C assigned each 50, 15 and 20 shares, respectively

 Thus A will have 50% of the total processor time, and so on for B and C

 Scheduler must use admission-control policy:

 Admit a request only if sufficient shares are available

 Example: if a new process D requests 30 shares, the scheduler will deny it the CPU, since only 15 of the 100 shares remain unallocated
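The admission-control rule is a one-line check; a sketch (illustrative only) with the share allocations from the example above:

```python
# Illustrative sketch: admission control for a proportional-share scheduler, T = 100 shares.

def admit(allocated, request, total=100):
    """Admit a new request only if enough unallocated shares remain."""
    return sum(allocated.values()) + request <= total

shares = {"A": 50, "B": 15, "C": 20}          # 85 of 100 shares already allocated
print(admit(shares, 30))   # False -- D's request for 30 shares is denied
print(admit(shares, 10))   # True  -- a 10-share request would fit
```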
End of Chapter 6

Multiple-Processor Scheduling
 CPU scheduling more complex when multiple
CPUs are available
 Homogeneous processors within a multiprocessor
 Asymmetric multiprocessing – only one processor
accesses the system data structures, alleviating the
need for data sharing
 Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes in a common ready queue, or each processor has its own private queue of ready processes
 Currently, most common
 Processor affinity – process has affinity for processor on
which it is currently running
 soft affinity
 hard affinity
 Variations including processor sets

NUMA and CPU Scheduling

Note that memory-placement algorithms can also consider affinity

Multiple-Processor Scheduling – Load Balancing

 If SMP, need to keep all CPUs loaded for efficiency


 Load balancing attempts to keep workload evenly
distributed
 Push migration – a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs
 Pull migration – an idle processor pulls a waiting task from a busy processor

Multicore Processors

 Recent trend to place multiple processor cores


on same physical chip
 Faster and consumes less power
 Multiple threads per core also growing
 Takes advantage of memory stalls to make progress on another thread while the memory retrieval completes

Multithreaded Multicore System

Virtualization and Scheduling
 Virtualization software schedules multiple
guests onto CPU(s)
 Each guest doing its own scheduling
 Not knowing it doesn’t own the CPUs
 Can result in poor response time
 Can affect time-of-day clocks in guests
 Can undo good scheduling algorithm efforts of
guests

Algorithm Evaluation
 How to select CPU-scheduling algorithm for an OS?
 Determine criteria, then evaluate algorithms
 Deterministic modeling
 Type of analytic evaluation
 Takes a particular predetermined workload and
defines the
performance of each algorithm for that workload
 Consider 5 processes arriving at time 0:

Deterministic Evaluation

 For each algorithm, calculate minimum average waiting


time
 Simple and fast, but requires exact numbers for input,
applies only to those inputs
 FCFS is 28ms:

 Non-preemptive SJF is 13ms:

 RR is
23ms:

Queueing Models
 Describes the arrival of processes, and CPU and I/O
bursts probabilistically
 Commonly exponential, and described by mean
 Computes average throughput, utilization,
waiting time, etc
 Computer system described as network of servers,
each with queue of waiting processes
 Knowing arrival rates and service rates
 Computes utilization, average queue length,
average wait time, etc

Little’s Formula
 n = average queue length
 W = average waiting time in queue
 λ = average arrival rate into queue
 Little’s law – in steady state, processes leaving queue
must equal processes arriving, thus:
n=λxW
 Valid for any scheduling algorithm and arrival
distribution
 For example, if on average 7 processes arrive per
second, and normally 14 processes in queue, then
average wait time per process = 2 seconds
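The example above is just n = λ × W rearranged; a tiny sketch (illustrative only):

```python
# Illustrative sketch: Little's formula n = lambda * W, solved for the average wait.

def average_wait(queue_length, arrival_rate):
    return queue_length / arrival_rate

print(average_wait(queue_length=14, arrival_rate=7))   # 2.0 seconds
```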

Simulations
 Queueing models limited
 Simulations more accurate
 Programmed model of computer system
 Clock is a variable
 Gather statistics indicating algorithm performance
 Data to drive simulation gathered via
 Random number generator according to probabilities
 Distributions defined mathematically or empirically
 Trace tapes record sequences of real events in real
systems

Evaluation of CPU Schedulers by Simulation

Implementation
 Even simulations have limited accuracy
 Just implement new scheduler and test in real systems
 High cost, high risk
 Environments vary
 Most flexible schedulers can be modified per-site or
per-system
 Or APIs to modify priorities
 But again environments vary

