ch02.3 - CPU Scheduling
Objectives
Operating System Concepts – 10th Edition 5a.2 Silberschatz, Galvin and Gagne ©2018
Outline
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Scheduling in OS
Basic Concepts
Single program: a process is executed until it must
wait for some I/O request.
• The CPU then just sits idle.
• Waiting time is wasted; no useful work is
accomplished.
Multiprogramming: maximizes CPU utilization.
• Several processes are kept in memory at one time.
• When one process has to wait, the OS takes the
CPU away from that process and gives it to
another process.
• On a multicore system, keeping the CPU busy is
extended to all processing cores.
Histogram of CPU-burst Times
CPU burst distribution is of main concern:
• An I/O-bound program typically has many short CPU bursts.
• A CPU-bound program might have a few long CPU bursts.
The distribution generally shows a large number of short CPU bursts and a small
number of long CPU bursts.
This distribution can be important when implementing a CPU-scheduling algorithm.
CPU Scheduler
The CPU scheduler selects from among the processes in ready queue,
and allocates a CPU core to one of them
• The ready queue may be ordered in various ways: FIFO, by priority, SJF, etc.
• The records in the queues are generally process control blocks (PCBs)
of the processes
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state => scheduling required
2. Switches from running to ready state => a choice (optional)
3. Switches from waiting to ready state => a choice (optional)
4. Terminates => scheduling required
Preemptive and Nonpreemptive Scheduling
For situations 1 and 4, there is no choice in terms of scheduling. A new process
(from the ready queue) must be selected for execution:
• nonpreemptive or cooperative scheduling.
For situations 2 and 3, however, there is a choice:
• preemptive scheduling.
Nonpreemptive (cooperative, exclusive) scheduling
• is used when a process terminates, or switches from running to waiting.
• Once the CPU is allocated to a process, the process holds the CPU
until it terminates or reaches a waiting state.
Preemptive (non-exclusive) scheduling
• is used when a process switches from running or waiting to ready.
• The CPU is allocated to a process for a limited amount of time
and then taken away; the process is placed back in the ready
queue to await its next chance to execute.
Virtually all modern operating systems, including Windows, macOS, Linux, and
UNIX, use preemptive scheduling algorithms.
Comparison:
• Preemptive scheduling: resources (CPU cycles) are allocated to a process
for a limited time.
• Non-preemptive scheduling: once resources (CPU cycles) are allocated to a
process, the process holds them until it completes its burst time or switches
to the waiting state.
Preemptive Scheduling and Race Conditions
Preemptive scheduling can result in race conditions when data are shared
among several processes.
Consider the case of two processes that share data (Race Conditions).
• While one process is updating the data, it is preempted so that the
second process can run.
• The second process then tries to read the data, which are in an
inconsistent state.
We saw this in the bounded buffer example
This issue will be explored in detail in Chapter 6.
Dispatcher
Dispatcher module gives control of the CPU to
the process selected by the CPU scheduler; this
involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user
program to restart that program
Dispatch latency – time it takes for the
dispatcher to stop one process and start another
running
Scheduling Criteria
Min Waiting time – amount of time a process has been waiting in the
ready queue
Min Response time – amount of time it takes from when a request was
submitted until the first response is produced.
Optimization Criteria for Scheduling Algorithms
Outline
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Scheduling Algorithms
First- Come, First-Served (FCFS) Scheduling
FCFS: the process that requests the CPU first is allocated the CPU first
The implementation of the FCFS policy is easily managed with a FIFO queue.
• When a process enters the ready queue,
• its PCB is linked onto the tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the
queue.
• The running process is then removed from the queue.
The average waiting time under the FCFS policy is often quite long.
Scheduling Algorithms
FCFS Scheduling (Cont.)
Example: consider one CPU-bound and many I/O-bound processes
• The CPU-bound process will get and hold the CPU; meanwhile, all the other
processes finish their I/O and move into the ready queue, waiting for the CPU.
=> I/O devices are idle.
• Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O
device. All the I/O-bound processes execute quickly and move back to the I/O queues.
=> CPU sits idle.
This is the convoy effect: one slow process slows down the performance of
the entire set of processes, wasting CPU time and other devices.
To avoid the convoy effect, preemptive scheduling algorithms like Round Robin
can be used,
• since the smaller processes don't have to wait as long for CPU time, making
their execution faster and leaving fewer resources sitting idle.
Shortest-Job-First (SJF) Scheduling
The SJF algorithm can be either preemptive or nonpreemptive.
• Non-preemptive SJF algorithm will allow the currently running process to
finish its CPU burst.
• A preemptive SJF algorithm will preempt the currently executing process
(if a new process arrives with a CPU burst shorter than the remaining time of the
currently executing process, the new process runs; this variant is called
shortest-remaining-time-first, SRTF)
Moving a short process before a long one decreases the waiting time of the short
process more than it increases the waiting time of the long process. Consequently, the
average waiting time decreases.
Gantt charts (example with four processes):
Nonpreemptive SJF:  | P2 | P0 | P3 | P1 |
                    0    6    8    12   16
Preemptive SJF:     | P2 | P3 | P0 | P1 | P2 |
                    0    1    5    7    11   16
Example of SRTF (Preemptive SJF)
Preemptive SJF analysis: favor the arrived process with the smallest remaining CPU time.

Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

Analysis: trace of remaining burst times
time  0  1  2  3  5  10  17  26
P1    8  7  7  7  7   7   0   0
P2    -  4  3  2  0   0   0   0
P3    -  -  9  9  9   9   9   0
P4    -  -  -  5  5   0   0   0

Gantt Chart:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
Determining Length of Next CPU Burst
Can only estimate the length – it should be similar to the previous bursts
• Then pick the process with the shortest predicted next CPU burst
Can be done by using the length of previous CPU bursts, with exponential
averaging.
Equation: τn+1 = α·tn + (1 − α)·τn
• τn+1 = predicted value of the next CPU burst
• tn = actual length of the nth CPU burst
• α, 0 ≤ α ≤ 1
• τ0 is a constant (the initial estimate)
Examples of Exponential Averaging
α = 0
• τn+1 = τn
• Recent history does not count
α = 1
• τn+1 = α·tn = tn
• Only the actual last CPU burst counts
If we expand the formula, we get:
τn+1 = α·tn + (1 − α)·α·tn−1 + …
       + (1 − α)^j·α·tn−j + …
       + (1 − α)^(n+1)·τ0
Since both α and (1 − α) are at most 1, each successive term has less weight
than its predecessor.
Scheduling Algorithms
Round Robin (RR)
RR: similar to FCFS scheduling, but preemption is added to enable the
system to switch between processes
Each process gets a small unit of CPU time (a time quantum q),
• usually 10-100 milliseconds.
• After this time has elapsed, the process is preempted and added to the
end of the ready queue.
If there are n processes in the ready queue and the time quantum is q,
• then each process gets 1/n of the CPU time in chunks of at most q
time units at once.
• No process waits more than (n-1)q time units.
Timer interrupts every quantum to schedule next process
Time Quantum and Context Switch Time
Performance depends on the size of the quantum:
• q large => RR behaves like FIFO (same as FCFS)
• q small => frequent context switches; q must be large with respect to the
context-switch time, otherwise the overhead is too high
=> performance decreases
Rule:
80% of CPU bursts should
be shorter than q
Scheduling Algorithms
Priority Scheduling
Example of Priority Scheduling
Run the process with the highest priority. Processes with the same
priority run round-robin
Example:
Process  Burst Time  Priority
P1       4           3
P2       5           2
P3       8           2
P4       7           1
P5       3           3
Gantt Chart with time quantum = 2
Scheduling Algorithms
Multilevel Queue
The ready queue consists of multiple queues
Example:
• Priority scheduling, where each priority has its separate queue.
• Schedule the process in the highest-priority queue!
Multilevel Queue
Scheduling Algorithms
Multilevel Feedback Queue
A process can move between the various queues.
Multilevel-feedback-queue scheduler defined by the following parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine when to upgrade a process
• Method used to determine when to demote a process
• Method used to determine which queue a process will enter when that
process needs service
Aging can be implemented using multilevel feedback queue
Scheduling:
• A new process enters queue Q0 which is served in RR
When it gains CPU, the process receives 8 milliseconds
If it does not finish in 8 milliseconds, the process is moved to queue Q1
• At Q1 job is again served in RR and receives 16 additional milliseconds
If it still does not complete, it is preempted and moved to queue Q2
Example
Queues: Q1 (RR, q = 8), Q2 (RR, q = 16), Q3 (FCFS).
Jobs: P1 arrives at 0 with burst 36, P2 at 16 with burst 20, P3 at 20 with burst 12.
Timeline: 0-8    8-16   16-24  24-32  32-48  48-60  60-64  64-68
Runs:     Q1:P1  Q2:P1  Q1:P2  Q1:P3  Q2:P1  Q2:P2  Q2:P3  Q3:P1
Explanation:
• At 0: P1 enters Q1 and runs with q = 8; not finished, it moves to Q2. Q1 is now empty.
• At 8: Since Q1 is empty, P1 runs from Q2.
• At 16: P2 arrives in Q1. Q1 has higher priority than Q2, so P2 runs and P1 is
preempted with 20 remaining (36-8-8). P2 runs with q = 8; by 24 it will have 12 left (20-8).
• At 20: P3 arrives in Q1, but P2 has not finished its quantum. Both are now in Q1,
so P2 completes its q = 8 with 12 remaining (20-8) and moves to Q2. Now Q1: P3; Q2: P1, P2.
• At 24: P3 runs from Q1 (higher priority than Q2); after q = 8 it is preempted with
4 remaining (12-8) and moves to Q2. Now Q1: empty; Q2: P1, P2, P3.
• At 32: By RR, P1 in Q2 runs first with q = 16, leaving 4 (20-16); P1 moves to Q3. Q2: P2, P3.
• At 48: By RR, P2 in Q2 runs, since Q2 has priority over Q3; after 12 ms, P2 is done at 60.
• At 60: P2 is done; P3 runs its remaining 4 ms and finishes at 64. Q2 is now empty, so Q3 runs.
• At 64: P1 in Q3 runs its remaining 4 ms and finishes at 68.
Chapter 2: Process Management
Outline
Thread Scheduling
Multi-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Objectives
Describe various CPU scheduling algorithms
Assess CPU scheduling algorithms based on scheduling criteria
Explain the issues related to multiprocessor and multicore scheduling
Describe various real-time scheduling algorithms
Describe the scheduling algorithms used in the Windows, Linux, and
Solaris operating systems
Apply modeling and simulations to evaluate CPU scheduling
algorithms
Thread Scheduling
Distinction between user-level and kernel-level threads
When threads supported, threads scheduled, not processes
Many-to-one and many-to-many models, thread library schedules
user-level threads to run on LWP (light weight process)
• Known as process-contention scope (PCS) since scheduling
competition is within the process
• Typically done via priority set by programmer
Kernel thread scheduled onto available CPU is system-contention
scope (SCS) – competition among all threads in system
Thread Scheduling
Pthread Scheduling
API allows specifying either PCS or SCS during thread creation
• PTHREAD_SCOPE_PROCESS schedules threads using PCS
scheduling
• PTHREAD_SCOPE_SYSTEM schedules threads using SCS
scheduling
Can be limited by OS – Linux and macOS only allow
PTHREAD_SCOPE_SYSTEM
Pthread Scheduling API
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param); /* threads begin control in this function */

int main(int argc, char *argv[]) {
    int i, scope;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;
    /* get the default attributes */
    pthread_attr_init(&attr);
    /* first inquire on the current scope */
    if (pthread_attr_getscope(&attr, &scope) != 0)
        fprintf(stderr, "Unable to get scheduling scope\n");
    else {
        if (scope == PTHREAD_SCOPE_PROCESS)
            printf("PTHREAD_SCOPE_PROCESS");
        else if (scope == PTHREAD_SCOPE_SYSTEM)
            printf("PTHREAD_SCOPE_SYSTEM");
        else
            fprintf(stderr, "Illegal scope value.\n");
    }
    /* set the scheduling scope to system-contention scope */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);
    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

/* each thread does its work here */
void *runner(void *param) {
    /* do some work ... */
    pthread_exit(0);
}
Outline
Thread Scheduling
Multi-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Multiple-Processor Scheduling
CPU scheduling more complex when multiple CPUs are available
A multiprocessor may be any one of the following architectures:
• Multicore CPUs
• Multithreaded cores
• NUMA systems
• Heterogeneous multiprocessing (as opposed to homogeneous systems,
where all processors are identical)
Multiple-Processor Scheduling
Multicore Processors
Recent trend to place multiple processor cores on same physical chip
Faster and consumes less power
Multiple threads per core also growing
• Takes advantage of memory stall to make progress on another
thread while memory retrieve happens
Multithreaded Multicore System
Each core has more than one hardware thread.
If one thread has a memory stall, switch to another thread!
Chip-multithreading (CMT)
assigns each core multiple
hardware threads. (Intel refers to
this as hyperthreading.)
Multithreaded Multicore System
Multiple-Processor Scheduling – Processor Affinity
When a thread has been running on one processor, the cache contents
of that processor stores the memory accesses by that thread.
• We refer to this as a thread having affinity for a processor (i.e.,
“processor affinity”)
Load balancing may affect processor affinity as a thread may be moved
from one processor to another to balance loads, yet that thread loses
the contents of what it had in the cache of the processor it was moved
off of.
Soft affinity – the operating system attempts to keep a thread running
on the same processor, but no guarantees.
Hard affinity – allows a process to specify a set of processors it may
run on.
Outline
Thread Scheduling
Multi-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Real-Time CPU Scheduling
Event latency – the amount of time
that elapses from when an event
occurs to when it is serviced.
Two types of latencies affect
performance
1. Interrupt latency – time from
arrival of interrupt to start of
routine that services interrupt
2. Dispatch latency – time for
schedule to take current process
off CPU and switch to another
Interrupt Latency
The interrupt latency refers to the delay between the start of an Interrupt
Request (IRQ) and the start of the respective Interrupt Service Routine (ISR).
The interrupt latency is expressed in core clock cycles.
Dispatch Latency
Priority-based Scheduling
For real-time scheduling, the scheduler must support preemptive,
priority-based scheduling
• But only guarantees soft real-time
For hard real-time must also provide ability to meet deadlines
Processes have new characteristics: periodic ones require CPU at
constant intervals
• Has processing time t, deadline d, period p
• 0≤t≤d≤p
• Rate of periodic task is 1/p
Rate Monotonic Scheduling
A priority is assigned based on the inverse of its period
Shorter periods = higher priority;
Longer periods = lower priority
P1 is assigned a higher priority than P2.
Earliest Deadline First Scheduling (EDF)
POSIX Real-Time Scheduling
The POSIX.1b standard
API provides functions for managing real-time threads
Defines two scheduling classes for real-time threads:
1. SCHED_FIFO - threads are scheduled using a FCFS strategy with
a FIFO queue. There is no time-slicing for threads of equal priority
2. SCHED_RR - similar to SCHED_FIFO except time-slicing occurs
for threads of equal priority
Defines two functions for getting and setting the scheduling policy:
1. pthread_attr_getschedpolicy(pthread_attr_t *attr, int *policy)
2. pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy)
POSIX Real-Time Scheduling API (Cont.)
Outline
Thread Scheduling
Multi-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Operating System Examples
Linux scheduling
Windows scheduling
Solaris scheduling
Linux Scheduling in Version 2.6.23 +
Completely Fair Scheduler (CFS)
Scheduling classes
• Each has specific priority
• Scheduler picks highest priority task in highest scheduling class
• Rather than quantum based on fixed time allotments, based on
proportion of CPU time
• Two scheduling classes included, others can be added
1. default
2. real-time
CFS Performance
Linux Scheduling (Cont.)
Linux supports load balancing, but is also NUMA-aware.
Scheduling domain is a set of CPU cores that can be balanced
against one another.
Domains are organized by what they share (e.g., cache memory). The goal
is to keep threads from migrating between domains.
Windows Scheduling
Windows uses priority-based preemptive scheduling
Highest-priority thread runs next
Dispatcher is scheduler
Thread runs until (1) blocks, (2) uses time slice, (3)
preempted by higher-priority thread
Real-time threads can preempt non-real-time
32-level priority scheme
Variable class is 1-15, real-time class is 16-31
Priority 0 is memory-management thread
Queue for each priority
If no run-able thread, runs idle thread
Windows Priority Classes
Win32 API identifies several priority classes to which a process can
belong
• REALTIME_PRIORITY_CLASS, HIGH_PRIORITY_CLASS,
ABOVE_NORMAL_PRIORITY_CLASS, NORMAL_PRIORITY_CLASS,
BELOW_NORMAL_PRIORITY_CLASS, IDLE_PRIORITY_CLASS
• All are variable except REALTIME
A thread within a given priority class has a relative priority
• TIME_CRITICAL, HIGHEST, ABOVE_NORMAL, NORMAL,
BELOW_NORMAL, LOWEST, IDLE
Priority class and relative priority combine to give numeric priority
Base priority is NORMAL within the class
If quantum expires, priority lowered, but never below base
Windows Priorities
Solaris
Priority-based scheduling
Six classes available
• Time sharing (default) (TS)
• Interactive (IA)
• Real time (RT)
• System (SYS)
• Fair Share (FSS)
• Fixed priority (FP)
Given thread can be in one class at a time
Each class has its own scheduling algorithm
Time sharing is multi-level feedback queue
• Loadable table configurable by sysadmin
Solaris Dispatch Table
Solaris Scheduling
Solaris Scheduling (Cont.)
Scheduler converts class-specific priorities into a per-thread
global priority
• Thread with highest priority runs next
• Runs until (1) blocks, (2) uses time slice, (3) preempted by
higher-priority thread
• Multiple threads at same priority selected via RR
Outline
Thread Scheduling
Multi-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Algorithm Evaluation
How to select CPU-scheduling algorithm for an OS?
Determine criteria, then evaluate algorithms
Deterministic modeling
• Type of analytic evaluation
• Takes a particular predetermined workload and defines
the performance of each algorithm for that workload
Consider 5 processes arriving at time 0:
Deterministic Evaluation
• Example: with q = 10, the average waiting time under RR is 23 ms
Queueing Models
Describes the arrival of processes, and CPU and I/O bursts
probabilistically
• Commonly exponential, and described by mean
• Computes average throughput, utilization, waiting time, etc.
Computer system described as network of servers, each with queue of
waiting processes
• Knowing arrival rates and service rates
• Computes utilization, average queue length, average wait time,
etc.
Little’s Formula
n = average queue length
W = average waiting time in queue
λ = average arrival rate into queue
Little’s law – in steady state, processes leaving queue must
equal processes arriving, thus:
n=λxW
• Valid for any scheduling algorithm and arrival distribution
For example, if on average 7 processes arrive per second, and
normally 14 processes in queue, then average wait time per
process = 2 seconds
Simulations
Queueing models limited
Simulations more accurate
• Programmed model of computer system
• Clock is a variable
• Gather statistics indicating algorithm performance
• Data to drive simulation gathered via
Random number generator according to probabilities
Distributions defined mathematically or empirically
Trace tapes record sequences of real events in real systems
Implementation
Even simulations have limited accuracy
Just implement new scheduler and test in real systems
• High cost, high risk
• Environments vary
Most flexible schedulers can be modified per-site or per-system
Or APIs to modify priorities
But again environments vary
End of Chapter 5