Chapter 5 discusses CPU scheduling, a critical function in operating systems that enhances CPU utilization through multiprogramming. It covers various scheduling algorithms, including First-Come, First-Served (FCFS), Shortest-Job-First (SJF), Round Robin (RR), and Priority Scheduling, along with their advantages and disadvantages. The chapter also outlines scheduling criteria such as CPU utilization, throughput, turnaround time, waiting time, and response time, which are essential for evaluating the effectiveness of scheduling algorithms.


Chapter 5: CPU Scheduling

Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Outline
■ Basic Concepts
■ Scheduling Criteria
■ Scheduling Algorithms

Objectives

■ To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
■ To describe various CPU-scheduling algorithms
■ To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

Basic Concepts

■ Maximum CPU utilization is obtained with multiprogramming

■ In a simple computer system, whenever one process has to wait (for example,
  for I/O to complete), the CPU simply sits idle. All this waiting time is
  wasted; no useful work is accomplished.

■ With multiprogramming, we try to use this time productively. When one process
  has to wait, the operating system takes the CPU away from that process and
  gives the CPU to another process. This pattern continues, and thus CPU
  scheduling is a fundamental OS function.

Basic Concepts

■ CPU–I/O Burst Cycle
  • Process execution consists of a cycle of CPU execution and I/O wait
  • Process execution begins with a CPU burst, followed by an I/O burst,
    followed by another CPU burst, and so on. Finally, it ends with a CPU
    burst whose last request is to terminate execution.
  • The distribution of CPU-burst lengths is of main concern

Histogram of CPU-burst Times

■ Large number of short CPU bursts
■ Small number of longer CPU bursts

CPU Scheduler

■ The CPU scheduler selects from among the processes in the ready queue, and
  allocates a CPU core to one of them
  • The queue may be ordered in various ways

■ CPU scheduling decisions may take place when a process:
  1. Switches from running to waiting state
  2. Switches from running to ready state
  3. Switches from waiting to ready state
  4. Terminates

  • For situations 1 and 4, there is no choice in terms of scheduling: a new
    process (if one exists in the ready queue) must be selected for execution.
  • For situations 2 and 3, however, there is a choice, because other processes
    are waiting in the ready queue.

Preemptive and Nonpreemptive Scheduling

■ When scheduling takes place only under circumstances 1 and 4, the scheduling
  scheme is nonpreemptive; otherwise, it is preemptive.

■ Under nonpreemptive scheduling, once the CPU has been allocated to a process,
  the process keeps the CPU until it releases it, either by terminating or by
  switching to the waiting state.

■ Preemptive scheduling:
  • The OS can force (preempt) a process off the CPU at any time, e.g., on an
    interrupt, in order to allocate the CPU to another, higher-priority process
  • However, it can result in race conditions (Chapter 6) when data are shared
    among several processes
  • Virtually all modern operating systems, including Windows, macOS, Linux,
    and UNIX, use preemptive scheduling algorithms.

Preemptive Scheduling and Race Conditions

■ Preemptive scheduling can result in race conditions when data are shared
  among several processes.
■ Consider the case of two processes that share data. While one process is
  updating the data, it is preempted so that the second process can run. The
  second process then tries to read the data, which are in an inconsistent state.
■ This issue will be explored in detail in Chapter 6.

Dispatcher

■ The dispatcher module gives control of the CPU to the process selected by the
  CPU scheduler; this involves:
  • Switching context
  • Switching to user mode
  • Jumping to the proper location in the user program to restart that program

■ Dispatch latency – the time it takes for the dispatcher to stop one process
  and start another running

Scheduling Criteria

■ CPU utilization – keep the CPU as busy as possible (typically 40-90%)
■ Throughput – number of processes that complete their execution per time unit
■ Turnaround time – amount of time to execute a particular process. This includes:
  • Time spent waiting to get into memory
  • Waiting in the ready queue
  • Executing on the CPU
  • Doing I/O
■ Waiting time – amount of time a process has been waiting in the ready queue
  (the scheduling algorithm affects only this component)
■ Response time – amount of time from when a request was submitted until the
  first response is produced; in the examples that follow, it is the difference
  between a process's arrival time and the time at which it first gets the CPU
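
For the single-CPU-burst examples used in the rest of this chapter (no I/O),
these criteria are tied together by a simple bookkeeping relation that the
worked examples below rely on:

    turnaround time = completion time - arrival time
    waiting time    = turnaround time - CPU burst time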
Scheduling Algorithm Optimization Criteria

■ Max CPU utilization
■ Max throughput
■ Min turnaround time
■ Min waiting time
■ Min response time

First-Come, First-Served (FCFS) Scheduling

• The simplest algorithm to implement, using a FIFO queue.
• With this scheme, the process that requests the CPU first is allocated the CPU first.
• On the negative side, the average waiting time under the FCFS policy is often quite long.

    Process   Burst Time (ms)
    P1        24
    P2        3
    P3        3

■ Suppose that the processes arrive at time 0 in the order: P1, P2, P3.
  The Gantt chart for the schedule is:

    | P1 | P2 | P3 |
    0    24   27   30

■ Waiting time for P1 = 0; P2 = 24; P3 = 27
■ Average waiting time: (0 + 24 + 27)/3 = 17 ms

1. Example of FCFS Scheduling (Cont.)

■ Suppose that the processes arrive in the order: P2, P3, P1.
  The Gantt chart for the schedule is:

    | P2 | P3 | P1 |
    0    3    6    30

■ Waiting time for P1 = 6; P2 = 0; P3 = 3
■ Average waiting time: (6 + 0 + 3)/3 = 3 ms
■ Much better than the previous case

■ Convoy effect – short processes stuck behind a long process may lower CPU and
  device utilization. Consider one CPU-bound process and many I/O-bound processes.
■ FCFS is non-preemptive
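
The two averages above can be reproduced with a few lines of code. A minimal
sketch (not part of the original slides), assuming all three processes arrive
at time 0:

# Minimal FCFS sketch: processes are served in arrival order, so each
# process waits for the sum of the bursts scheduled before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent in the ready queue before starting
        elapsed += burst
    return waits

for order, bursts in [("P1,P2,P3", [24, 3, 3]), ("P2,P3,P1", [3, 3, 24])]:
    waits = fcfs_waiting_times(bursts)
    print(order, waits, "average =", sum(waits) / len(waits))
# Prints averages 17.0 and 3.0 ms, matching the two examples above.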

Shortest-Job-First (SJF) Scheduling

■ Associate with each process the length of its next CPU burst. Use these
  lengths to schedule the process with the shortest time.

■ Two schemes:
  • Non-preemptive – once the CPU is given to a process, it cannot be preempted
    until it completes its CPU burst
  • Preemptive – if a new process arrives with a CPU burst length less than the
    remaining time of the currently executing process, preempt. This scheme is
    known as Shortest-Remaining-Time-First (SRTF).

1. Example of SJF (Non-Preemptive)

    Process   Burst Time
    P1        6
    P2        8
    P3        7
    P4        3

■ Suppose all four processes arrive at time 0. The SJF scheduling chart is:

    | P4 | P1 | P3 | P2 |
    0    3    9    16   24

■ Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms

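A small non-preemptive SJF sketch (illustrative, assuming all processes are
available at time 0) that reproduces the 7 ms average above:

# Non-preemptive SJF with simultaneous arrival: sort by burst length,
# then each waiting time is just the running prefix sum of earlier bursts.
def sjf_waiting_times(bursts):
    order = sorted(bursts, key=bursts.get)        # shortest burst first
    waits, elapsed = {}, 0
    for p in order:
        waits[p] = elapsed
        elapsed += bursts[p]
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits, "average =", sum(waits.values()) / len(waits))
# {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}  average = 7.0 ms
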
2. Example of Shortest-Remaining-Time-First

■ Now we add the concepts of varying arrival times and preemption to the analysis

    Process   Arrival Time   Burst Time
    P1        0              8
    P2        1              4
    P3        2              9
    P4        3              5

■ Preemptive SJF (SRTF) Gantt chart:

    | P1 | P2 | P4 | P1 | P3 |
    0    1    5    10   17   26

■ Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 ms

■ Waiting time = start time of the last burst - arrival time - number of ms
  already executed before that burst
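
A small preemptive SJF (SRTF) sketch that reproduces the 6.5 ms average above,
simulated one millisecond at a time (an illustrative simulation, not the
textbook's code):

# SRTF: at every millisecond, run the ready process with the least remaining time.
def srtf(procs):                      # procs: {name: (arrival, burst)}
    remaining = {p: b for p, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:                 # CPU idle until the next arrival
            t += 1
            continue
        p = min(ready, key=lambda x: remaining[x])   # shortest remaining time
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]
    # waiting time = turnaround time - burst time
    return {p: finish[p] - a - b for p, (a, b) in procs.items()}

waits = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits, "average =", sum(waits.values()) / len(waits))
# {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}  average = 6.5 ms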

3. Example of Non-Preemptive SJF

    Process   Arrival Time   Burst Time
    P1        0.0            7
    P2        2.0            4
    P3        4.0            1
    P4        5.0            4

■ SJF (non-preemptive) Gantt chart:

    | P1 | P3 | P2 | P4 |
    0    7    8    12   16

■ Average waiting time = (0 + (8-2) + (7-4) + (12-5))/4 = 16/4 = 4 ms
4. Example of Preemptive SJF

    Process   Arrival Time   Burst Time
    P1        0.0            7
    P2        2.0            4
    P3        4.0            1
    P4        5.0            4

■ SJF (preemptive) Gantt chart:

    | P1 | P2 | P3 | P2 | P4 | P1 |
    0    2    4    5    7    11   16

■ Average waiting time = ((11-2) + (5-2-2) + (4-4) + (7-5))/4 = 12/4 = 3 ms
Shortest-Job-First (SJF) Scheduling

■ SJF is optimal – it gives the minimum average waiting time for a given set
  of processes
  • The difficulty is knowing the length of the next CPU request

■ How do we determine the length of the next CPU burst?
  • Could ask the user
  • Estimate it – in practice, as an exponential average of the measured
    lengths of previous CPU bursts

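The usual estimator is an exponential average of the measured lengths of
previous bursts:

    tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n),   with 0 <= alpha <= 1

where t(n) is the length of the most recent burst and tau(n) is the previous
estimate. A small illustrative sketch (the initial guess, alpha = 1/2, and the
burst history below are example values, not data from these slides):

# Exponential-average predictor for the next CPU burst length.
def next_burst_estimate(measured_bursts, tau0=10.0, alpha=0.5):
    tau = tau0                      # initial guess for the very first burst
    for t in measured_bursts:       # t = actual length of each completed burst
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(next_burst_estimate([6, 4, 6, 4, 13, 13, 13]))
# 12.0 -- the estimate drifts toward the recent run of 13 ms bursts
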
Round Robin (RR) Algorithm

■ Especially designed for time-sharing systems. It is similar to FCFS scheduling,
  but preemption is added to enable the system to switch between processes.

■ Each process gets a small unit of CPU time (a time quantum q), usually 10-100
  milliseconds. After this time has elapsed, the process is preempted and added
  to the end of the ready queue.

■ The ready queue is treated as a circular queue. The CPU scheduler goes around
  the ready queue, allocating the CPU to each process for a time interval of up
  to 1 time quantum.

■ A timer interrupts every quantum to schedule the next process.

■ If a process finishes before its quantum expires, the next process is loaded
  immediately; otherwise the running process is context-switched out and waits
  for its next turn.

1. Example of RR with Time Quantum = 4

    Process   Burst Time
    P1        24
    P2        3
    P3        3

■ The Gantt chart is:

    | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
    0    4    7    10   14   18   22   26   30

■ Waiting time:                        Turnaround time:
    P1 = 10 - 0 - 4 = 6                  P1 = completion time - arrival = 30
    P2 = 4                               P2 = 7
    P3 = 7                               P3 = 10
    Average = 17/3 ≈ 5.67 ms             Average = 47/3 ≈ 15.67 ms
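
A minimal round-robin sketch that reproduces these numbers (illustrative; all
processes are assumed to arrive at time 0):

# Round robin with a fixed quantum; unfinished processes rejoin the tail.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(bursts.items())          # (name, remaining burst)
    t, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((name, remaining - run))   # back to the tail
        else:
            finish[name] = t
    turnaround = {p: finish[p] for p in bursts}     # arrival time is 0
    waiting = {p: turnaround[p] - bursts[p] for p in bursts}
    return waiting, turnaround

w, ta = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(w, ta)   # {'P1': 6, 'P2': 4, 'P3': 7} {'P1': 30, 'P2': 7, 'P3': 10}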

Time Quantum and Context Switch Time

■ Performance of RR depends on the size of the time quantum:
  • Quantum too big: behaves like FCFS
  • Quantum too small: lots of context switches (overhead)

■ For example, for a single process with a 10-time-unit burst, a quantum of 12
  causes 0 context switches, a quantum of 6 causes 1, and a quantum of 1
  causes 9.

Round Robin (RR) Algorithm

■ Typically, RR has a higher average turnaround time than SJF, but better
  response time

■ q (the time quantum) should be large compared to the context-switch time:
  • q is usually 10 milliseconds to 100 milliseconds
  • a context switch usually takes < 10 microseconds

Priority Scheduling

■ A priority number (an integer) is associated with each process, and the CPU
  is allocated to the process with the highest priority (here, smallest
  integer = highest priority)

■ SJF is priority scheduling where the priority is the inverse of the predicted
  next CPU burst time: the longer the CPU burst, the lower the priority, and
  vice versa.

■ Types:
  • Preemptive: preempts the CPU if the priority of the newly arrived process
    is higher than the priority of the currently running process
  • Non-preemptive: simply puts the new process at the head of the ready queue.

Starvation & Aging

■ Problem: Starvation (indefinite blocking) – low-priority processes may never
  execute

■ Solution: Aging – as time passes, increase the priority of processes that
  have been waiting in the system for a long time.

1. Example of Priority Scheduling (Non-preemptive)

    Process   Burst Time   Priority
    P1        10           3
    P2        1            1
    P3        2            4
    P4        1            5
    P5        5            2

■ Priority scheduling Gantt chart (all processes arrive at time 0):

    | P2 | P5 | P1 | P3 | P4 |
    0    1    6    16   18   19

■ Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms

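A minimal non-preemptive priority sketch that reproduces the 8.2 ms average of
the example above (illustrative; a smaller number means a higher priority, and
all processes are assumed to arrive at time 0):

# Non-preemptive priority scheduling with simultaneous arrival:
# run in ascending priority-number order and accumulate waiting times.
def priority_schedule(procs):          # procs: {name: (burst, priority)}
    order = sorted(procs, key=lambda p: procs[p][1])
    waits, elapsed = {}, 0
    for p in order:
        waits[p] = elapsed
        elapsed += procs[p][0]
    return waits

waits = priority_schedule({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                           "P4": (1, 5), "P5": (5, 2)})
print(waits, "average =", sum(waits.values()) / len(waits))
# {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}  average = 8.2 ms
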
2. Priority Scheduling w/ Round-Robin

    Process   Burst Time   Priority
    P1        4            3
    P2        5            2
    P3        8            2
    P4        7            1
    P5        3            3

■ Run the process with the highest priority. Processes with the same priority
  run round-robin.

■ Gantt chart with time quantum = 2:

    | P4     | P2 | P3 | P2 | P3 | P2 | P3     | P1 | P5 | P1 | P5 |
    0        7    9    11   13   15   16       20   22   24   26   27

Part 2 Outline

■ Multilevel Queue Scheduling
■ Multilevel Feedback Queue Scheduling
■ Thread Scheduling
■ Algorithm Evaluation

>> Multilevel Queue Scheduling

■ Another class of scheduling algorithms has been created for situations in
  which processes are easily classified into different groups.

■ A multilevel queue scheduling algorithm partitions the ready queue into
  several separate queues

■ Each queue has its own scheduling algorithm, e.g.:
  • foreground (interactive) queue – RR
  • background (batch) queue – FCFS

Multilevel Queue

■ Prioritization is based upon process type

■ Scheduling must also be done between the queues:
  • Fixed-priority preemptive scheduling (i.e., serve all from foreground, then
    from background). Possibility of starvation.
  • OR time slicing – each queue gets a certain amount of CPU time which it can
    schedule amongst its processes, e.g.:
    - 80% to foreground in RR
    - 20% to background in FCFS

Multilevel Queue

■ With priority scheduling, we have a separate queue for each priority.
■ Every queue has full priority over lower-level queues (guaranteed with
  preemptive scheduling).

>> Multilevel Feedback Queue

■ A process can move between the various queues.

■ The idea is to separate processes according to the characteristics of their
  CPU bursts:
  • If a process uses too much CPU time, it will be moved to a lower-priority
    queue. This scheme leaves I/O-bound and interactive processes in the
    higher-priority queues.

■ Aging can be implemented using a multilevel feedback queue:
  • A process that waits too long in a lower-priority queue may be moved to a
    higher-priority queue. This form of aging prevents starvation.

Multilevel Feedback Queue

■ A multilevel-feedback-queue scheduler is defined by the following parameters:
  • Number of queues
  • Scheduling algorithm for each queue
  • Method used to determine when to upgrade a process
  • Method used to determine when to demote (downgrade) a process
  • Method used to determine which queue a process will enter when it needs
    service

Example of Multilevel Feedback Queue

■ Three queues:
  • Q0 – RR with time quantum 8 milliseconds
  • Q1 – RR with time quantum 16 milliseconds
  • Q2 – FCFS

■ Scheduling:
  • A new process enters queue Q0, which is served in RR
    - When it gains the CPU, the process receives 8 milliseconds
    - If it does not finish in 8 milliseconds, it is moved to the tail of queue Q1
  • At Q1 the job is again served in RR and receives 16 additional milliseconds
    - If it still does not complete, it is preempted and moved to queue Q2
  • A queue gets CPU time only if all higher-priority queues are empty (Q1 runs
    only if Q0 is empty, and so on)

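The three-queue scheme above can be sketched in a few lines. An illustrative
simulation only: all jobs are assumed to arrive at time 0, no aging is
modeled, and the burst lengths in the demo call are made up for demonstration:

# Q0 = RR q=8, Q1 = RR q=16, Q2 = FCFS; a lower queue runs only when the
# higher ones are empty; new work always enters Q0.
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    queues = [deque(bursts.items()), deque(), deque()]   # Q0, Q1, Q2
    t, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty
        name, remaining = queues[level].popleft()
        run = remaining if level == 2 else min(quanta[level], remaining)
        t += run
        remaining -= run
        if remaining == 0:
            finish[name] = t
        else:
            queues[level + 1].append((name, remaining))      # demote
    return finish

print(mlfq({"A": 5, "B": 30, "C": 12}))
# A finishes in Q0; C needs Q1; B uses Q0 and Q1, then completes in Q2 (FCFS).
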
Thread Scheduling

■ When threads are supported, it is threads that are scheduled, not processes
■ Distinction between user-level and kernel-level threads:
  1. For systems using the many-to-one and many-to-many models:
     • The thread library schedules user-level threads to run on an available
       lightweight process (LWP)
     • This scheme is known as process-contention scope (PCS), since competition
       for the CPU takes place among threads belonging to the same process
     • PCS is typically done according to a priority set by the programmer
     • To decide which kernel-level thread to schedule onto a CPU, the kernel
       uses system-contention scope (SCS) – competition for the CPU takes place
       among all threads in the system
  2. Systems using the one-to-one model, such as Windows, Linux, and Solaris,
     schedule threads using only system-contention scope (SCS)

Algorithm Evaluation

■ How do we select a CPU-scheduling algorithm for an OS?
  • Determine the criteria, then evaluate the candidate algorithms

■ Deterministic modeling:
  • A type of analytic evaluation
  • Takes a particular predetermined workload and defines the performance of
    each algorithm for that workload

■ Example: consider 5 processes, all arriving at time 0 (burst times in ms):

    Process   Burst Time
    P1        10
    P2        29
    P3        3
    P4        7
    P5        12

Deterministic Evaluation

■ For each algorithm, calculate the average waiting time and pick the minimum
■ Simple and fast, but it requires exact numbers for input and its conclusions
  apply only to those inputs
  • FCFS gives an average waiting time of 28 ms
  • Non-preemptive SJF gives 13 ms
  • RR gives 23 ms
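
These three averages can be checked with a short script. The sketch below
assumes the workload listed on the previous slide and an RR quantum of 10 ms
(the quantum used in the textbook's version of this example); it is an
illustration, not the textbook's code:

# Compare FCFS, non-preemptive SJF, and RR on one fixed workload.
from collections import deque

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait_in_order(order):
    # FCFS and non-preemptive SJF differ only in the order of service.
    waits, elapsed = [], 0
    for p in order:
        waits.append(elapsed)
        elapsed += bursts[p]
    return sum(waits) / len(waits)

def avg_wait_rr(quantum):
    queue, t, wait = deque(bursts.items()), 0, {}
    last_ready = {p: 0 for p in bursts}     # when each process last became ready
    while queue:
        p, rem = queue.popleft()
        wait[p] = wait.get(p, 0) + (t - last_ready[p])
        run = min(quantum, rem)
        t += run
        if rem > run:
            last_ready[p] = t
            queue.append((p, rem - run))
    return sum(wait.values()) / len(wait)

print(avg_wait_in_order(["P1", "P2", "P3", "P4", "P5"]))   # FCFS: 28.0 ms
print(avg_wait_in_order(sorted(bursts, key=bursts.get)))   # SJF:  13.0 ms
print(avg_wait_rr(10))                                     # RR:   23.0 ms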

Thanks

