Process & CPU Scheduling

Processes are represented in memory by a process control block (PCB) that contains status information. The CPU scheduler selects processes from the ready queue for execution according to a scheduling policy. Common policies include first-come, first-served (FIFO), round robin, and shortest job first. Multilevel feedback queues place processes in multiple priority queues and adjust their priority as they run, balancing fair sharing and response time. The goal is to maximize CPU utilization and throughput while minimizing wait time, response time, and turnaround time.


PROCESS & CPU SCHEDULING

Diagram of Process State (new, ready, running, waiting, and terminated, with the transitions between them)

Process Control Block (PCB)


Information associated with each process:
Process state
Program counter
CPU registers
CPU-scheduling information
Memory-management information
Accounting information
I/O status information
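
As a rough illustration, a PCB can be modeled as a plain record with one field per item above. This is a hypothetical Python sketch; the field names are illustrative and not taken from any particular operating system.

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"                                # process state: new / ready / running / waiting / terminated
    program_counter: int = 0                          # address of the next instruction to execute
    registers: dict = field(default_factory=dict)     # saved CPU register contents
    priority: int = 0                                 # CPU-scheduling information
    page_table: object = None                         # memory-management information
    cpu_time_used: float = 0.0                        # accounting information
    open_files: list = field(default_factory=list)    # I/O status information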

Representation of Process Scheduling

CONTEXT SWITCH
Save the state of the old process.
Load the saved state of the new process.
Context-switch time is overhead; the system does no useful work while switching.
The switch time depends on hardware support.
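
A minimal sketch of those two steps, reusing the hypothetical PCB record above. The CPU object and its fields are illustrative stand-ins, not a real kernel interface.

class CPU:
    # Hypothetical CPU model: just a register file and a program counter.
    def __init__(self):
        self.registers = {}
        self.program_counter = 0

def context_switch(cpu, old, new):
    # Save the state of the old process into its PCB.
    old.registers = dict(cpu.registers)
    old.program_counter = cpu.program_counter
    old.state = "ready"
    # Load the saved state of the new process from its PCB.
    cpu.registers = dict(new.registers)
    cpu.program_counter = new.program_counter
    new.state = "running"
    # Everything done inside this function is pure overhead.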

CPU Scheduler

A CPU scheduler, running in the dispatcher, is responsible for selecting the next process to run, based on a particular strategy.

When does CPU scheduling happen?

A process switches from the running state to the waiting state (e.g., an I/O request).
A process switches from the running state to the ready state.
A process switches from the waiting state to the ready state (e.g., completion of an I/O operation).
A process terminates.

Scheduling queues

CPU schedulers use several queues in the scheduling process:

Job queue
All jobs (processes), once submitted, are in the job queue.
Some processes cannot be executed yet (e.g., they are not in memory).

Ready queue
All processes that are ready and waiting for execution are in the ready queue.
Usually, a long-term scheduler (job scheduler) moves processes from the job queue to the ready queue.
The CPU scheduler (short-term scheduler) selects a process from the ready queue for execution.
Simple systems may not have a long-term scheduler.

Scheduling queues

Device queue

When a process is blocked on an I/O operation, it is usually put in a device queue (waiting for the device).
When the I/O operation completes, the process is moved from the device queue to the ready queue.

Performance metrics for CPU scheduling

CPU utilization: the percentage of time that the CPU is busy.
Throughput: the number of processes completed per unit time.
Turnaround time: the interval from the submission of a process to its completion.
Wait time: the sum of the periods spent waiting in the ready queue.
Response time: the time from submission until the first response is produced.
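
For example, suppose a process is submitted at time 0, first gets the CPU at time 2, spends a total of 5 time units in the ready queue, and completes at time 10 without blocking for I/O: its response time is 2, its wait time is 5, and its turnaround time is 10.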

Goal of CPU scheduling


Other performance metrics:
Fairness: important, but harder to define quantitatively.

Goal:
Maximize CPU utilization, throughput, and fairness.
Minimize turnaround time, wait time, and response time.

Which metric is used more?

CPU utilization: trivial in a single-CPU system.
Throughput, turnaround time, wait time, and response time: can usually be computed for a given scenario.

Deterministic modeling example

Suppose we have processes A, B, and C, all submitted at time 0.
We want to know the response time, wait time, and turnaround time of process A; its response time is 0, since it runs first.

Gantt chart (visualizes how the processes execute over time):
A B C A B C A C A C

Deterministic modeling example

Suppose we have processes A, B, and C, all submitted at time 0.
We want to know the response time, wait time, and turnaround time of process B.

A B C A B C A C A C

Deterministic modeling example

Suppose we have processes A, B, and C, all submitted at time 0.
We want to know the response time, wait time, and turnaround time of process C.

A B C A B C A C A C
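
A small sketch that computes these three metrics directly from a Gantt chart, assuming every Gantt slot is one time unit long and all processes are submitted at time 0 (both assumptions, matching the examples above):

def metrics_from_gantt(gantt):
    # gantt: list of process names, one entry per one-unit time slot.
    first_run, last_run, run_count = {}, {}, {}
    for t, p in enumerate(gantt):
        first_run.setdefault(p, t)
        last_run[p] = t
        run_count[p] = run_count.get(p, 0) + 1
    result = {}
    for p in first_run:
        turnaround = last_run[p] + 1        # submitted at 0, done after its last slot
        wait = turnaround - run_count[p]    # time in the ready queue, not running
        response = first_run[p]             # delay until the first slot on the CPU
        result[p] = {"response": response, "wait": wait, "turnaround": turnaround}
    return result

# The schedule from the slides above:
print(metrics_from_gantt(list("ABCABCACAC")))

Under those assumptions this reports, for example, a response time of 0 and a turnaround time of 9 for process A.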

Preemptive versus nonpreemptive scheduling

Many CPU scheduling algorithms have both preemptive and nonpreemptive versions:

Preemptive: schedule a new process even when the current process does not intend to give up the CPU.
Non-preemptive: only schedule a new process when the current one does not want the CPU any more.

When do we perform non-preemptive scheduling?

A process switches from the running state to the waiting state (e.g., an I/O request).
A process switches from the running state to the ready state.
A process switches from the waiting state to the ready state (e.g., completion of an I/O operation).
A process terminates.
Of these four circumstances, a purely non-preemptive scheduler makes a decision only in the first and the last; the middle two involve preemption.

Scheduling Policies

FIFO (first in, first out)
Round robin
Round robin
SJF (shortest job first)
Priority Scheduling
Multilevel feedback queues
Many more...

FIFO

FIFO: assigns the CPU based on the order of requests.
Non-preemptive: a process keeps running on a CPU until it is blocked or terminates.
Also known as FCFS (first come, first served).
+ Simple
- Short jobs can get stuck behind long jobs
- Turnaround time is not ideal
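
A minimal FIFO/FCFS sketch, assuming all jobs arrive at time 0 and their burst times are given; the function name and the example jobs are illustrative only.

def fifo(bursts):
    # bursts: list of (name, burst_time) pairs in arrival order.
    schedule, metrics, clock = [], {}, 0
    for name, burst in bursts:
        start = clock                      # the job waits until all earlier jobs finish
        clock += burst
        schedule.append((name, start, clock))
        metrics[name] = {"wait": start, "turnaround": clock}
    return schedule, metrics

# A long job submitted ahead of two short ones: B and C get stuck behind A.
print(fifo([("A", 8), ("B", 1), ("C", 1)]))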

Round Robin

Round Robin (RR) periodically releases the CPU from long-running jobs.
Based on timer interrupts, so short jobs can get a fair share of CPU time.
Preemptive: a process can be forced to leave the running state and be replaced by another process.
Time slice: the interval between timer interrupts.

More on Round Robin

If time slice is too long

Scheduling degrades to FIFO

If time slice is too short

Throughput suffers
Context switching cost dominates
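
A rough round-robin simulation in the same spirit as the FIFO sketch above. The quantum parameter and the burst lengths are assumptions; with a time slice of 1 and these bursts, the simulation happens to reproduce the A B C A B C A C A C schedule from the earlier examples.

from collections import deque

def round_robin(bursts, quantum):
    # bursts: list of (name, burst_time) pairs, all arriving at time 0.
    queue = deque(bursts)
    gantt, clock, finish = [], 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        gantt.append((name, clock, clock + run))
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted: back of the ready queue
        else:
            finish[name] = clock                    # completion time (= turnaround, arrival at 0)
    return gantt, finish

print(round_robin([("A", 4), ("B", 2), ("C", 4)], quantum=1))

Making the quantum very large degrades this to FIFO, and making it very small lets the (here ignored) context-switch cost dominate.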

FIFO vs. Round Robin

With zero-cost context switches, is RR always better than FIFO?

FIFO vs. Round Robin

Suppose we have three jobs of equal length, all submitted at time 0.

Round Robin: A B C A B C A B C
FIFO: A A A B B B C C C

FIFO lets A and B finish much earlier, while RR pushes every job's completion toward the end, so the average turnaround time is worse under RR.

FIFO vs. Round Robin

Round Robin
+ Shorter response time
+ Fair sharing of the CPU
- Not all jobs are preemptable
- Not good for jobs of the same length; more precisely, not good in terms of turnaround time

Shortest Job First (SJF)

SJF runs whatever job puts the least demand on the CPU; it is also known as STCF (shortest time to completion first).
+ Provably optimal in terms of turnaround time
+ Great for short jobs
+ Small degradation for long jobs
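
A minimal non-preemptive SJF sketch, assuming all jobs arrive at time 0 and that burst times are known in advance (which, as noted below, they usually are not).

def sjf(bursts):
    # bursts: list of (name, burst_time) pairs; the shortest burst runs first.
    order = sorted(bursts, key=lambda job: job[1])
    clock, metrics = 0, {}
    for name, burst in order:
        metrics[name] = {"wait": clock, "turnaround": clock + burst}
        clock += burst
    return order, metrics

# Same jobs as the FIFO example: the short jobs B and C now run first,
# and the long job A is only slightly delayed.
print(sjf([("A", 8), ("B", 1), ("C", 1)]))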

SJF Illustrated

(Gantt chart for Shortest Job First, annotated with the turnaround, wait, and response times of A, B, and C. A runs first, so its wait time and response time are 0.)

Drawbacks of Shortest Job First

- Starvation: constant arrivals of short jobs can keep long ones from running
- There is no way to know the completion time of jobs (most of the time)

Some solutions:
Ask the user, who may not know any better.
If a user cheats, the job is killed.

Priority Scheduling (Multilevel Queues)

Priority scheduling: the process with the highest priority runs first.
Assume that low numbers represent high priority.

Priority 0: C
Priority 1: A
Priority 2: B

(Gantt chart: C, the highest-priority process, runs first.)
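
A small sketch of priority scheduling under the same convention (low number = high priority); the heap-based helper below is only an illustration, with ties broken by submission order.

import heapq

def priority_schedule(jobs):
    # jobs: list of (priority, name) pairs; the lowest priority number runs first.
    heap = [(prio, i, name) for i, (prio, name) in enumerate(jobs)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

# The example above: C has priority 0, A priority 1, B priority 2.
print(priority_schedule([(1, "A"), (2, "B"), (0, "C")]))   # ['C', 'A', 'B']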

Multilevel Feedback Queues

Multilevel feedback queues use multiple queues with different priorities:

Round robin at each priority level.
Run the highest-priority jobs first.
Once those finish, run the next highest priority, and so on.
Jobs start in the highest-priority queue.
If a job's time slice expires, drop the job down one level.
If the time slice does not expire, push the job up one level.
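
A rough sketch of these rules. The three levels and the 1/2/4 time slices mirror the walkthrough below; the class and method names are illustrative only.

from collections import deque

TIME_SLICES = [1, 2, 4]          # Priority 0 is highest and has the shortest slice

class MLFQ:
    def __init__(self):
        self.queues = [deque() for _ in TIME_SLICES]

    def add(self, job, level=0):
        # New jobs start in the highest-priority queue.
        self.queues[level].append(job)

    def pick_next(self):
        # Scan levels from highest priority to lowest; round robin within a level.
        for level, queue in enumerate(self.queues):
            if queue:
                return level, queue.popleft()
        return None, None

    def slice_expired(self, job, level):
        # The job used its whole time slice: drop it down one level.
        self.add(job, min(level + 1, len(self.queues) - 1))

    def blocked_early(self, job, level):
        # The job gave up the CPU before its slice expired (e.g. I/O):
        # push it up one level when it becomes ready again.
        self.add(job, max(level - 1, 0))

mlfq = MLFQ()
for job in ("A", "B", "C"):
    mlfq.add(job)                # A, B, and C all start at Priority 0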

Multilevel Feedback Queues: walkthrough

Three queues are used throughout: Priority 0 (time slice = 1), Priority 1 (time slice = 2), and Priority 2 (time slice = 4). The original slides animate the queue contents and the Gantt chart step by step; the states they show are:

time = 0: processes A, B, and C are submitted.
time = 1: the Priority 1 queue holds A.
time = 2: the Priority 1 queue holds A; the Gantt chart so far reads A B.
time = 3: the Priority 1 queue holds A; the Gantt chart so far reads A B C.
time = 3: suppose process A is blocked on an I/O; the next frame shows A queued at Priority 0 and B at Priority 1.
time = 5: suppose process A returns from its I/O; the Priority 0 queue holds A (it was pushed up one level).
time = 6: the queues shown are empty; the Gantt chart still reads A B C.
time = 8: the Priority 2 queue holds C.
time = 9: all queues are empty.
