06. Scheduled

The document discusses processes and threads, focusing on scheduling algorithms, inter-process communication (IPC), and CPU utilization. It covers various scheduling strategies such as First-Come First Served (FCFS), Shortest Job First (SJF), and Round-Robin (RR), along with their characteristics and performance metrics. Additionally, it addresses classical IPC problems and the importance of scheduling decisions in operating systems.


Processes & Threads

Scheduling

Kiều Trọng Khánh


Review
• Process Model
– Pseudo-parallelism (Multi-programming, quantum or time slice)
– Context Switch (user mode ↔ kernel mode, switch CPU to another
process – load/store PCB)
– Scheduling algorithm
• PCB
– Id, registers, scheduling information, memory management
information, accounting information, I/O status information, …
– State (New, Running, Ready, Blocked, Terminated)
• CPU Utilization
– Measures the fraction of time the CPU is kept busy
– With n processes, each waiting for I/O a fraction p of the time:
CPU utilization = 1 – p^n
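The utilization formula above can be checked numerically. A minimal sketch (Python; the I/O-wait fraction 0.8 is a hypothetical value for illustration):

```python
# CPU utilization under multiprogramming: with n processes, each waiting
# for I/O a fraction p of the time, the CPU idles only when all n wait
# at once (probability p**n, assuming independent processes).

def cpu_utilization(p: float, n: int) -> float:
    """Fraction of time the CPU is busy."""
    return 1 - p ** n

# With 80% I/O wait, one process keeps the CPU only 20% busy,
# but five such processes already push utilization above 67%.
one = cpu_utilization(0.8, 1)
five = cpu_utilization(0.8, 5)
```

This is why multiprogramming pays off: utilization approaches 100% as the degree of multiprogramming n grows, even for heavily I/O-bound workloads.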
Review
• Threads
– Share the same address space and resources of the
process
– Each thread has its own PC, registers and stack of
execution
– There is no protection between threads in one process
– Each thread has its own stack
→ Cheaper context switches than between processes, better use
of the quantum
– Are implemented in 3 modes: user, kernel, hybrid
Review
• IPC
– Resolving
• Race condition (Critical Region, Mutual Exclusion)
• Busy waiting (Priority Inversion, Waste CPU time)
• Lack of atomicity (operations not indivisible)
– Lock Variable, Strict Alternation
– Good solution for race condition
• Software: Peterson solution (2 control variables)
• Hardware: TSL (atomically, individual)
– Good solution for race condition and busy waiting
• Sleep & Wakeup
• Software: binary semaphore (care needed about the order of
down operations), monitors
• Hardware: mutexes
• Scheduling
Objectives
– Introduction
– Process Behavior
– When to schedule
– Categories of Scheduling algorithms
– Criteria/ Properties Term
– Scheduling in Batch System
• FCFS
• SJF
• SRT
– Scheduling in Interactive System
– Scheduling in Real-time System
– Thread scheduling
Objectives
• Classical IPC Problems
– The Dining Philosophers Problem
– The Readers and Writers Problem
Scheduling
Introduction
• Scheduler
– OS component that decides what process will be run and for
how long
– Uses a scheduling algorithm
• On OSs that support kernel-level threads, it is threads that
are being scheduled
• History
– In the old days of batch systems with input in the form of card
images on a magnetic tape: just run the next job on the tape
– In early computers, CPU time was a scarce resource: good
scheduling was of paramount importance!
– Nowadays, the CPU is not a scarce resource any more!
Furthermore, in PCs there aren’t many users competing…
However, scheduling algorithms have become more
sophisticated!
Scheduling
Process Behavior
• All process execution consists of a cycle of bursts of
computing (CPU execution) and I/O request (I/O wait)
• Compute-bound processes
– Spend most of their time computing
– Have long CPU bursts and thus infrequent I/O waits
• I/O-bound processes
– Spend most of their time waiting for I/O
– Have short CPU bursts and thus frequent I/O waits

Tanenbaum, Fig. 2-38.


Scheduling
Process Behavior
• Example
load, store, add, store, read from file → CPU burst
wait for I/O → I/O burst
store, increment index, write to file → CPU burst
wait for I/O → I/O burst
…
– Process execution begins with a CPU burst, followed by an I/O
burst, alternating; the last CPU burst ends with a system request
to terminate execution
Scheduling
When to schedule
• A key issue related to scheduling is when to make
scheduling decisions
• Process creation
– A decision needs to be made whether to run the parent or the
child process
• Process termination
– A decision must be made when a process exits. That process can
no longer run, so some other process must be chosen from the set
of ready processes. If no process is ready, a system-supplied idle
process is normally run
• Process blocking
– When a process blocks, another process has to be selected to run
Scheduling
When to schedule
• Interrupt occurrence
– If the interrupt came from an I/O device that has now completed
its work, some process that was blocked waiting for the I/O may
now be ready to run
• Clock interrupt occurrence
– non-preemptive scheduling algorithms
– preemptive scheduling algorithms
• CPU scheduling decisions may take place when a process:
– Switches from running to blocked state (I/O or wait for child
processes).
– Switches from running to ready state (interrupted).
– Switches from blocked to ready (completion of I/O).
– Terminates.
→ If scheduling takes place only when a process blocks or
terminates (the first and last cases above), the scheduling is
non-preemptive; otherwise, it is preemptive
Scheduling
When to schedule
• non-preemptive scheduling algorithms
– Picks a process to run and then just lets it run until it blocks or
until it voluntarily releases the CPU (it will not be forcibly
suspended; no scheduling decisions at clock interrupts)
– Once a process is in the running state, it will continue until it
terminates or blocks itself for I/O
– Applied in batch systems
• preemptive scheduling algorithms
– The process can run (continuously) for a maximum of some fixed
time. If it is still running at the end of this time, it is suspended
and the scheduler will pick another process to run (needs timer)
– Currently running process may be interrupted and moved to the
Ready state by the operating system
– Allows for better service since any one process cannot
monopolize the processor for very long
– Applied in time-sharing or real-time systems
Scheduling
Categories of Scheduling Algorithms
• Batch
– Non-preemptive algorithms
– Preemptive algorithms with long time periods for each process
– Reduces process switches and increases performance
• Interactive
– Preemptive algorithms are needed to keep one process from
hogging the CPU and denying service to the others. Even if no
process intentionally ran forever, one process might shut out all
the others indefinitely due to a program bug
• Real-Time
– Preemption normally used, but sometimes not needed because the
processes know that they may not run for long periods of time and
usually do their work and block quickly
Scheduling
Criteria/ Properties Term
• Fairness – equivalent processes get equivalent CPU times
• Policy enforcement – if the local policy is that safety
control processes get to run whenever they want to, even if
it means the payroll is 30 sec late, the scheduler has to make
sure this policy is enforced
• Policy vs. Mechanism
– The policy specifies what is to be done
– The mechanism specifies how it is to be done
– The mechanism is a thing that implements the policy
– Ex:
• The timer construct for ensuring CPU protection (mechanism)
• The decision of how long the timer is set for a particular user (policy)
Scheduling
Criteria/ Properties Term
• Throughput
– The number of processes that complete their
execution per time unit.
– Ex: for long processes, the rate may be one process per
hour; for short processes, it may be 10 processes per
second.
• Turnaround time
– Amount of time to execute a particular process.
– Is the sum of the periods spent waiting in the ready
queue, executing on the CPU, doing I/O, etc…
– Is the time from when the process is submitted until it is
completed (completion time – arrival time)
Scheduling
Criteria/ Properties Term
• CPU utilization
– The fraction of time the CPU is kept busy.
– Can range from 0 to 100 percent.
– It should range from 40 percent (lightly loaded system) to 90
percent (heavily used system).
• Response time
– In an interactive system, turnaround time may not be the best
criterion.
– Often, a process can produce some output fairly early and can
continue computing new results.
– This measure is the amount of time it takes from when a
request was submitted until the first response is produced.
• Proportionality
– When a request that is perceived as complex takes a long time,
users accept that, but when a request that is perceived as simple
takes a long time, users get irritated
Scheduling
Categories of Scheduling Algorithms
• All systems
– Fairness – giving each process a fair share of CPU
– Policy enforcement – seeing that stated policy is carried out
– Balance – keeping all parts of the system busy
• Batch systems
– Throughput – maximize jobs per hour
– Turnaround time – minimize time between submission and
termination
– CPU utilization – keep the CPU busy all the time
• Interactive systems
– Response time – respond (react) to request quickly
– Proportionality – meet, if possible, user’s expectations
• Real-time systems
– Meeting deadlines – avoid losing data
– Predictability – avoid quality degradation in multimedia
systems
Scheduling in Batch System
First-Come First Served (FCFS)
• The simplest CPU scheduling algorithm!
• Is non-preemptive
• The process that entered the ready state first, will get
the CPU first and will hold it until it is blocked (or
finished)
• Simple to understand and implement
• It requires a single queue of ready processes:
– If a process enters the ready state, it is linked onto the tail of
the ready queue.
– If the CPU is free, it takes the process at the head.
→ Scheduled Event: Terminated State
Scheduling in Batch System
First-Come First Served (FCFS)

[Animation over several slides: each new process P is linked onto
the tail of the ready queue; when the CPU is free, it takes the
process at the head]

Scheduling in Batch System
First-Come First Served (FCFS)

• Waiting time of each process = start time – arrival time


• Ex:
– (Process:BurstTime) in order (P1:24), (P2:3), (P3:3)
– Waiting time for P1 = 0; P2 = 24; P3 = 27
– Average waiting time: (0 + 24 + 27)/3 = 17
– Average Turnaround time: (24 + 27 + 30)/3 = 27
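The FCFS example above can be reproduced with a short simulation. A minimal sketch, assuming all processes arrive at time 0 (process data taken from the slide):

```python
from collections import namedtuple

Proc = namedtuple("Proc", "name burst")

def fcfs(procs):
    """Run processes in arrival order; return {name: (waiting, turnaround)}."""
    clock, stats = 0, {}
    for p in procs:
        # waiting = start time - arrival time (arrival is 0 here)
        stats[p.name] = (clock, clock + p.burst)
        clock += p.burst
    return stats

# Slide example: (P1:24), (P2:3), (P3:3) in arrival order
stats = fcfs([Proc("P1", 24), Proc("P2", 3), Proc("P3", 3)])
avg_wait = sum(w for w, _ in stats.values()) / len(stats)  # 17.0
avg_turn = sum(t for _, t in stats.values()) / len(stats)  # 27.0
```

Swapping the arrival order so the long job goes last (the convoy-effect example later in this section) drops the average waiting time from 17 to 3.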
Scheduling in Batch System
Example

Avg waiting time = 4.6
Avg turnaround time = 8.6

[Gantt chart: processes A–E scheduled FCFS over time 0–20]
Scheduling in Batch System
First-Come First Served (FCFS)
• Consider FCFS scheduling in a dynamic situation where we have
one CPU-bound process and many I/O-bound processes.
– The CPU-bound process will get and hold the CPU.
– All the other processes will finish their I/O and will move into the ready
queue, waiting for the CPU.
– Eventually, the CPU-bound process moves to an I/O device.
– All the I/O-bound processes execute quickly and move back to the I/O queues.
– Again, the CPU-bound process will then move back and hold the CPU, and
all the I/O processes have to wait in the ready queue.
• The above situation is called a convoy effect.
– All the other processes wait for the one big process to get off the CPU.
– This results in lower CPU and device utilization.
• Ex:
– (Process:BurstTime) in order (P2:3), (P3:3), (P1:24)
– Waiting time for P1 = 6; P2 = 0; P3 = 3
– Average waiting time: (6 + 0 + 3)/3 = 3
– Average turnaround time: (3 + 6 + 30)/3 = 13
Scheduling in Batch System
First-Come First Served (FCFS)

Order:                (P1:24), (P2:3), (P3:3)   (P2:3), (P3:3), (P1:24)
Avg Waiting Time                17                        3   (~6x better)
Avg Turnaround Time             27                       13   (~2x better)
Scheduling in Batch System
Shortest Job First (SJF)
• Runtime is known in advance (non-preemptive)
• When several equally important jobs are sitting in
the input queue waiting to be started, the scheduler
picks the shortest job first
– Another more appropriate term – shortest-next-CPU-
burst scheduling algorithm
– When the CPU is available, it is assigned to the process
that has the smallest next CPU burst
• Is the optimal algorithm (only) when all the jobs
are available simultaneously
→ Scheduled Event: Terminated State
Scheduling in Batch System
Shortest Job First (SJF)
• Ex:
– (Process:BurstTime) (P1:6), (P2:8), (P3:7), (P4: 3)
– Average waiting time: (3+ 16 + 9 + 0)/4 = 7
– Average turnaround time: (3 + 9 + 16 + 24)/4 = 13
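The SJF numbers above follow from sorting the jobs by burst length. A minimal sketch, again assuming all jobs are available simultaneously at time 0:

```python
def sjf(bursts):
    """Non-preemptive Shortest Job First for jobs all available at time 0.
    bursts: {name: burst}; returns (avg_waiting, avg_turnaround)."""
    clock, waits, turns = 0, [], []
    for name in sorted(bursts, key=bursts.get):  # shortest burst first
        waits.append(clock)          # waiting = start time
        clock += bursts[name]
        turns.append(clock)          # turnaround = completion time
    return sum(waits) / len(waits), sum(turns) / len(turns)

# Slide example: P1:6, P2:8, P3:7, P4:3 -> run order P4, P1, P3, P2
avg_wait, avg_turn = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
```

Running the shortest jobs first minimizes the average waiting time here, which is why SJF is optimal when all jobs are available at once.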
Scheduling in Batch System
Example

Avg waiting time = 3.6
Avg turnaround time = 7.6

[Gantt chart: processes A–E scheduled SJF over time 0–20]
Scheduling in Batch System
Shortest Remaining Time Next (SRT)
• Is a preemptive version of shortest job first
• The scheduler always chooses the process whose
remaining runtime is the shortest
• Preempt the currently executing process, if the next
CPU burst of the newly arrived process is shorter than
“what is left” of the currently executing process
• When a new job arrives, its total is compared to the
current process’s remaining time.
• If the new job needs less time to finish than the current
process, the current process is suspended and the new
job started
→ Scheduled Event: Ready State (Process Creation),
Terminated State
Scheduling in Batch System
Shortest Remaining Time Next (SRT)
• Ex:
– (Process:ArrivalTime:BurstTime) (P1:0:9), (P2:2:4),
(P3:4:1), (P4:5:4)
– Average waiting time: (9+ 1+ 0 + 2)/4 = 3
– Average turnaround time: (18 + 5 + 1 + 6)/4 = 7.5
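The SRT example can be simulated one time unit at a time, re-selecting the minimum remaining time at every tick. A sketch (process data from the slide; the tick-by-tick loop is my own simplification):

```python
import heapq

def srt(procs):
    """Shortest Remaining Time scheduling. procs = [(name, arrival, burst)].
    Returns {name: (waiting, turnaround)}."""
    procs = sorted(procs, key=lambda p: p[1])       # by arrival time
    burst = {name: b for name, _, b in procs}
    ready, clock, i, done = [], 0, 0, {}
    while len(done) < len(procs):
        while i < len(procs) and procs[i][1] <= clock:
            name, arr, b = procs[i]
            heapq.heappush(ready, (b, arr, name))   # keyed on remaining time
            i += 1
        if not ready:                               # CPU idle until next arrival
            clock = procs[i][1]
            continue
        rem, arr, name = heapq.heappop(ready)
        clock += 1                                  # run shortest job one tick
        if rem == 1:
            turnaround = clock - arr
            done[name] = (turnaround - burst[name], turnaround)
        else:
            heapq.heappush(ready, (rem - 1, arr, name))
    return done

# Slide example: P1 arrives at 0 (burst 9), P2 at 2 (4), P3 at 4 (1), P4 at 5 (4)
stats = srt([("P1", 0, 9), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```

P1 is preempted at t=2 by P2, which is in turn preempted at t=4 by P3; P1 only finishes last, at t=18.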
Scheduling in Batch System
Example

Avg waiting time = 3.2
Avg turnaround time = 7.2

[Gantt chart: processes A–E scheduled SRT over time 0–20; at each
arrival the scheduler picks the minimum remaining time]
Scheduling in Interactive Systems
Round-Robin Scheduling (RR)
• Each process is assigned a time interval (quantum or time
slice)
• Is a preemptive algorithm
• Think of First-Come First-Served, but with the following
addition:
– A process can hold the CPU for at most one quantum; if it is still
running at the end of the quantum, the CPU is preempted and given to
the process at the head of the ready queue, and the process that was
running is moved to the tail (the ready queue is treated as a circular queue)
– If the process blocks or finishes before the quantum has elapsed, the
CPU switch happens at that moment
• The length of the quantum
– Too short → lowers CPU efficiency and causes too many process switches
– Too long → poor response to short interactive requests (degenerates to FCFS)
– 20–50 msec is a reasonable compromise
→ Scheduled Event: Quantum timeout, Terminated/Blocked
state
Scheduling in Interactive Systems
Round-Robin Scheduling (RR)
• Ex: (Process:BurstTime) (P1:24), (P2:3), (P3:3) with
quantum = 4

– Average waiting time = (6 + 4 + 7)/3 ≈ 5.7


– Average turnaround time = (30 + 7 + 10)/3 ≈ 15.7
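The round-robin behavior above can be sketched with a circular ready queue. A minimal simulation, assuming all processes arrive at time 0 (data from the slide):

```python
from collections import deque

def round_robin(procs, quantum):
    """Round-robin scheduling. procs = [(name, burst)], all arriving at 0.
    Returns {name: (waiting, turnaround)}."""
    queue = deque(procs)
    burst = dict(procs)
    clock, done = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)
        clock += run
        if rem > run:
            queue.append((name, rem - run))            # preempted -> tail
        else:
            done[name] = (clock - burst[name], clock)  # (waiting, turnaround)
    return done

# Slide example: (P1:24), (P2:3), (P3:3) with quantum = 4
stats = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
```

P2 and P3 finish inside their first quantum (so the switch happens when they complete), while P1 cycles back to the tail until t=30.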
Scheduling in Interactive Systems
Example (quantum = 1)

Avg waiting time = 6.8
Avg turnaround time = 10.8

[Gantt chart: processes A–E scheduled round-robin with quantum = 1
over time 0–20]
Scheduling in Interactive Systems
Priority Scheduling
• Each process has a priority assigned
• The highest-priority runnable process is always run
• To give other processes a chance → change the priority of
the running process or assign it a quantum
• Priorities assignment
– Statically
• Static (or externally defined) priorities are predetermined for each
process: E.g., the process of the boss; or the process of the guy who paid
more than others to run on this machine
– Dynamically
• Dynamic (or internally defined) priorities are assigned by the system to
achieve certain goals: E.g., boost the priority of I/O-bound processes
• Priority classes
– Use priority scheduling between classes (e.g., 4 classes, 1–4)
– Use another scheduling algorithm with each class
Scheduling in Interactive Systems
Priority Scheduling
• Ex: (Process:BurstTime:Priority), applying non-preemptive
scheduling, lower value = higher priority
– (P1:10:3), (P2:1:1), (P3:2:4), (P4:1:2)
– Average waiting time: (2 + 0 + 12 + 1)/ 4 = 3.75
– Average turnaround time: (12 + 1 + 14 + 2)/ 4 = 7.25

• Problem
– Starvation
• Low-priority processes may wait indefinitely for the CPU behind a
steady stream of higher-priority processes
• Solution
– Aging
• A technique of gradually increasing the priority of processes that wait in
the system for a long time (driven by the clock interrupt)
• Ex: with priority values 127 (lowest) to 1 (highest), decrease the priority
value of a waiting process by 1 every 15 minutes; a process starting at
127 reaches priority 1 after about 32 hours
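The non-preemptive priority example above reduces to sorting by priority value. A minimal sketch, assuming all processes arrive at time 0 and a lower value means a higher priority (data from the slide):

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, lower value = higher priority.
    procs = [(name, burst, priority)], all arriving at time 0.
    Returns {name: (waiting, turnaround)}."""
    clock, stats = 0, {}
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        stats[name] = (clock, clock + burst)   # (waiting, turnaround)
        clock += burst
    return stats

# Slide example: (P1:10:3), (P2:1:1), (P3:2:4), (P4:1:2)
# Run order: P2, P4, P1, P3
stats = priority_schedule([("P1", 10, 3), ("P2", 1, 1),
                           ("P3", 2, 4), ("P4", 1, 2)])
```

Note how the lowest-priority process P3 waits behind everything else, which is exactly the starvation risk the aging technique addresses.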
Scheduling in Interactive Systems
Example

Avg waiting time = 4.2
Avg turnaround time = 8.2

[Gantt chart: processes A–E under priority scheduling over time 0–20]
Scheduling in Interactive Systems
Multiple Queues
• One simple way of mapping priorities onto actual
scheduling decisions would be to give to each process a
time slice related to its priority (e.g., more time slice to
higher-priority threads)
• A more convenient approach is to view the ready state as
not only one queue of processes, but multiple queues,
each with its own priority!
• Again, several options may exist:
– Processes of queues of higher priority may have to complete
before processes of queues of lower priority start running!
– Higher priority queues may get more time than lower priority
queues!
– Processes may move between queues (dynamically adjusted
priority)
Scheduling in Interactive Systems
Multiple Queues
Scheduling in Interactive Systems
Shortest Process Next (SPN)
• Non-preemptive policy
• Makes estimates based on past behavior and runs the
process with the shortest estimated running time
• The technique of estimating the next value in a series by
taking the weighted average of the current measured
value and the previous estimate is sometimes called
aging (commonly a = ½, with 0 ≤ a ≤ 1)
• Suppose the estimated time per command for some
terminal is T0, and T1 is the next measured run. The new
estimate is the weighted sum
aT0 + (1 – a)T1
In general: E(n+1) = a·E(n) + (1 – a)·T(n), where E(n) is
the previous estimate and T(n) the latest measured burst
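The aging update is one line of code. A minimal sketch (the burst history below is hypothetical, chosen so that a = ½ gives exact values):

```python
def next_estimate(prev_estimate, measured, a=0.5):
    """Exponentially weighted burst estimate:
    E(n+1) = a * E(n) + (1 - a) * T(n).
    With a = 1/2, history loses half its weight at each update."""
    return a * prev_estimate + (1 - a) * measured

# Hypothetical history: bursts shrink from 8 down to 4; the estimate,
# starting at 10, tracks the recent behavior.
estimate, history = 10.0, [8, 6, 4, 4]
for t in history:
    estimate = next_estimate(estimate, t)
```

After the four updates the estimate has decayed from 10.0 to 4.875, close to the recent bursts of 4: the scheduler would now treat this process as "short".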
Scheduling in Interactive Systems
Shortest Process Next (SPN)
• Ex:
Scheduling in Interactive Systems
Example

[Gantt chart over time 0–20 elided]
Scheduling in Interactive Systems
Example

Avg waiting time = 5.9
Avg turnaround time = 11.1

[Gantt chart: processes A–E scheduled SPN over time 0–20]
Scheduling in Interactive Systems
Guaranteed Scheduling
• In a single-user system with n processes running, each
one should get 1/n of the CPU cycles (fairness)
• Mechanism
– The system must keep track of how much CPU each process has
had since its creation
– Then, the system computes the amount of CPU each one is
entitled to, namely the time since creation divided by n
– The algorithm then runs the process with the lowest ratio of
actual to entitled CPU time until its ratio has moved above that
of its closest competitor
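The selection step of guaranteed scheduling is just a minimum over the actual/entitled ratios. A sketch (the process records and times below are hypothetical):

```python
def pick_next(processes, now):
    """Guaranteed scheduling: run the process with the lowest ratio of
    actual CPU time consumed to entitled CPU time ((now - creation) / n)."""
    n = len(processes)
    def ratio(p):
        entitled = (now - p["created"]) / n
        return p["cpu_used"] / entitled if entitled > 0 else 0.0
    return min(processes, key=ratio)["name"]

# Hypothetical snapshot at time 100, two processes created at time 0:
# each is entitled to 100/2 = 50 time units of CPU so far.
procs = [{"name": "A", "created": 0, "cpu_used": 30},   # ratio 0.6
         {"name": "B", "created": 0, "cpu_used": 10}]   # ratio 0.2
choice = pick_next(procs, now=100)                      # B is furthest behind
```

The scheduler would keep running B until its ratio climbs past A's, then switch, so both converge toward their 1/n share.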
Scheduling in Interactive Systems
Lottery Scheduling
• Give processes lottery tickets
• More important processes can be given extra tickets (to
increase their odds of winning)
• Whenever a scheduling decision has to be made, a lottery
ticket is chosen at random, and the process holding that
ticket gets the resource
• Lottery scheduling is highly responsive
• Lottery scheduling can be used to solve problems that are
difficult to handle with other methods
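Drawing a winning ticket is proportional sampling over ticket counts. A minimal sketch (process names and ticket counts are hypothetical; a fixed seed makes the run reproducible):

```python
import random

def lottery_pick(tickets, rng=random):
    """tickets: {process: ticket_count}. Draw one ticket uniformly at random;
    a process wins with probability proportional to its ticket share."""
    total = sum(tickets.values())
    draw = rng.randrange(total)
    for proc, count in tickets.items():
        if draw < count:
            return proc
        draw -= count

# A process holding 75 of 100 tickets should win ~75% of the decisions.
tickets = {"video": 75, "editor": 20, "backup": 5}
rng = random.Random(42)
wins = sum(lottery_pick(tickets, rng) == "video" for _ in range(10_000))
```

Handing a process extra tickets immediately raises its share of future decisions, which is why lottery scheduling is described as highly responsive.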
Scheduling in Interactive Systems
Fair-Share Scheduling
• Each process is scheduled on its own, without regard to
who its owner is. As a result, if user 1 starts up 9 processes
and user 2 starts up 1 process, with round robin or equal
priorities, user 1 will get 90% of the CPU and user 2 will
get only 10% of it
• To prevent this situation, some systems take into account
who owns a process before scheduling it. In this model,
each user is allocated some fraction of the CPU and the
scheduler picks processes in such a way to enforce it.
Thus if two users have each been promised 50% of the
CPU, they will each get that, no matter how many
processes they have in existence
Scheduling in Interactive Systems
Fair-Share Scheduling – Example
• The system has 2 users
– User 1 has 4 processes A, B, C, D
– User 2 has 1 process E
– How is fair-share scheduling applied to this system using
round-robin scheduling?
→ A E B E C E D E …
– How is fair-share scheduling applied using round-robin
with a quantum equal to 2?
→ A B E (2) C D E (2) A B E (2) C D E (2) …
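The first interleaving above (equal turns per user, round-robin within each user's processes) can be sketched directly:

```python
from itertools import cycle

def fair_share_order(users, rounds):
    """Fair-share sketch: alternate between users, and round-robin among
    each user's own processes. users = {user: [process, ...]}."""
    per_user = {u: cycle(ps) for u, ps in users.items()}  # per-user rotation
    user_turn = cycle(users)                              # equal user turns
    return [next(per_user[next(user_turn)]) for _ in range(rounds)]

# Slide example: user 1 owns A, B, C, D; user 2 owns only E.
order = fair_share_order({"u1": ["A", "B", "C", "D"], "u2": ["E"]}, rounds=8)
# -> A E B E C E D E, matching the slide
```

E runs every other slot even though it is 1 process against 4: each *user*, not each process, gets half the CPU.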
Scheduling in Real-Time Systems
• Context
– A real-time system is one in which time plays an essential role
– A real-time system comes in two kinds: hard real-time and soft real-time
– A real-time system divides the program into a number of processes whose
behavior is predictable and known in advance.
– These processes are generally short lived and can run to completion in well
under a second.
→ The scheduler schedules the processes in such a way that all
deadlines are met
• Static scheduling (applied to hard real-time)
– Make their scheduling decisions before the system starts running
– Only works when there is perfect information available in advance about the
work to be done and the deadlines that have to be met
• Dynamic scheduling (applied to soft real-time)
– Make their scheduling decisions at runtime
– Do not have static’s restrictions
Thread Scheduling
User-level Threads
• The kernel scheduler schedules the process
• The thread scheduler in each process
decides which thread to run
– If a thread has a long CPU burst, it will
consume all of the process's time (until it
finishes), because user mode gets no clock
interrupts
– Otherwise, each thread runs for a little, then
returns the CPU to the thread scheduler
before the kernel allocates the quantum to
another process
• Round-robin and priority scheduling can be
applied
• Switching a thread takes a handful of
machine instructions (Tanenbaum, Fig. 2-43)
• User-level threads can employ an
application-specific thread scheduler:
because the application knows what all its
threads do, it can pick the thread it needs to run
Thread Scheduling
User-level Threads

Cooperative scheduling of user-level threads


Implementing threads :: Operating systems 2018 (uu.se)
Thread Scheduling
User-level Threads

Preemptive scheduling of user-level threads


Implementing threads :: Operating systems 2018 (uu.se)
Thread Scheduling
User-level Threads

Cooperative and preemptive (hybrid) scheduling of user-level threads


Implementing threads :: Operating systems 2018 (uu.se)
Thread Scheduling
Kernel-level Threads
• The kernel scheduler schedules the
thread
• However, the kernel requires a full
context switch (changing a memory
map, invalidating caches, …);
especially, switching from a thread
in one process to a thread in another
process is expensive
• Moreover, the kernel never knows
what each thread does
→ User-level threads perform better
than kernel-level threads in
scheduling (Tanenbaum, Fig. 2-43)
Classical IPC Problems
The Dining Philosophers Problem
• Five philosophers are seated around a circular table
• Each philosopher has a plate of spaghetti
• Between each pair of plates is one fork
• A philosopher needs two forks to eat
• A philosopher alternates between eating and thinking
– When he/she gets hungry, he/she tries to pick up the two
forks closest to him/her
– Obviously, he/she cannot pick up a fork that is already in the
hand of a neighbour
– When he/she has finished eating, he/she puts down both
forks and starts thinking again
• Problem: program each philosopher so that it does what it is
supposed to do and never gets stuck
Classical IPC Problems
The Dining Philosophers Problem
• It represents the need to allocate several
resources among several processes without
– Deadlock (all of them take their left forks
simultaneously; none will be able to take
their right fork)
– Starvation (all of them could start the
algorithm simultaneously, pick up their left
forks, see that their right forks were not
available, put down their left forks, wait,
and pick up again … forever)
Tanenbaum, Fig. 2-44 & 45.
Classical IPC Problems
The Dining Philosophers Problem
• A random solution
– After failing to acquire the right fork, a philosopher waits a
random time instead of a fixed time before trying again
– However, it does not work with absolute certainty
• An adequate solution (applying a binary semaphore)
– Before starting to acquire forks, a philosopher does a down
on mutex
– After replacing the forks, he/she does an up on mutex
– However, it has a performance bug in practice: only one
philosopher can be eating at any instant instead of two
• Solution: use an array to keep track of each philosopher's
state and one semaphore per philosopher, combined with a
binary semaphore (mutex), allowing maximum parallelism
for an arbitrary number of philosophers
Classical IPC Problems
The Dining Philosophers Problem

Tanenbaum, Fig. 2-46


Classical IPC Problems
The Dining Philosophers Problem

Tanenbaum, Fig. 2-46


Classical IPC Problems
The Readers and Writers Problem
• Models access to a database (file)
• A process that read data = reader
• A process that modify data = writer
• If two readers access the shared data simultaneously, no adverse
effects will result
• Multiple readers are allowed, but not at the same time as a writer
• Only one writer is allowed to act on the database at one moment
• Solution: The readers-writers problem has several variations, all
involving priorities.
– The first readers-writers problem:
• No reader will be kept waiting unless a writer has already obtained permission to
use the shared object.
• Or, no reader should wait for other readers to finish.
• Or readers have higher priorities.
– The second readers-writers problem:
• Once a writer is ready, the writer performs its write as soon as possible.
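The readers-priority variant (the first problem above) is classically built from a reader counter, a mutex guarding it, and a database lock held by the first reader in and released by the last reader out. A sketch (names are mine; the calls below are sequential, but with threads the two readers could overlap):

```python
import threading

mutex = threading.Lock()          # protects read_count
db = threading.Lock()             # exclusive access to the shared data
read_count = 0
log = []

def reader(name):
    global read_count
    with mutex:
        read_count += 1
        if read_count == 1:       # first reader in locks writers out
            db.acquire()
    log.append(f"{name} reads")   # any number of readers may be here at once
    with mutex:
        read_count -= 1
        if read_count == 0:       # last reader out lets writers back in
            db.release()

def writer(name):
    with db:                      # writers require exclusive access
        log.append(f"{name} writes")

# Walk-through: two readers, then a writer once db is free again.
reader("r1"); reader("r2"); writer("w1")
```

Because only the first and last readers touch `db`, readers never wait for each other, which is precisely the readers-priority policy: a stream of readers can starve a writer.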
Classical IPC Problems
The Readers and Writers Problem

Tanenbaum, Fig. 2-46.


Classical IPC Problems
The Readers and Writers Problem

Tanenbaum, Fig. 2-46.


Summary
• Scheduling
• Classical IPC Problem

Q&A
Next Lecture
• Memory Management
− No Memory Abstraction (from Single to Multiple
programs)
− Memory Abstraction (mechanism and policy
applied to manage memory)
− Virtual Memory (mechanism and policy applied to
manage memory)
− Page replacement algorithms (applying to manage
memory)
