
CPU Scheduling_2.2.2024 (2)

The document discusses basic concepts of CPU scheduling in operating systems, highlighting the differences between single-processor and multiprogramming systems. It covers various scheduling algorithms such as FCFS, SJF, Round Robin, and Priority scheduling, along with their advantages and disadvantages. Additionally, it addresses multi-processor scheduling approaches and the complexities introduced by multicore processors.

Uploaded by

pranavjha.et21

Basic Concepts

 In a single-processor system, only one process may run at a time;
other processes must wait until the CPU is rescheduled.
 Objective of multiprogramming:
to have some process running at all times, in order to maximize CPU
utilization.
CPU–I/O Burst Cycle

 Process execution consists of a cycle of
  CPU execution and
  I/O wait.
 Process execution begins with a CPU burst, followed by an I/O burst, then
another CPU burst, and so on.
 Finally, the last CPU burst ends with a request to terminate execution.
 An I/O-bound program typically has many short CPU bursts.
 A CPU-bound program might have a few long CPU bursts.
Alternating sequence of CPU and I/O bursts
CPU Scheduler
CPU Scheduling
 Four situations under which CPU scheduling decisions take place:
1. When a process switches from the running state to the
waiting state. For example: an I/O request.
2. When a process switches from the running state to the
ready state. For example: when an interrupt occurs.
3. When a process switches from the waiting state to the
ready state. For example: completion of I/O.
4. When a process terminates.
 Scheduling under 1 and 4 is non-preemptive. Scheduling under 2 and
3 is preemptive.
 Non-Preemptive Scheduling
 Once the CPU has been allocated to a process, the process keeps the CPU until
it releases the CPU either
  by terminating or
  by switching to the waiting state.

• Preemptive Scheduling
 This is driven by the idea of prioritized computation.
 Processes that are runnable may be temporarily suspended.
 Disadvantages:
1. Incurs a cost associated with access to shared data.
2. Affects the design of the OS kernel.

• Virtually all modern operating systems, including Windows, macOS, Linux,
and UNIX, use preemptive scheduling algorithms.
• Preemptive scheduling can result in race conditions when data are shared among
several processes.
• Consider the case of two processes that share data. While one process is updating
the data, it is preempted so that the second process can run.
• The second process then tries to read the data, which are in an inconsistent
state.
• Preemption also affects the design of the operating-system kernel.
• Operating-system kernels can be designed as either non preemptive or
preemptive.
• A nonpreemptive kernel waits for a system call to complete, or for a process to
block while waiting for I/O, before doing a context switch. This scheme
ensures that the kernel structure is simple, since the kernel will not preempt a
process while the kernel data structures are in an inconsistent state.
• Unfortunately, this kernel-execution model is a poor one for supporting real-time
computing, where tasks must complete execution within a given time frame.
• A preemptive kernel requires mechanisms such as mutex locks to prevent
race conditions when accessing shared kernel data structures.
• Most modern operating systems are now fully preemptive when running in
kernel mode.
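As an illustrative sketch (not from the slides), the mutex-lock idea above can be shown with Python's threading.Lock serializing updates to shared data, so a preempted thread can never leave the counter half-updated:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    """Increment the shared counter n times, holding the lock per update."""
    global counter
    for _ in range(n):
        with lock:              # mutual exclusion around the shared update
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                  # with the lock held, always 400000
```

Without the lock, interleaved read-modify-write steps could lose updates; the lock is exactly the race-condition prevention the slide describes.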
Dispatcher
 It gives control of the CPU to the process selected by the short-term scheduler.
 The function involves:
1. Switching context from one process to another,
2. Switching to user mode, and
3. Jumping to the proper location in the user program to restart that program.
 It should be as fast as possible, since it is invoked during every context switch.
 Dispatch latency is the time taken by the dispatcher to
  stop one process and
  start another running.
The role of the dispatcher
Dispatcher

• The number of context switches can be obtained by using the vmstat
command that is available on Linux systems.
• vmstat 1 3
• This command provides three lines of output over a 1-second delay; the cs
(context switches) column might read, for example:
• 24 : average number of context switches per second since the system booted
• 225 : number of context switches during the first 1-second interval
• 339 : number of context switches during the second 1-second interval
• The /proc file system can be used to determine the number of context
switches for a given process.
• cat /proc/2166/status provides the following trimmed output:
voluntary_ctxt_switches: 150
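As a hypothetical sketch, the counters in a /proc/&lt;pid&gt;/status-style text can be parsed like this; the sample text and its values are invented for illustration (on a real Linux system you would read the file itself):

```python
# Invented sample in the format of /proc/<pid>/status output.
sample_status = """\
Name:\tbash
Pid:\t2166
voluntary_ctxt_switches:\t150
nonvoluntary_ctxt_switches:\t8
"""

def ctxt_switches(status_text):
    """Return a dict of the context-switch counters found in the status text."""
    counters = {}
    for line in status_text.splitlines():
        if "ctxt_switches" in line:
            key, value = line.split(":")
            counters[key.strip()] = int(value.strip())
    return counters

print(ctxt_switches(sample_status))
# {'voluntary_ctxt_switches': 150, 'nonvoluntary_ctxt_switches': 8}
```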
Scheduling Criteria

• The choice of algorithm to use in a particular situation depends upon the properties of
the various algorithms. Many criteria have been suggested for comparing CPU-scheduling
algorithms. The criteria include the following:
• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time
• It is desirable to maximize CPU utilization and throughput and to minimize turnaround
time, waiting time, and response time.
• Investigators have suggested that, for interactive systems, it is more important to
minimize the variance in the response time than to minimize the average response time.
• A system with reasonable and predictable response time may be considered more desirable
than a system that is faster on average but highly variable.
SCHEDULING ALGORITHMS
• CPU scheduling deals with the problem of deciding which of the
processes in the ready-queue is to be allocated the CPU.
 Following are some scheduling algorithms:
1. FCFS scheduling (First Come, First Served)
2. Round Robin scheduling
3. SJF scheduling (Shortest Job First)
4. SRT scheduling (Shortest Remaining Time)
5. Priority scheduling
6. Multilevel Queue scheduling and
7. Multilevel Feedback Queue scheduling
FCFS Scheduling
• The process that requests the CPU first is allocated the CPU first.
 The implementation is easily done using a FIFO queue.

 Procedure:
1. When a process enters the ready-queue, its PCB is linked onto the tail of the queue.

2. When the CPU is free, the CPU is allocated to the process at the queue’s head.
3. The running process is then removed from the queue.
 Advantage:
1. Code is simple to write and understand.
 Disadvantages:
1. Convoy effect: All other processes wait for one big process to get off the CPU.
2. Non-preemptive (a process keeps the CPU until it releases it).
3. Not good for time-sharing systems.
4. The average waiting time is generally not minimal.
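As a sketch of the FCFS procedure above, assuming all processes arrive at time 0 and using the classic burst times 24, 3, and 3 ms (the long first burst produces the convoy effect and a high average wait):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, assuming all arrive at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits for everything queued before it
        clock += burst
    return waits

bursts = [24, 3, 3]                      # P1, P2, P3 in arrival order
waits = fcfs_waiting_times(bursts)
print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0
```

Had P1 arrived last, the waits would be [0, 3, 6] with average 3.0, showing how sensitive FCFS is to arrival order.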
FCFS Scheduling
Shortest-Job-First Scheduling
 The CPU is assigned to the process that has the smallest next CPU burst.
 If two processes have the same length CPU burst, FCFS scheduling is
used to break the tie.
 For long-term scheduling in a batch system, we can use the process
time limit specified by the user as the 'length'.
 SJF can't be implemented at the level of short-term scheduling,
because there is no way to know the length of the next CPU burst.

 Advantage:
1. The SJF is optimal, i.e. it gives the minimum average waiting time for a
given set of processes.

 Disadvantage:
1. Determining the length of the next CPU burst.
Shortest-Job-First Scheduling
• The SJF algorithm may be either 1) non-preemptive or
2) preemptive.
1. Non-preemptive SJF: the current process is allowed to
finish its CPU burst.
2. Preemptive SJF: if the new process has a shorter next
CPU burst than what is left of the executing process, that
process is preempted. It is also known as SRTF scheduling
(Shortest-Remaining-Time-First).
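A minimal sketch of non-preemptive SJF, assuming all processes arrive at time 0 (ties fall back to FCFS order because Python's sort is stable):

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF waiting times, all processes arriving at time 0.
    Returns the waits in the original process order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # time spent waiting before this burst starts
        clock += bursts[i]
    return waits

bursts = [6, 8, 7, 3]                    # P1..P4
waits = sjf_waiting_times(bursts)
print(waits, sum(waits) / len(waits))    # [3, 16, 9, 0] 7.0
```

FCFS on the same bursts would give waits [0, 6, 14, 21] with average 10.25, illustrating why SJF is optimal for average waiting time.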
Shortest-Job-First Scheduling
Shortest-Job-First Scheduling
• The SJF algorithm is optimal, but cannot be implemented at the level of CPU
scheduling, as there is no way to know the length of the next CPU burst.
• One approach to this problem is to try to approximate SJF scheduling.
• The next CPU burst is generally predicted as an exponential average of the
measured lengths of previous CPU bursts.
• Let tn be the length of the nth CPU burst, and let τn+1 be the predicted value
for the next CPU burst.
• Then, for α, 0 ≤ α ≤ 1, define
• τn+1 = α·tn + (1 − α)·τn
• tn contains our most recent information, while τn stores the
past history.
• α controls the relative weight of recent and past history.
Shortest-Job-First Scheduling
• If α = 0, then τn+1 = τn, and
recent history has no effect
• If α = 1, then τn+1 = tn, and
only the most recent CPU
burst matters
• More commonly, α = 1/2,
so recent history and past
history are equally
weighted. The initial τ0 can
be defined as a constant or
as an overall system
average. Figure shows an
exponential average with α
= 1/2 and τ0 = 10.
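The exponential-average prediction can be sketched as follows, reproducing the figure's parameters (α = 1/2, τ0 = 10):

```python
def predict_bursts(measured, alpha=0.5, tau0=10):
    """Exponential average: tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n.
    Returns [tau_0, tau_1, ..., tau_{n+1}] for the measured bursts t_0..t_n."""
    taus = [tau0]
    for t in measured:
        taus.append(alpha * t + (1 - alpha) * taus[-1])
    return taus

bursts = [6, 4, 6, 4, 13, 13, 13]        # measured CPU burst lengths t_n
print(predict_bursts(bursts))
# [10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```

With α = 1/2 each prediction is the midpoint of the last burst and the previous guess, so the estimate tracks the jump from short (4-6) to long (13) bursts with a lag of a few samples.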
Shortest-Job-First Scheduling
• τn+1 = α·tn + (1 − α)·α·tn−1 + · · · + (1 − α)^j·α·tn−j + · · · + (1 − α)^(n+1)·τ0
• Since α is less than 1, (1 − α) is also less than 1, and each
successive term has less weight than its predecessor.
• The SJF algorithm can be either preemptive or non-preemptive.
• Preemptive SJF scheduling is sometimes called shortest-remaining-
time-first scheduling.
Shortest-Job-First Scheduling
Round Robin Scheduling

 Designed especially for time-sharing systems.
 It is similar to FCFS scheduling, but with preemption.
 A small unit of time is called a time quantum (or time slice).
 A time quantum typically ranges from 10 to 100 milliseconds.
 The ready-queue is treated as a circular queue.
 The CPU scheduler
  goes around the ready-queue and
  allocates the CPU to each process for a time interval of up to
1 time quantum.
 To implement:
• The ready-queue is kept as a FIFO queue of processes.
Round Robin Scheduling
 CPU scheduler
1. Picks the first process from the ready-queue.
2. Sets a timer to interrupt after 1 time quantum and
3. Dispatches the process.
 One of two things will then happen.
1. The process may have a CPU burst of less than 1 time quantum. In this
case, the process itself will release the CPU voluntarily.
2. If the CPU burst of the currently running process is longer than 1 time
quantum, the timer will go off and will cause an interrupt to the OS. The
process will be put at the tail of the ready-queue.
 Advantage:
 Better response time than SJF.
 Disadvantage:
 Higher average turnaround time than SJF.
Round Robin Scheduling
Round Robin Scheduling

 The RR scheduling algorithm is preemptive.
• No process is allocated the CPU for more than 1 time quantum in
a row. If a process's CPU burst exceeds 1 time quantum, that
process is preempted and put back in the ready-queue.
 The performance of the algorithm depends heavily on the size of the time
quantum.
1. If the time quantum is very large, the RR policy is the same as the FCFS policy.
2. If the time quantum is very small, the RR approach appears to the users as though each
of n processes has its own processor running at 1/n the speed of the real
processor.
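The quantum-driven behavior above can be sketched with a small simulation, assuming all processes arrive at time 0:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round-robin waiting times, assuming all processes arrive at time 0."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))    # ready-queue kept as a FIFO queue
    clock, finish = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)              # quantum expired: back to the tail
        else:
            finish[i] = clock            # burst shorter than quantum: done
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(rr_waiting_times([24, 3, 3], quantum=4))   # [6, 4, 7]
```

With quantum 4 the average wait is 17/3 ≈ 5.67 ms; raising the quantum to 24 or more reproduces the FCFS waits [0, 24, 27], matching point 1 above.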
Round Robin Scheduling
Round Robin Scheduling
Priority Scheduling

 A priority is associated with each process.
 The CPU is allocated to the process with the highest priority.
 Equal-priority processes are scheduled in FCFS order.
 Priorities can be defined either internally or externally.
• Internally-defined priorities:
• use some measurable quantity to compute the priority of a process,
• for example: time limits, memory requirements, number of open files.
• Externally-defined priorities:
• set by criteria that are external to the OS, for example:
 importance of the process, political factors.
Priority Scheduling
• Priority scheduling can be either preemptive or non-preemptive.
1. Preemptive: the CPU is preempted if the priority of the newly arrived process is
higher than the priority of the currently running process.
2. Non-preemptive: the new process is put at the head of the ready-queue.
 Advantage:
 Higher priority processes can be executed first.

 Disadvantage:
 Indefinite blocking (starvation), where low-priority processes are left waiting
indefinitely for the CPU. Solution: aging, a technique of gradually increasing
the priority of processes that wait in the system for a long time.
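One possible aging rule is sketched below; the boost interval and the convention that a numerically lower value means higher priority are assumptions for illustration, not from the slides:

```python
def aged_priorities(waiting_times, base_priorities, boost_every=15):
    """Hypothetical aging rule: raise a process's priority (lower its number)
    by 1 for every `boost_every` time units it has waited, floored at 0."""
    return [max(0, p - w // boost_every)
            for p, w in zip(base_priorities, waiting_times)]

# A low-priority process (priority 7) that has waited 45 units climbs to 4,
# while a process that has not waited keeps its base priority.
print(aged_priorities([45, 0], [7, 2]))   # [4, 2]
```

Run periodically, such a rule guarantees that any waiting process eventually reaches the top priority and gets the CPU, which is exactly how aging prevents starvation.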
Priority Scheduling
• Another option is to combine round-robin and priority scheduling in such a way that the
system executes the highest-priority process and runs processes with the same priority
using round-robin scheduling.
Multilevel Queue Scheduling
 Useful for situations in which processes are easily classified into different
groups.
 For example, a common division is made between
 foreground (or interactive) processes and
 background (or batch) processes.
 The ready-queue is partitioned into several separate queues.
 The processes are permanently assigned to one queue based on some
property like
 memory size
 process priority or
 process type.
 Each queue has its own scheduling algorithm.
• For example, separate queues might be used for foreground and background
processes.
Multilevel Queue Scheduling
Multilevel Feedback Queue Scheduling

• A process may move between queues.
• The basic idea: separate processes according to the features of their CPU bursts.
 For example
1. If a process uses too much CPU time, it will be moved to a lower-priority
queue. This scheme leaves I/O-bound and interactive processes in the higher-
priority queues.
2. If a process waits too long in a lower-priority queue, it may be moved to a
higher-priority queue. This form of aging prevents starvation.
Multilevel Feedback Queue Scheduling
• In general, a multilevel feedback queue scheduler is defined by the
following parameters:
1. The number of queues.
2. The scheduling algorithm for each queue.
3. The method used to determine when to upgrade a process to a higher
priority queue.
4. The method used to determine when to demote a process to a lower
priority queue.
5. The method used to determine which queue a process will
enter when that process needs service
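A toy sketch of those five parameters; the queue count, quanta, and demotion rule below are illustrative assumptions, not values from the slides:

```python
from collections import deque

def mlfq_trace(bursts, quanta=(8, 16)):
    """Toy multilevel feedback queue: two round-robin queues with the given
    quanta plus an FCFS bottom queue. A process that uses its full quantum
    is demoted one level. Returns the (pid, level) execution order."""
    levels = [deque(), deque(), deque()]
    remaining = list(bursts)
    for pid in range(len(bursts)):
        levels[0].append(pid)            # every process enters the top queue
    trace = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)   # highest non-empty
        pid = levels[lvl].popleft()
        trace.append((pid, lvl))
        run = remaining[pid] if lvl == 2 else min(quanta[lvl], remaining[pid])
        remaining[pid] -= run
        if remaining[pid] > 0:
            levels[min(lvl + 1, 2)].append(pid)  # used full quantum: demote
    return trace

print(mlfq_trace([30, 5]))   # [(0, 0), (1, 0), (0, 1), (0, 2)]
```

The long 30 ms job is demoted twice, while the short 5 ms job finishes inside its first quantum and never leaves the top queue, which is the intended separation of CPU-bound from interactive work.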
Multi-Processor Scheduling
• If multiple CPUs are available, load sharing, where multiple
threads may run in parallel, becomes possible, and the scheduling
problem becomes more complex.
• Traditionally, multiprocessor meant multiple physical processors, where each
processor contained one single-core CPU.
• On modern computing systems, multiprocessor now applies
to the following system architectures:
• Multicore CPUs
• Multithreaded cores
• NUMA systems
• Heterogeneous multiprocessing
Approaches to Multiple-Processor Scheduling
 Two approaches:
 Asymmetric Multiprocessing: The basic idea is:
• A master server is a single processor responsible for all scheduling
decisions, I/O processing, and other system activities.
• The other processors execute only user code.
• Advantage: This is simple because only one processor accesses the system
data structures, reducing the need for data sharing.
 Symmetric Multiprocessing: The basic idea is:
• Each processor is self-scheduling.
• To do scheduling, the scheduler for each processor
• examines the ready-queue and
• selects a process to execute.
Approaches to Multiple-Processor Scheduling
Approaches to Multiple-Processor Scheduling
• Private, per-processor run queues may lead to more efficient use of cache
memory.
• There are issues with per-processor run queues, such as workloads of varying
sizes.
• Balancing algorithms can be used to equalize workloads among all
processors.
• Virtually all modern operating systems support SMP, including Windows,
Linux, and macOS, as well as mobile systems such as Android and iOS.
Multicore Processors
• Most contemporary computer hardware now places multiple computing
cores on the same physical chip, resulting in a multicore processor.
• SMP systems that use multicore processors are faster and consume less
power than systems in which each CPU has its own physical chip.
• Multicore processors may complicate scheduling issues.
• Memory stall: modern processors operate at much faster speeds than
memory, so a processor may spend a significant amount of its time waiting
for data to become available, for example because of a cache miss.
• Chip multithreading (CMT): from an operating-system perspective, each
hardware thread maintains its architectural state, such as instruction pointer
and register set, and thus appears as a logical CPU that is available to run a
software thread.
Multicore Processors
Multicore Processors
• Intel processors use the term hyper-threading (also known as simultaneous
multithreading or SMT) to describe assigning multiple hardware threads to
a single processing core.
• Contemporary Intel processors, such as the i7, support two threads per
core.
• The Oracle SPARC M7 processor supports eight threads per core, with eight cores
per processor, thus providing the operating system with 64 logical CPUs.
• There are two ways to multithread a processing core: coarse-grained and
fine-grained multithreading.
