Module 2-1 ktu

The document discusses multiprogramming and processor scheduling, emphasizing the importance of maximizing CPU utilization by managing multiple processes in memory. It outlines different levels of scheduling (long-term, medium-term, and short-term) and various scheduling algorithms, including First-Come First-Served, Shortest Job First, Round Robin, and Priority-Based Scheduling, each with its advantages and disadvantages. The document also highlights the criteria for evaluating scheduling algorithms, such as CPU utilization, throughput, turnaround time, waiting time, and response time.

Uploaded by

nandanam3469
© All Rights Reserved

MODULE II

Introduction
• The objective of multiprogramming is to have some process running at all times, to maximize CPU
utilization.
• In a simple computer system, whenever the running process has to wait (for I/O completion, for
example), the CPU simply sits idle. All this waiting time is wasted; no useful work is accomplished.
• With multiprogramming, we try to use this time productively. Several processes are kept in
memory at one time. When one process has to wait, the operating system takes the CPU away from
that process and gives the CPU to another process.
• The aim of processor scheduling is to assign processes to be executed by the processor or processors
over time, in a way that meets system objectives such as response time, throughput, and processor
efficiency.
• Scheduling affects the performance of the system because it determines which processes will wait
and which will progress.
• Scheduling helps to manage queues, minimize queueing delay, and optimize performance in a
queueing environment.

Scheduling falls into one of two categories:


•Non-Preemptive: In this case, the CPU cannot be taken away from a process until the process
finishes its CPU burst or voluntarily moves to the waiting state (e.g., for I/O). Only then is the
CPU switched to another process.

•Preemptive: In this case, the OS can switch a process from the running state to the ready state.
This switching happens because the CPU may give another process priority, replacing the
currently running process with the higher-priority one.
Levels of Scheduling

• Long-term scheduling is performed when a new process is created. This is a decision
whether to add a new process to the set of processes that are currently active.
• Medium-term scheduling is a part of the swapping function. This is required at the
times when a suspended or swapped-out process is to be brought into a pool of ready
processes.
• Short-term scheduling is the actual decision of which ready process to execute next.
Long-Term Scheduler

The long-term scheduler works with the batch queue and selects the next batch job to be executed. Thus it
plans the CPU scheduling for batch jobs. Processes which are resource-intensive and have a low priority
are called batch jobs. These jobs are executed in a group or bunch. For example, a user requests the
printing of a bunch of files. We can also say that a long-term scheduler selects the processes or jobs from a
secondary storage device (e.g., a disk) and loads them into memory for execution. It is also known as
a job scheduler. The long-term scheduler is called "long-term" because the time for which the scheduling
is valid is long. This scheduler shows the best performance by selecting a good process mix of I/O-bound
and CPU-bound processes. I/O-bound processes are those that spend most of their time doing I/O rather
than computing. A CPU-bound process is one that spends most of its time in computations rather than
generating I/O requests.
Medium-Term Scheduler

The medium-term scheduler is required at the times when a suspended or swapped-out process is to be
brought into a pool of ready processes. A running process may be suspended because of an I/O request or
by a system call. Such a suspended process is then removed from the main memory and is stored in a
swapped queue in the secondary memory in order to create a space for some other process in the main
memory. This is done because there is a limit on the number of active processes that can reside in the main
memory. The medium-term scheduler is in charge of handling the swapped-out processes. It has nothing to
do while a process remains suspended; however, once the suspending condition is removed, the
medium-term scheduler attempts to allocate the required amount of main memory, swap the process in,
and make it ready. Thus, the medium-term scheduler plans the CPU scheduling for processes that have
been waiting for the completion of another process or an I/O task.
Short-Term Scheduler

The short-term scheduler selects processes from the ready queue that are residing in the
main memory and allocates CPU to one of them. Thus, it plans the scheduling of the
processes that are in a ready state. It is also known as a CPU scheduler. As compared to
the long-term scheduler, the short-term scheduler has to run very often, i.e., its
frequency of execution is high. The short-term scheduler is invoked whenever
an event occurs. Such an event may lead to the interruption of the current process or it may
provide an opportunity to preempt the currently running process in favor of another.
Scheduling Criteria

In choosing which algorithm to use in a particular situation, we must consider the properties of the various
algorithms.

• CPU utilization - The objective of any CPU scheduling algorithm is to keep the CPU as busy as
possible and to maximize its usage.
• Throughput - A measure of the work done by the CPU: the number of processes executed and
completed per unit of time.
• Turnaround time - From the point of view of a particular process, the important criterion is how long it
takes to execute that process. The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
• Waiting time - The scheduling algorithm does not affect the amount of time during which a process
executes or does I/O; it affects only the time a process spends waiting in the ready queue. Waiting
time is the sum of the periods spent waiting in the ready queue.
• Response time - In an interactive system, turnaround time may not be the best criterion. Often, a process
can produce some output fairly early and can continue computing new results while previous results are
being output to the user. Thus, another measure is the time from the submission of a request until the first
response is produced. This measure, called response time, is the time it takes to start responding, not the
time it takes to output the response.
SCHEDULING ALGORITHMS

CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. There are many different CPU scheduling algorithms.

A. FIRST-COME , FIRST- SERVED SCHEDULING

• With this scheme, the process that requests the CPU first is allocated the CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
• When a process enters the ready queue, its Process Control Block is linked onto the
tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the queue.
• The running process is then removed from the queue.
• The code for FCFS scheduling is simple to write and understand.
• FCFS is the simplest non-preemptive algorithm
• The average waiting time under the FCFS policy, however, is often quite long.
• Consider the following set of processes that arrive at time 0, with length of the CPU
burst given in milliseconds:
The average waiting time under an FCFS policy is generally not minimal and may vary
substantially if the process’s CPU burst times vary greatly.
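Since the process table is not reproduced here, the following sketch uses a hypothetical workload (the classic 24/3/3 ms burst times) to show how FCFS waiting times are computed when all processes arrive at t = 0:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when all arrive at t=0 and run in queue order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits for all bursts queued ahead of it
        elapsed += burst
    return waits

# Hypothetical burst times (ms) for P1, P2, P3; the original table is not shown here.
bursts = [24, 3, 3]
waits = fcfs_waiting_times(bursts)      # [0, 24, 27]
avg_wait = sum(waits) / len(waits)      # 17.0 ms
```

Note how order matters: if the two short jobs arrived first, the waits would be [0, 3, 6] and the average would drop to 3 ms. A long job arriving first delays everyone behind it, which is exactly the convoy effect mentioned below.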
ADVANTAGES:
1. Easy to understand and program.
2. Straightforward and easy to implement.
3. Especially suitable for batch operating systems, where longer waiting times for each process are
often acceptable.

DISADVANTAGES:
1. As it is a non-preemptive CPU scheduling algorithm, a process holds the CPU until it finishes execution.
2. The average waiting time under FCFS is often much higher than under other policies.
3. It suffers from the convoy effect.
4. Not very efficient due to its simplicity.
5. Processes at the end of the queue have to wait longer to finish.
6. It is not suitable for time-sharing operating systems, where each process should get the same amount of
CPU time.
B. SHORTEST-JOB-FIRST SCHEDULING
• This algorithm associates with each process the length of the process's next CPU burst.
• When the CPU is available, it is assigned to the process that has the smallest next CPU
burst.
• If the next CPU bursts of two processes are the same, FCFS scheduling is used to break
the tie.
• A more appropriate term for this scheduling method would be the shortest-next-CPU-burst
algorithm, because scheduling depends on the length of the next CPU burst of a process,
rather than its total length.
As an example of SJF scheduling, consider the following set of processes, with the length of
the CPU burst given in milliseconds:
Using SJF scheduling, we would schedule these processes according to the following Gantt
chart:

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting
time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if we were using the FCFS
scheduling scheme, the average waiting time would be 10.25 milliseconds.
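The waiting times quoted above are consistent with burst times of 6, 8, 7, and 3 ms for P1-P4 (the table itself is not reproduced here, so those values are inferred). A minimal non-preemptive SJF sketch using these assumed bursts:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF with all arrivals at t=0.
    Returns waiting times in the original process order; ties break FCFS
    because Python's sort is stable."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:             # run jobs shortest-first
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

bursts = [6, 8, 7, 3]                 # P1..P4, consistent with the waits above
waits = sjf_waiting_times(bursts)     # [3, 16, 9, 0]
avg_wait = sum(waits) / len(waits)    # 7.0 ms, as computed above
```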
• Moving a short process before a long one decreases the waiting time of the short process.
Consequently, the average waiting time decreases.
• The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
• The SJF algorithm can be either preemptive or non-preemptive.
• The choice arises when a new process arrives at the ready queue while a previous
process is still executing. The next CPU burst of the newly arrived process may be
shorter than what is left of the currently executing process.
• A preemptive SJF algorithm will preempt the currently executing process, whereas a
non-preemptive SJF algorithm will allow the currently running process to finish its CPU
burst.
As an example, consider the following four processes, with the length of the CPU burst
given in milliseconds:

If the processes arrive at the ready queue at the times shown and have the burst times
listed, then the resulting preemptive SJF schedule is as depicted in the following Gantt
chart:
At t=0 ms only one process, P1 (burst time 8 ms), is in the system, and it starts executing.
After 1 ms, i.e., at t=1, a new process P2 (burst time 4 ms) arrives in the ready queue. Since its
burst time is less than the remaining burst time of P1 (7 ms), P1 is preempted and execution of
P2 starts. At t=2, a new process P3 arrives in the ready queue, but its burst time (9 ms) is
larger than the remaining burst time of the currently running process (P2, 3 ms), so P2 is not
preempted and continues executing. At t=3, a new process P4 (burst time 5 ms) arrives; for the
same reason, P2 is again not preempted and runs until its execution is completed.
Turnaround time (TAT) is the time it takes to complete a process or fulfill a request.
Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.

Advantages
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages
1. May suffer from starvation.
2. Hard to implement in practice, because the exact burst time of a process cannot be known in
advance.
C. Round-Robin Scheduling Algorithms:

• One of the oldest, simplest, fairest and most widely used algorithm is round robin (RR).
• In the round robin scheduling, processes are dispatched in a FIFO manner but are given
a limited amount of CPU time called a time-slice or a quantum.

• If a process does not complete before its CPU-time expires, the CPU is preempted and
given to the next process waiting in a queue.
• The preempted process is then placed at the back of the ready list.

• If the process has blocked or finished before the quantum has elapsed the CPU
switching is done.

• Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective
in time sharing environments in which the system needs to guarantee reasonable
response times for interactive users.
• The only interesting issue with round robin scheme is the length of the quantum.
Setting the quantum too short causes too many context switches and lower the CPU
efficiency. On the other hand, setting the quantum too long may cause poor response
time and approximates FCFS.
• In any event, the average waiting time under round-robin scheduling is often quite long.
Consider the following set of processes that arrives at time 0 ms.

If we use a time quantum of 4 ms, calculate the average waiting time using RR scheduling.
According to RR scheduling, processes are executed in FCFS order. So, first P1
(burst time = 20 ms) is executed, but after 4 ms it is preempted and the new process P2 (burst
time = 3 ms) starts executing; its execution completes before the time quantum expires. Then the
next process P3 (burst time = 4 ms) executes, and finally the remaining part of P1 is executed
in slices of the 4 ms time quantum.

Waiting time of process P1: 0 ms + (11 - 4) ms = 7 ms
Waiting time of process P2: 4 ms
Waiting time of process P3: 7 ms
Average waiting time: (7 + 4 + 7)/3 = 6 ms
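The computation above can be checked with a small round-robin simulation, a sketch that assumes all processes arrive at t = 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin with all arrivals at t=0; returns waiting time per process."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))     # FIFO ready list
    completion = [0] * len(bursts)
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # run at most one quantum
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # preempted: back of the ready list
        else:
            completion[i] = t
    # waiting time = completion - burst (arrival is 0 for everyone)
    return [completion[i] - bursts[i] for i in range(len(bursts))]

waits = round_robin([20, 3, 4], quantum=4)   # [7, 4, 7] as computed above
avg_wait = sum(waits) / len(waits)           # 6.0 ms
```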
Advantages
The advantages of round-robin CPU scheduling are:
1. A fair amount of CPU time is allocated to each job.
2. Because it does not depend on burst-time estimates, it can actually be implemented in a real system.
3. It is not affected by the convoy effect or the starvation problem that occur in the First-Come,
First-Served CPU scheduling algorithm.

Disadvantages
The disadvantages of round-robin CPU scheduling are:
1. Very small time slices cause frequent context switches and decrease CPU output.
2. Round-robin scheduling spends more time on context switching.
3. The time quantum has a significant impact on its performance.
4. Processes cannot be assigned priorities.
Q: Draw the Gantt chart and find the average waiting time and turnaround time.
Assume a time quantum of 2 ms.
Question:

Schedule the given 5 processes with round-robin scheduling.

Draw the Gantt chart and calculate the average waiting time and turnaround
time for these processes if the time quantum is 2 units.
Turn Around Time = Completion Time – Arrival Time
Waiting Time = Turn Around Time – Burst Time
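These two formulas translate directly into code. The numbers below are hypothetical, purely to illustrate the arithmetic:

```python
def turnaround_and_waiting(arrival, burst, completion):
    """Apply the two formulas above to per-process lists of times."""
    tat = [c - a for c, a in zip(completion, arrival)]    # TAT = completion - arrival
    wait = [t - b for t, b in zip(tat, burst)]            # WT  = TAT - burst
    return tat, wait

# Hypothetical times: P1 arrives at 0, runs 5, completes at 8;
# P2 arrives at 1, runs 3, completes at 9.
tat, wait = turnaround_and_waiting([0, 1], [5, 3], [8, 9])
# tat = [8, 8]; wait = [3, 5]
```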
D. PRIORITY BASED SCHEDULING:

➢ Assign each process a priority. Schedule highest priority first. All processes within
same priority are FCFS.
➢ Priority may be determined by user or by some default mechanism. The system may
determine the priority based on memory requirements, time limits, or other resource
usage.
➢ Starvation occurs if a low priority process never runs.
➢ An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of
the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and
vice versa.
Priorities can be defined either internally or externally.

• Internally defined priorities use some measurable quantity or quantities to compute the
priority of a process. For example, time limits, memory requirements, the number of
open files, and the ratio of average I/O burst to average CPU burst have been used in
computing priorities.

• External priorities are set by criteria outside the operating system, such as the
importance of the process, deadlines or time sensitivity, or priorities assigned by a
system administrator.
Priority scheduling can be either preemptive or non-preemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process. A preemptive priority scheduling algorithm will preempt the CPU if
the priority of the newly arrived process is higher than the priority of the currently
running process. A non-preemptive priority scheduling algorithm will simply put the
new process at the head of the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or
starvation. A process that is ready to run but waiting for the CPU can be
considered blocked. A priority scheduling algorithm can leave some low-priority
processes waiting indefinitely.

A solution to the problem of indefinite blocking of low-priority processes is
aging. Aging is a technique of gradually increasing the priority of processes that
wait in the system for a long time.
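One simple way to realize aging is to let a process's effective priority improve linearly with its waiting time. The sketch below is hypothetical (names and the aging rate are illustrative); it shows a long-waiting low-priority process eventually outranking a freshly arrived high-priority one:

```python
def pick_next(ready, now, rate=1):
    """Choose the process with the best effective priority.
    ready: list of (name, base_priority, arrival_time); a LOWER number means
    HIGHER priority. Aging: the effective priority number drops by `rate`
    for every time unit a process has waited."""
    def effective(proc):
        name, base, arrived = proc
        return base - rate * (now - arrived)
    return min(ready, key=effective)[0]

# Hypothetical: 'old_low' arrived at t=0 with poor base priority 10;
# 'fresh_high' arrives at t=8 with good base priority 3.
ready = [("old_low", 10, 0), ("fresh_high", 3, 8)]
with_aging = pick_next(ready, now=8)            # 10 - 8 = 2 beats 3 - 0 = 3
without_aging = pick_next(ready, now=8, rate=0) # plain priority: fresh_high wins
```

With aging enabled the starved process wins the CPU; with the rate set to 0 the scheduler degenerates to plain priority scheduling and the low-priority process could wait forever.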
Multilevel Queue Scheduling

• Multilevel queue scheduling is a type of CPU scheduling in which the processes in the ready state are
divided into different groups, each group having its own scheduling needs.
• The ready queue is divided into different queues according to different properties of the process like
memory size, process priority, or process type.
• Each queue can be managed differently, i.e., each queue can have its own scheduling
algorithm.
• A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues, as shown in the figure below.
Properties of Multilevel Queue Scheduling
• Multilevel Queue Scheduling distributes the processes into multiple queues based on
the properties of the processes.
• Each queue has its own priority level and scheduling algorithm to manage the processes
inside that queue.
• Queues can be arranged in a hierarchical structure.
• High-priority queues might use round-robin scheduling, while low-priority queues might
use first-come, first-served scheduling.
• Processes cannot move between queues. This algorithm prioritizes different types of
processes and ensures fair resource allocation
How are the Queues Scheduled?
The scheduling of the processes in the queue is necessary to decide upon the process
that should get the CPU time first. Two methods are employed to do this:
• Fixed priority preemptive scheduling
• Time slicing.
Fixed Priority Preemptive Scheduling Method:
When setting the priority of processes in a queue, every queue has absolute priority
over lower-priority queues.
Here, unless Queue1 is empty, no process in Queue2 can be executed, and so on.
Time Slicing:
Each queue gets a slice or portion of the CPU time for scheduling its own processes.
For example, suppose Queue1 gets 40% of the CPU time then the remaining 60% of
the CPU time may be assigned as 40% to Queue2 and 20% to Queue3.
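Both methods are easy to sketch in code. The queue names and contents below are hypothetical:

```python
def pick_queue_fixed_priority(queues):
    """Fixed-priority preemptive method: always serve the first non-empty queue;
    lower queues run only when every higher-priority queue is empty."""
    for name, procs in queues:
        if procs:
            return name
    return None

def allocate_time_slices(frame_ms, percent):
    """Time slicing: split one scheduling frame among queues by fixed percentages."""
    return {name: frame_ms * p // 100 for name, p in percent.items()}

# Hypothetical state: Queue1 is empty, so Queue2 gets the CPU.
queues = [("Queue1", []), ("Queue2", ["P5"]), ("Queue3", ["P9"])]
chosen = pick_queue_fixed_priority(queues)   # "Queue2"

# The 40/40/20 split from the example above, over a 100 ms frame.
slices = allocate_time_slices(100, {"Queue1": 40, "Queue2": 40, "Queue3": 20})
```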
Multilevel Feedback-Queue Scheduling

• In the multilevel queue scheduling algorithm, processes are permanently assigned to a queue
when they enter the system, i.e., processes do not move from one queue to another,
since processes do not change their foreground or background nature.
• This setup has the advantage of low scheduling overhead, but it is inflexible.
• The multilevel feedback-queue scheduling algorithm, allows a process to move between
queues. The idea is to separate processes according to the characteristics of their CPU
bursts.
• If a process uses too much CPU time it will be moved to a lower-priority queue.
• This scheme leaves I/O-bound and interactive processes in the higher-priority queues. A
process that waits too long in a lower-priority queue may be moved to a higher-priority
queue.
• This form of aging prevents starvation.
• Consider a multilevel feedback-queue scheduler with three queues, numbered from 1 to 3

• The scheduler first executes all processes in queue 1. Only when queue 1 is empty will it execute processes
in queue 2.
• Processes in queue 3 will only be executed if queues 1 and 2 are empty.
• A process entering the ready queue is put in queue 1. A process in queue 1 is given a time quantum of 8
milliseconds.
• If it does not finish within this time, it is moved to the tail of queue 2. If queue 1 is empty, the process at the
head of queue 2 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put into
queue 3. Processes in queue 3 are run on an FCFS basis but are run only when queues 1 and 2 are empty.
• This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less.
Such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst.
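The three-queue example can be simulated with a short sketch. The burst times below are hypothetical, and for brevity the sketch omits new arrivals during execution and cross-queue preemption:

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Three-queue MLFQ sketch: quantum 8 ms in queue 1, 16 ms in queue 2,
    FCFS (run to completion) in queue 3. A job that exceeds its quantum is
    demoted. All jobs arrive at t=0; returns completion time per job."""
    queues = [deque(), deque(), deque()]
    remaining = list(bursts)
    for i in range(len(bursts)):
        queues[0].append(i)               # every new job enters queue 1
    t, done = 0, [0] * len(bursts)
    while any(queues):
        level = next(l for l in range(3) if queues[l])  # highest non-empty queue
        i = queues[level].popleft()
        q = quanta[level] if level < 2 else remaining[i]  # queue 3 runs to the end
        run = min(q, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queues[level + 1].append(i)   # used the full quantum: demote
        else:
            done[i] = t
    return done

# Hypothetical bursts: a 6 ms job finishes in queue 1; a 30 ms job uses
# 8 ms in queue 1, 16 ms in queue 2, and its last 6 ms in queue 3.
done_times = mlfq([6, 30])                # [6, 36]
```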

• A multilevel feedback-queue scheduler is defined by the following parameters:


• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher priority queue
• The method used to determine when to demote a process to a lower priority queue
• The method used to determine which queue a process will enter when that process needs service
• The definition of a multilevel feedback-queue scheduler makes it the most general CPU-scheduling
algorithm.
• Unfortunately, it is also the most complex algorithm, since defining the best scheduler requires some means
by which to select values for all the parameters.
Advantages of MFQS

• This is a flexible scheduling algorithm.
• It allows different processes to move between different queues.
• A process that waits too long in a lower-priority queue may be moved to a higher-priority
queue, which helps prevent starvation.

Disadvantages of MFQS

• This algorithm is the most complex to implement.
• Moving processes around different queues produces more CPU overhead.
• Selecting the best scheduler requires some means of choosing values for all the parameters.
THREADS
➢ Process and threads are the basic components of OS. The process is a program
under execution whereas a thread is part of the process. Threads allow a program to
perform multiple tasks simultaneously, like downloading a file while you browse a
website or running animations while processing user input.

➢ A thread is a sequential flow of tasks within a process.
➢ A thread is a small unit of processing that runs independently within a program.
➢ Threads are also called lightweight processes, as they are the smallest unit of
execution within a process.
➢ A thread has three states: Running, Ready, and Blocked.
➢ Main motive for thread
• Support multiple activities in a single application at the same time.
• Since light weight – easier to create and destroy than process.
• Performance enhancement.
Example: Word processor can have different threads for-
• Displaying graphics.
• Responding to keystrokes from the user.
• Performing spelling and grammar checking in the background.
• Autosave. etc
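The word-processor scenario can be sketched with Python's threading module: an autosave thread runs in the background while the main thread stays free for user input. The function and variable names are illustrative, not a real word-processor API:

```python
import threading

saves = []   # record of background saves, standing in for real disk writes

def autosave(stop, interval=0.01):
    """Background thread: periodically 'save' while the main thread keeps working."""
    while not stop.is_set():
        saves.append("saved")    # a real save_document() call would go here
        stop.wait(interval)      # sleep between saves, but wake instantly on shutdown

stop = threading.Event()
worker = threading.Thread(target=autosave, args=(stop,), daemon=True)
worker.start()
# ... meanwhile the main thread would respond to keystrokes here ...
stop.wait(0.05)                  # stand-in for the main thread doing other work
stop.set()                       # signal the autosave thread to shut down cleanly
worker.join()
```

Because both threads share the `saves` list (the same address space), no special inter-process communication is needed, which is exactly the sharing property described above.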
Components of Thread
A thread has the following three components:
1. Program Counter
2. Register Set
3. Stack space
These are private to each thread; all threads of a process share common code, data, and
certain structures such as open files.
Benefits of Threads
•Enhanced throughput of the system: When the process is split into many threads, and each thread is
treated as a job, the number of jobs done in the unit time increases. That is why the throughput of the
system also increases.
•Effective Utilization of Multiprocessor system: When you have more than one thread in one process,
you can schedule more than one thread in more than one processor.
•Faster context switch: The context switching period between threads is less than the process context
switching. The process context switch means more overhead for the CPU.
•Responsiveness: When a process is split into several threads, the other threads can continue to respond
to the user even if one thread is blocked or busy, so the application remains responsive.
•Communication: Communication between threads is simple because the threads share the same address
space, whereas processes must use special inter-process communication mechanisms to communicate
with each other.
•Resource sharing: Resources can be shared between all threads within a process, such as code, data, and
files. Note: The stack and register cannot be shared between threads. There is a stack and register for each
thread.
TYPES OF THREADS

1. User-Level Threads (ULT)
2. Kernel-Level Threads (KLT)

1. USER-LEVEL THREADS
In a pure ULT facility, all of the work of thread management is done by the application, and
the kernel is not aware of the existence of threads.
• Any application can be programmed to be multithreaded by using a threads library, which is
a package of routines for ULT management.
• The threads library contains code for creating and destroying threads, for passing messages
and data between threads, for scheduling thread execution, and for saving and restoring
thread contexts.
• A user thread is an entity used by programmers to handle multiple flows of controls
within a program.
• The API for handling user threads is provided by the threads library.
• A user thread only exists within a process; a user thread in process A cannot reference a
user thread in process B.
• By default, an application begins with a single thread and begins running in that thread.
• This application and its thread are allocated to a single process managed by the kernel.
• At any time, the application, while in the Running state, may spawn a new thread to run
within the same process.
• Spawning is done by a procedure call that invokes the spawn utility in the threads library.
• The threads library creates a data structure for the new thread and then passes control to
one of the Ready threads within this process, using some scheduling algorithm.
Advantages of User-level threads
1. User threads are easier to implement than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the
kernel level.
3. They are fast and efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The user-level thread representation is very simple: the registers, PC, stack, and mini thread
control blocks are stored in the address space of the user-level process.
7. Threads can be created, switched, and synchronized without kernel intervention.

Disadvantages of User-level threads


1. User-level threads lack coordination between the threads and the kernel.
2. If one thread causes a page fault, the entire process is blocked.
2. KERNEL-LEVEL THREADS
Kernel level threads are supported and managed directly by the operating system.
•The kernel knows about and manages all threads.
•One process control block (PCB) per process.
•One thread control block (TCB) per thread in the system.
•Provide system calls to create and manage threads from user space.
Advantages of Kernel Level Thread
•The kernel can simultaneously schedule multiple threads from the same process or from
multiple processes.
•If one thread is blocked, the kernel can schedule another thread of the same process.
•Because the kernel has full knowledge of all threads, the scheduler may decide to give more
time to a process having a large number of threads than to a process having a small number
of threads.

Disadvantages of Kernel Level Thread


•Kernel threads are slower to create and manage than user-level threads, since the kernel
must manage and schedule threads as well as processes.
•The kernel requires a full Thread Control Block (TCB) for each thread to maintain
information about it. As a result, there is significant overhead and increased kernel complexity.
Multithreading

Multithreading differs from multitasking: multitasking allows multiple tasks (processes) to run at the
same time, whereas multithreading allows multiple threads of a single task to be processed by the CPU
at the same time. Multithreading is thread-based multitasking: it allows multiple threads of the same
process to execute simultaneously.

Example: VLC media player, where one thread is used for opening the VLC media player, one for playing
a particular song and another thread for adding new songs to the playlist.

Multithreading increases responsiveness: one program contains multiple threads, so if one thread takes
too long to execute or gets blocked, the rest of the threads keep executing without any problem.
Multithreading is also less costly: creating a new process and allocating resources to it is time-consuming,
whereas creating multiple threads within one process and switching between them is comparatively cheap.
Multithreading models

• Many operating systems provide user level thread and kernel level thread in
combined way. The best example of this combined operating system is Solaris.
• In a combined system, multiple threads within the same application can run in
parallel on multiple processors and a blocking system call need not block the entire
process.

Multithreading models have three types.


1.Many to one model
2.One to one model
3.Many to Many model
Many-to-One
• Many user-level threads are mapped to a single kernel thread.
• It is efficient because thread management is implemented in user space.
• A process using this model blocks entirely if any thread makes a blocking system call.
• Only one thread can access the kernel at a time, so threads cannot run in parallel on a
multiprocessor.
• Example: a ticket-booking application.
Many-to-Many Model
• Multiplexes many user-level threads to a smaller or equal number of kernel threads.
• Allows the operating system to create a sufficient number of kernel threads.
• The number of kernel threads may be specific either to a particular application or to a
particular machine.
• The user can create any number of threads and corresponding kernel level threads can
run in parallel on multiprocessor.
• When a thread makes a blocking system call, the kernel can execute another thread.
One-to-One
• Each user-level thread maps to a separate kernel thread.
• It provides more concurrency, because another thread can execute when one thread
invokes a blocking system call.
• It facilitates parallelism on multiprocessor systems.
• Each user thread requires a kernel thread, which may affect the performance of the
system.
• For this reason, creation of threads in this model may be restricted to a certain number.
Multiprocessor scheduling

• The goal of multiple-processor scheduling, also known as multiprocessor scheduling, is to design a
scheduling function for a system that has several processors.
• In multiprocessor scheduling, multiple CPUs share the load (load sharing) so that multiple
processes can execute simultaneously.
• The CPUs in the system are in close communication and share a common bus, memory,
and other peripheral devices, so the system is said to be tightly coupled.
• In some instances of multiple-processor scheduling, the functioning of the processors is
homogeneous, or identical. Any process in the queue can run on any available processor.
• Multiprocessor systems can be homogeneous (the same CPU) or heterogeneous (various types of
CPUs).
There are two different architectures utilized in multiprocessor systems:
Symmetric Multiprocessing
In an SMP system, each processor is comparable and has the same access to memory and I/O resources. The
CPUs are not connected in a master-slave fashion, and they all use the same memory and I/O subsystems.
This suggests that every memory location and I/O device are accessible to every processor without
restriction. An operating system manages the task distribution among the processors in an SMP system,
allowing every operation to be completed by any processor.
Asymmetric Multiprocessing
In the asymmetric (AMP) architecture, one processor, known as the master processor, has complete
access to all of the system's resources, particularly memory and I/O devices. The master processor is
in charge of allocating tasks to the other processors, known as slave processors. Every slave processor
is responsible for a certain set of tasks that the master processor has assigned to it. The master
processor receives tasks from the operating system and distributes them to the subordinate processors.
Types of Multiprocessor Scheduling Algorithms

• Round-Robin Scheduling

• Priority Scheduling

• Scheduling with the shortest job first (SJF)

• Scheduling using a multilevel feedback queue (MLFQ)

• Earliest deadline first (EDF) scheduling − Each process in this algorithm is given a deadline, and the
process with the earliest deadline is the one that will execute first.
