
Unit 2

What is a Process in an Operating System?


A process is essentially a program in execution. The instructions of a process must execute in a specific order. A process is the fundamental unit of work that an operating system schedules and manages.

In other words, we write computer programs as text files; when we run them, they become processes that carry out all of the tasks specified in the program.

When a program is loaded into memory and becomes a process, it is divided into four sections: stack, heap, text, and data.

Components of a Process
It is divided into the following four sections:

Stack
Temporary data like method or function parameters, return address, and local variables are stored
in the process stack.
Heap
This is the memory that is dynamically allocated to a process during its execution.

Text
This comprises the compiled program code, together with the current activity represented by the value of the program counter and the contents of the processor’s registers.

Data
The global as well as static variables are included in this section.

Difference between Program and Process


1. Program: When we execute a program that was just compiled, the OS will generate
a process to execute the program. Execution of the program starts via GUI mouse
clicks, command line entry of its name, etc. A program is a passive entity as it resides
in the secondary memory, such as the contents of a file stored on disk. One program can
have several processes.
2. Process: The term process (Job) refers to program code that has been loaded into a
computer’s memory so that it can be executed by the central processing unit (CPU). A
process can be described as an instance of a program running on a computer or as an
entity that can be assigned to and executed on a processor. A program becomes a
process when loaded into memory and thus is an active entity.
The differences are summarized below:

1. A program contains a set of instructions designed to complete a specific task; a process is an instance of an executing program.
2. A program is a passive entity, as it resides in secondary memory; a process is an active entity, as it is created during execution and loaded into main memory.
3. A program exists in a single place and continues to exist until it is deleted; a process exists for a limited span of time, as it gets terminated after the completion of its task.
4. A program is a static entity; a process is a dynamic entity.
5. A program has no resource requirement beyond the memory space for storing its instructions; a process has a high resource requirement, needing resources like the CPU, memory address space, and I/O during its lifetime.
6. A program does not have any control block; a process has its own control block, called the Process Control Block.
7. A program has two logical components, code and data; a process additionally requires information needed for its management and execution.
8. A program does not change itself; many processes may execute a single program, and while their program code may be the same, their program data may differ and is never the same.
9. A program contains instructions; a process is a sequence of instruction execution.

Process Life Cycle


When a process runs, it goes through many states. Different operating systems define different stages, and the names of these states are not standardized. In general, a process can be in one of the five states listed below at any given time:

New State

 The process is submitted to the process queue, which in turn acknowledges the submission.
 Once the submission is acknowledged, the process is given the New status.

Ready State

 It then goes to the Ready state; at this moment the process is waiting to be assigned a processor by the OS.

Running State

 Once a processor is assigned, the process executes and moves to the Running state.

Wait and Termination State

 Now the process can follow one of the following transitions –

o The process may have all the resources it needs, in which case it executes to completion and goes to the Termination state.
o The process may need to go to the Waiting state for any of the following reasons:
 Access to an input/output device, e.g., asking the user via the console for values that are required.
 The process may be intentionally interrupted by the OS because a higher-priority operation needs to be completed first.
 A resource or memory access may be locked by another process, so the current process goes to the waiting state and waits for the resource to become free.
o Once the requirements are met, i.e., the process either regains the priority to execute or the requested locked resources become available, the process goes to the Running state again, from which it may go directly to the Termination state or may need to wait again for a required input, resource, or priority interrupt.
 Termination State – once execution completes, the process terminates.

Apart from the above, some newer systems also propose two more process states, which are –

1. Suspended Ready – When it is not possible to add a new process to the ready queue (for example, because main memory is full), the process is kept in secondary memory and is said to be in the suspended ready state.
2. Suspended Block – Similarly, if the waiting queue is full, a waiting process is swapped out to secondary memory and placed in the suspended block state.

What is the Process Control Block?


A Process Control Block (PCB) is a data structure that keeps track of information about a specific process. The CPU requires this information to complete the job.

Each process has its own PCB that identifies it. The PCB is also referred to as the context of the process.

Process Attributes
Here are the various attributes of any process that is stored in the PCB:

Process Id
The process Id is a one-of-a-kind identifier for each system process. Each process is given a
unique identifier when it is created.

Program Counter
The address of the next instruction to be executed is specified by the program counter. The
address of the program’s first instruction is used to initialize the program counter before it is
executed.

The value of the program counter is incremented automatically to refer to the next instruction
when each instruction is executed. This process continues till the program ends.

Process State
Throughout its existence, each process goes through various phases. The present state of the
process is defined by the process state.

Priority
The priority of a process determines how important it is to complete.
Among all the processes, the one with the highest priority is given the CPU first.

General Purpose Registers


General-purpose registers are used to store data created during the execution of a task. Every process has its own set of register values, which its PCB keeps track of.

List of Open Files


Each process requires certain files during its execution. The PCB keeps track of the files the process uses.

List of Open Devices


During the process’s execution, the PCB keeps track of all open devices.

Important Notes
 Each process’s PCB is stored in the main memory.
 Each process has only one PCB associated with it.
 All of the processes’ PCBs are listed in a linked list.
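
To make the PCB concrete, here is a minimal sketch of one as a Python record. The field names are illustrative only (a real OS keeps this structure inside the kernel, typically in C), and a plain dictionary keyed by process ID stands in for the process table:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block; field names are hypothetical."""
    pid: int                                          # unique process identifier
    state: str = "new"                                # new/ready/running/waiting/terminated
    program_counter: int = 0                          # address of next instruction
    priority: int = 0                                 # scheduling priority
    registers: dict = field(default_factory=dict)     # saved general-purpose registers
    open_files: list = field(default_factory=list)    # files used by the process
    open_devices: list = field(default_factory=list)  # devices used by the process

# A dict keyed by pid stands in for the kernel's process table.
process_table = {1: PCB(pid=1), 2: PCB(pid=2, priority=5)}
process_table[1].state = "ready"
```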

Advantages:

1. Efficient process management: The process table and PCB provide an efficient
way to manage processes in an operating system. The process table contains all the
information about each process, while the PCB contains the current state of the
process, such as the program counter and CPU registers.
2. Resource management: The process table and PCB allow the operating system to
manage system resources, such as memory and CPU time, efficiently. By keeping
track of each process’s resource usage, the operating system can ensure that all
processes have access to the resources they need.
3. Process synchronization: The process table and PCB can be used to synchronize
processes in an operating system. The PCB contains information about each
process’s synchronization state, such as its waiting status and the resources it is
waiting for.
4. Process scheduling: The process table and PCB can be used to schedule processes
for execution. By keeping track of each process’s state and resource usage, the
operating system can determine which processes should be executed next.
Disadvantages:

1. Overhead: The process table and PCB can introduce overhead and reduce system
performance. The operating system must maintain the process table and PCB for
each process, which can consume system resources.
2. Complexity: The process table and PCB can increase system complexity and make
it more challenging to develop and maintain operating systems. The need to manage
and synchronize multiple processes can make it more difficult to design and
implement system features and ensure system stability.
3. Scalability: The process table and PCB may not scale well for large-scale systems
with many processes. As the number of processes increases, the process table and
PCB can become larger and more difficult to manage efficiently.
4. Security: The process table and PCB can introduce security risks if they are not
implemented correctly. Malicious programs can potentially access or modify the
process table and PCB to gain unauthorized access to system resources or cause
system instability.
Note on PCB contents – Miscellaneous accounting and status data: this field of the PCB includes information such as the amount of CPU used, time constraints, and the job or process number. The PCB also stores the register contents, known as the execution context, of the processor from when the process was blocked from running. This execution-context record enables the operating system to restore a process’s execution context when the process returns to the running state. When the process makes a transition from one state to another, the operating system updates the information in the process’s PCB. The operating system maintains pointers to each process’s PCB in a process table so that it can access the PCB quickly.
Process Queues
The operating system manages various types of queues for each of the process states. The PCB related to a process is also stored in the queue of the same state. If the process is moved from one state to another, its PCB is unlinked from the corresponding queue and added to the queue of the new state into which the transition is made.
The following queues are maintained by the operating system.

1. Job Queue
Initially, all processes are stored in the job queue. It is maintained in secondary memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.

2. Ready Queue
The ready queue is maintained in primary memory. The short-term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.

3. Waiting Queue
When a process needs an I/O operation to complete its execution, the OS changes the state of the process from running to waiting. The context (PCB) associated with the process is stored on the waiting queue and is used by the processor when the process finishes its I/O.
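
The queue bookkeeping described above can be sketched in a few lines of Python. This is illustrative only (the queue names follow this section, not any particular kernel): a PCB is unlinked from one state queue and relinked to another as the process changes state.

```python
from collections import deque

job_queue, ready_queue, waiting_queue = deque(), deque(), deque()
job_queue.extend(["P1", "P2", "P3"])   # new jobs wait in secondary memory

# Long-term scheduler admits a job into primary memory.
ready_queue.append(job_queue.popleft())

# Short-term scheduler dispatches the process at the head of the ready queue.
running = ready_queue.popleft()

# The running process requests I/O: its PCB is parked on the waiting queue.
waiting_queue.append(running)

# I/O completes: the PCB is unlinked and relinked to the ready queue.
ready_queue.append(waiting_queue.popleft())
```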

Various Times related to the Process

1. Arrival Time
The time at which the process enters into the ready queue is called the arrival time.

2. Burst Time
The total amount of time required by the CPU to execute the whole process is called the burst time. This does not include waiting time. Burst time cannot be known exactly before a process executes; hence scheduling algorithms based purely on burst time cannot be implemented exactly in practice.

3. Completion Time
The Time at which the process enters into the completion state or the time at which the process
completes its execution, is called completion time.
4. Turnaround time
The total amount of time spent by the process from its arrival to its completion, is called
Turnaround time.

5. Waiting Time
The Total amount of time for which the process waits for the CPU to be assigned is called
waiting time.

6. Response Time
The difference between the arrival time and the time at which the process first gets the CPU is
called Response Time.
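
A small worked example ties these definitions together. Assume, purely for illustration, that a process arrives at time 2, needs a CPU burst of 4 units, first gets the CPU at time 5, and completes at time 9:

```python
arrival, burst, first_cpu, completion = 2, 4, 5, 9   # assumed sample values

turnaround = completion - arrival    # 9 - 2 = 7
waiting    = turnaround - burst      # 7 - 4 = 3
response   = first_cpu - arrival     # 5 - 2 = 3

print(turnaround, waiting, response)  # -> 7 3 3
```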

Process Schedulers in Operating System

Process scheduling is responsible for selecting a process for the processor based on a scheduling method, as well as removing a running process from the processor. It is a crucial component of a multiprogramming operating system, and it makes use of a variety of scheduling queues.

What is a Process Scheduler in an Operating System?


The process manager’s activity is process scheduling, which involves removing the running
process from the CPU and selecting another process based on a specific strategy. The scheduler’s
purpose is to implement the virtual machine so that each process appears to be running on its
own computer to the user.
Categories in Scheduling
Scheduling falls into one of two categories:
 Non-preemptive: In this case, a process’s resources cannot be taken away before the process has finished running. Resources are switched only when the running process finishes and transitions to a waiting state.
 Preemptive: In this case, the OS assigns resources to a process for a predetermined period of time. The process switches from the running state to the ready state, or from the waiting state to the ready state, during resource allocation. This switching happens because the CPU may give priority to other processes and replace the currently active process with a higher-priority one.

Types of Process Schedulers


Process schedulers are divided into three categories.

Long Term or Job Scheduler


It brings the new process to the Ready state. It controls the degree of multiprogramming, i.e., the number of processes present in the ready state at any point in time. It is important that the long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend much of their time on input and output operations, while CPU-bound processes spend most of their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two. Long-term schedulers operate at a high level and are typically used in batch-processing systems.

Functions of Long-Term Scheduler


 Long-term schedulers are in charge of determining the order in which processes are
executed and managing the execution of processes that may take a long time to
complete, such as batch jobs or background tasks.
 A long-term scheduler’s primary function is to minimize processing time by selecting a good mix of CPU-bound and I/O-bound jobs.
 CPU Bound Jobs: CPU-bound jobs are tasks or processes that necessitate
a significant amount of CPU processing time and resources (Central
Processing Unit). These jobs can put a significant strain on the CPU,
affecting system performance and responsiveness.
 I/O Bound Jobs: I/O bound jobs are tasks or processes that necessitate a
large number of input/output (I/O) operations, such as reading and writing
to discs or networks. These jobs are less dependent on the CPU and can
put a greater strain on the system’s I/O subsystem.
Limitations
 Response time: Because long-term schedulers operate at a higher level and do not
need to make scheduling decisions in real-time, they are typically slower to respond
than other types of schedulers, such as short-term schedulers. This may result in
longer wait times for processes awaiting admission to the system.
 Accuracy: Because they do not have real-time data on the state of the system, long-
term schedulers may be limited in their ability to accurately predict the resource
requirements of processes. This can lead to inefficient resource allocation and poor
system performance.
 Flexibility: Because they operate at a high level and are not designed to handle real-
time or interactive workloads, long-term schedulers are typically less flexible than
other types of schedulers. This can make it difficult for them to adapt to changing
workloads or system conditions.
 Overhead: Long-term schedulers can introduce overhead because they require more
time and resources to evaluate and manage process execution.

Short-Term or CPU Scheduler


It is responsible for selecting one process from the ready state and scheduling it onto the running state. Note: the short-term scheduler only selects the process to schedule; it does not load the process onto the CPU. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring that no starvation occurs due to high-burst-time processes. The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (ready to running state); context switching is done by the dispatcher only. A dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
Functions

The central processing unit (CPU) allocation to processes is controlled by the short-
term scheduler, also referred to as the CPU scheduler. The short-term scheduler
specifically carries out the following duties:
 Process Selection: The scheduler chooses a process from the list of available
processes in the ready queue, which is where all of the processes are waiting to be
run. A scheduling algorithm, such as First-Come, First-Served (FCFS), Shortest Job
First (SJF), Priority Scheduling, or Round Robin, is typically used to make the
selection.
 CPU Allocation: The scheduler assigns the CPU to a process after it has been
chosen, enabling it to carry out its instructions.
 Preemptive Scheduling: The scheduler can also preempt a running process,
interrupting its execution and returning the CPU to the ready queue if a higher-
priority process becomes available.
 Context Switching: When a process is switched out, the scheduler saves the context
of the process, including its register values and program counter, to memory. When
the process is later resumed, the scheduler restores this saved context to the CPU.
 Process Ageing: Process aging is a function of the scheduler that raises a process’
priority when it has been sitting in the ready queue for a long time. This aids in
avoiding processes becoming locked in an endless waiting state.
 Process synchronization and coordination: In order to prevent deadlocks, race
situations, and other synchronization problems, the scheduler also synchronizes
shared resource access among processes and coordinates their execution and
communication.
 Load balancing: The scheduler also distributes the workload among multiple
processors or cores to optimize the system’s performance.
 Power management: The scheduler also manages the power consumption by
adjusting the CPU frequency and turning off the cores that are not currently in use.
Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up. It is helpful in
maintaining a perfect balance between the I/O bound and the CPU bound. It reduces the
degree of multiprogramming.
Responsibilities of Medium Term Scheduler
 One of the medium-term scheduler’s main responsibilities is ensuring equitable resource distribution among all processes. This is necessary to guarantee that every process has a fair chance to run and to prevent any one process from using up all of the system resources.
 Another crucial task is ensuring effective process execution. This may entail modifying the priority of processes based on their present condition or resource utilization, or modifying the resource distribution to various processes based on their present requirements.
So, the operating system’s medium-term scheduler controls the scheduling and resource
distribution of processes that are blocked or waiting. It aids in ensuring that resources
are distributed equally throughout all of the processes and that they are carried out
effectively.

Functions
A medium-term scheduler’s main responsibilities include:
 Managing blocked or waiting processes: The medium-term scheduler is responsible for choosing which stalled or waiting processes should be unblocked and permitted to continue running. This may entail modifying the priority of processes based on their present condition or resource utilization, or modifying the resource distribution to various processes based on their present requirements.
 Managing resource usage: The medium-term scheduler is in charge of keeping
track of how much memory, CPU, and other resources are being utilized by the
different processes and modifying the resource allocation as necessary to guarantee
efficient and equitable use of resources.
 Process prioritization: The medium-term scheduler is in charge of prioritizing
processes based on a predetermined set of guidelines and criteria. This may entail
modifying the priority of processes based on their present condition or resource
utilization, or modifying the resource distribution to various processes based on their
present requirements.
 Process preemption: The medium-term scheduler has the ability to halt the
execution of lower-priority processes that have already consumed their time slices in
order to make room for higher-priority or more crucial activities.
 Aging of process: The medium-term scheduler can adjust the priority of a process
based on how long it has been waiting for execution. This is known as the aging of
process, which ensures that processes that have been waiting for a long time are
given priority over newer processes.
 Memory Management: The medium-term scheduler can also be responsible for
memory management, which involves allocating memory to processes and ensuring
that processes are not using more memory than they are supposed to.
 Security: A medium-term scheduler can help guarantee that system resources are not abused or misused by regulating the resource utilization of blocked or waiting processes, adding an extra layer of security to the system.
Limitations:
 Limited to batch systems: Medium-term scheduler is not suitable for real-time
systems, as it is not able to meet the strict deadlines and timings required for real-
time applications.
 Overhead: Managing the scheduling and resource allocation of blocked or waiting
processes can add significant overhead to the system, which can negatively impact
overall performance.
Comparison among Schedulers

Long-Term Scheduler:
 It is a job scheduler.
 Its speed is generally less than that of the short-term scheduler.
 It controls the degree of multiprogramming.
 It is barely present or nonexistent in time-sharing systems.
 It selects processes from the job queue and loads them into main memory for execution.

Short-Term Scheduler:
 It is a CPU scheduler.
 Its speed is the fastest among all of them.
 It gives less control over how much multiprogramming is done.
 It is minimal in time-sharing systems.
 It selects from among the processes that are ready to execute.

Medium-Term Scheduler:
 It is a process-swapping scheduler.
 Its speed lies in between that of the short-term and long-term schedulers.
 It reduces the degree of multiprogramming.
 It is a component of time-sharing systems.
 It can re-introduce a process into memory so that its execution can be continued.

What is Context Switching in OS?


Context switching is a technique used by the OS to switch the CPU from one process to another. When a switch is performed, the status of the old running process (including its register contents) is saved, and the CPU is assigned to a new process to execute its tasks. While the new process runs, the previous one waits in the ready queue. When the old process runs again, its execution resumes at the exact point at which it was stopped. Context switching is a defining feature of a multitasking OS, in which multiple processes share a single CPU to perform various tasks without requiring additional processors in the system.

The need for Context switching


Context switching helps to share a single CPU across all processes. The status of each process’s task is stored, so when a process is reloaded, its execution resumes at the same point at which it was switched out.

Following are the reasons that describe the need for context switching in the Operating system.

1. One process cannot switch directly to another in the system. Context switching lets the operating system switch between multiple processes that use the CPU’s resources to accomplish their tasks, storing each process’s context so its service can be resumed at the same point later. If we do not store the currently running process’s data or context, that data may be lost while switching between processes.
2. If a high-priority process enters the ready queue, the currently running process will be stopped so that the high-priority process can complete its tasks in the system.
3. If a running process requires I/O resources, the current process is switched out so another process can use the CPU. When the I/O requirement is met, the old process goes back to the ready state to wait for its execution on the CPU. Context switching stores the state of the process so it can resume its tasks later; otherwise, the process would need to restart its execution from the beginning.
4. If an interrupt occurs while a process is running, its status, including register contents, is saved using context switching. After the interrupt is handled, the process switches from the wait state to the ready state and later resumes execution at the same point where the interrupt occurred.
5. Context switching allows a single CPU to handle multiple process requests concurrently without the need for additional processors.

Example of Context Switching


Suppose that several processes, each with its own PCB, are present in the system. One process is in the running state, executing its task on the CPU. While it is running, another process arrives in the ready queue with a higher priority for completing its task on the CPU. Context switching is used to switch the current process out in favor of the new process that requires the CPU to finish its task. While switching, the context switch saves the status of the old process, including its registers, in its PCB. When the old process is later reloaded onto the CPU, it resumes execution from the point at which the new process had stopped it. If we did not save the state of the process, it would have to start its execution from the beginning. In this way, context switching lets the operating system switch between processes and store or reload a process whenever it needs to execute its tasks.

Context switching triggers


Following are the three types of context switching triggers as follows.

1. Interrupts
2. Multitasking
3. Kernel/User switch

Interrupts: when the CPU requests data to be read from a disk, for example, the completion of the request is signalled by an interrupt; a context switch occurs automatically so that the interrupt, which requires little time to handle, can be serviced.

Multitasking: A context switching is the characteristic of multitasking that allows the process to
be switched from the CPU so that another process can be run. When switching the process, the
old state is saved to resume the process's execution at the same point in the system.

Kernel/User Switch: a context switch of this kind occurs in the operating system when switching between user mode and kernel mode.

What is the PCB?


A PCB (Process Control Block) is a data structure used by the operating system to store all information related to a process. For example, the PCB records when a process is created in the operating system, updates to the process’s information, switching information for the process, and the process’s termination.
Working of Process Context Switching
Context switching between two processes, for example when a higher-priority process arrives in the ready queue, proceeds through the following steps:
 The state of the current process must be saved for rescheduling.
 The process state contains records, credentials, and operating system-specific information, stored in the PCB.
 The PCB can be stored in a single layer in kernel memory or in a custom OS file.
 A handle is added to the PCB so that the system knows the process is ready to run.
 The operating system pauses the execution of the current process and selects a process from the waiting list by consulting its PCB.
 The PCB’s program counter is loaded, and execution continues in the selected process.
 Process and thread priority values can affect which process is selected from the queue, and this can be important.
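
The save/restore steps above can be sketched as a toy Python model. The register names and dictionary-based PCBs are assumptions for illustration only; a real context switch is performed by the kernel in assembly and C:

```python
# Toy CPU state and two toy PCBs; dicts stand in for kernel structures.
cpu = {"pc": 120, "r0": 7, "r1": 42}
pcb_old = {"pid": 1, "state": "running", "saved": {}}
pcb_new = {"pid": 2, "state": "ready",
           "saved": {"pc": 300, "r0": 0, "r1": 9}}

def context_switch(cpu, old, new):
    # 1. Save the execution context of the current process into its PCB.
    old["saved"] = dict(cpu)
    old["state"] = "ready"            # or "waiting", depending on the cause
    # 2. Restore the saved context of the selected process onto the CPU.
    cpu.clear()
    cpu.update(new["saved"])
    new["state"] = "running"

context_switch(cpu, pcb_old, pcb_new)
print(cpu)   # -> {'pc': 300, 'r0': 0, 'r1': 9}: P2 resumes where it stopped
```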

Goals of Process Context Switching


The purposes of a scheduling algorithm are as follows:

 Maximize CPU utilization, that is, keep the CPU as busy as possible.
 Fair allocation of CPU time to every process

 Maximize the Throughput

 Minimize the turnaround time

 Minimize the waiting time

 Minimize the response time

CPU Scheduling Algorithms in Operating Systems


What is the need for a CPU scheduling algorithm?
CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU would otherwise remain idle, the OS has selected at least one of the processes available in the ready queue.

Objectives of Process Scheduling Algorithm:


 Utilization of CPU at maximum level. Keep CPU as busy as possible.
 Allocation of CPU should be fair.
 Throughput should be Maximum. i.e. Number of processes that complete their
execution per time unit should be maximized.
 Minimum turnaround time, i.e. time taken by a process to finish execution should
be the least.
 There should be a minimum waiting time and the process should not starve in the
ready queue.
 Minimum response time, i.e. the time at which a process produces its first response should be as low as possible.
What are the different terminologies to take care of in any
CPU Scheduling algorithm?
 Arrival Time: Time at which the process arrives in the ready queue.
 Completion Time: Time at which process completes its execution.
 Burst Time: Time required by a process for CPU execution.
 Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
 Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
Things to take care while designing a CPU Scheduling
algorithm?
Different CPU Scheduling algorithms have different structures and the choice of a
particular algorithm depends on a variety of factors. Many conditions have been raised
to compare CPU scheduling algorithms.
The criteria include the following:
 CPU utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from about 40 to 90 percent depending on the system load.
 Throughput: The number of processes performed and completed per unit of time is called throughput. Throughput may vary depending on the length or duration of the processes.
 Turnaround Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
 Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It affects only the waiting time of the process, i.e., the time the process spends waiting in the ready queue.
 Response Time: In an interactive system, turnaround time is not the best measure. A process may produce some output early and continue computing new results while previous results are being shown to the user. Another measure, therefore, is the time from the submission of a request until the first response is produced. This measure is called response time.
What are the different types of CPU Scheduling
Algorithms?
There are mainly two types of scheduling methods:
 Preemptive Scheduling: Preemptive scheduling is used when a process switches
from running state to ready state or from the waiting state to the ready state.
 Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.
Different types of CPU Scheduling Algorithms

1. First Come First Serve:


FCFS is considered the simplest of all operating system scheduling algorithms. The first come, first served scheduling algorithm states that the process that requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue.
Characteristics of FCFS:
 FCFS is a non-preemptive CPU scheduling algorithm.
 Tasks are always executed on a first-come, first-served basis.
 FCFS is easy to implement and use.
 This algorithm is not very efficient in performance, and the wait time is quite high.
Advantages of FCFS:
 Easy to implement
 First come, first serve method
Disadvantages of FCFS:
 FCFS suffers from Convoy effect.
 The average waiting time is much higher than the other algorithms.
 Because FCFS is so simple, it is not very efficient in performance.
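
A minimal sketch of FCFS in Python, assuming each process is given as a (name, arrival, burst) tuple with burst times known up front, makes the convoy effect visible: one long early job inflates the waiting time of everyone behind it.

```python
def fcfs(procs):
    """procs: list of (name, arrival, burst) tuples, bursts assumed known."""
    t, rows = 0, []
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        t = max(t, arrival) + burst           # run to completion in arrival order
        turnaround = t - arrival
        rows.append((name, t, turnaround, turnaround - burst))
    return rows                               # (name, completion, TAT, waiting)

# A long first job delays everyone behind it (convoy effect):
for row in fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]):
    print(row)    # P2 waits 23 and P3 waits 25 time units
```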
2. Shortest Job First(SJF):

Shortest job first (SJF) is a scheduling discipline that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time of the other processes waiting to be executed.

Characteristics of SJF:
 Shortest Job first has the advantage of having a minimum average waiting time
among all operating system scheduling algorithms.
 Each task has an associated unit of time in which it is expected to complete.
 It may cause starvation if shorter processes keep coming. This problem can be
solved using the concept of ageing.
Advantages of Shortest Job first:
 As SJF reduces the average waiting time thus, it is better than the first come first
serve scheduling algorithm.
 SJF is generally used for long term scheduling
Disadvantages of SJF:
 One demerit of SJF is starvation.
 It is often complicated to predict the length of the upcoming CPU request.
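
Here is a comparable sketch of non-preemptive SJF, under the same illustrative assumption that burst times are known in advance: among the processes that have already arrived, the one with the shortest burst runs next.

```python
def sjf(procs):
    """Non-preemptive SJF. procs: list of (name, arrival, burst) tuples."""
    procs, t, rows = list(procs), 0, []
    while procs:
        arrived = [p for p in procs if p[1] <= t]
        # If nothing has arrived yet, idle until the earliest arrival.
        batch = arrived or [min(procs, key=lambda p: p[1])]
        job = min(batch, key=lambda p: p[2])   # shortest burst among arrived
        procs.remove(job)
        name, arrival, burst = job
        t = max(t, arrival) + burst
        rows.append((name, t, t - arrival, t - arrival - burst))
    return rows                                # (name, completion, TAT, waiting)

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```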

3. Longest Job First(LJF):

The Longest Job First (LJF) scheduling process is just the opposite of shortest job first (SJF): as the name suggests, this algorithm is based on the fact that the process with the largest burst time is processed first. Longest Job First is non-preemptive in nature.
Characteristics of LJF:
 Among all the processes waiting in the waiting queue, the CPU is always assigned to the process having the largest burst time.
 If two processes have the same burst time, the tie is broken using FCFS, i.e., the process that arrived first is processed first.
 LJF scheduling also has a preemptive variant, Longest Remaining Time First (LRTF).
Advantages of LJF:
 No other task can be scheduled until the longest job or process has executed completely.
 All the jobs or processes finish at approximately the same time.
Disadvantages of LJF:
 Generally, the LJF algorithm gives a very high average waiting time and average
turn-around time for a given set of processes.
 This may lead to convoy effect.
4. Priority Scheduling:

The preemptive priority CPU scheduling algorithm is a preemptive method of CPU scheduling that works based on the priority of a process. In this algorithm, the scheduler treats the highest-priority process as the most important, meaning that the most important process must be done first. In case of a conflict, i.e., when more than one process has equal priority, the tie is broken on the basis of the FCFS (First Come First Serve) algorithm.
Characteristics of Priority Scheduling:
 Schedules tasks based on priority.
 When higher-priority work arrives while a task with lower priority is executing, the higher-priority work takes the place of the lower-priority one, and the latter is suspended until the higher-priority execution is complete.
 The lower the number assigned, the higher the priority level of the process.
Advantages of Priority Scheduling:
 The average waiting time is less than FCFS
 Less complex
Disadvantages of Priority Scheduling:
 The most common demerit of the preemptive priority CPU scheduling algorithm is the starvation problem: a low-priority process may have to wait a very long time before it gets scheduled onto the CPU.

5. Round robin:

Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed time slot. It is the preemptive version of the first come, first served CPU scheduling algorithm, and it is generally built around a time-sharing technique.
Characteristics of Round robin:
 It is simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
 It is one of the most widely used methods in CPU scheduling.
 It is considered preemptive, as each process is given the CPU for only a limited time.
Advantages of Round robin:
 Round robin seems to be fair as every process gets an equal share of CPU.
 The newly created process is added to the end of the ready queue.
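
Below is a round-robin sketch using a deque, assuming for simplicity that all processes arrive at time 0; a process that exhausts its quantum goes to the back of the ready queue.

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst); all assumed to arrive at time 0."""
    queue, t, completion = deque(procs), 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)         # run for one time slice at most
        t += run
        if remaining > run:
            queue.append((name, remaining - run))   # back of the ready queue
        else:
            completion[name] = t
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=2))
# -> {'P2': 9, 'P1': 12, 'P3': 16}
```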
6. Shortest Remaining Time First:

Shortest remaining time first is the preemptive version of the Shortest job first which
we have discussed earlier where the processor is allocated to the job closest to
completion. In SRTF the process with the smallest amount of time remaining until
completion is selected to execute.
Characteristics of Shortest remaining time first:
 SRTF algorithm makes the processing of the jobs faster than SJF algorithm, given
it’s overhead charges are not counted.
 The context switch is done a lot more times in SRTF than in SJF and consumes the
CPU’s valuable time for processing. This adds up to its processing time and
diminishes its advantage of fast processing.
Advantages of SRTF:
 In SRTF the short processes are handled very fast.
 The system also requires very little overhead since it only makes a decision when a
process completes or a new process is added.
Disadvantages of SRTF:
 Like the shortest job first, it also has the potential for process starvation.
 Long processes may be held off indefinitely if short processes are continually
added.
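
SRTF can be sketched as a unit-time simulation: at every tick, the process with the least remaining time among those that have arrived runs, so a newly arrived short job preempts a longer one mid-execution. The input format is the same illustrative (name, arrival, burst) tuple used above.

```python
def srtf(procs):
    """Preemptive SJF (SRTF). procs: list of (name, arrival, burst) tuples."""
    remaining = {name: burst for name, _, burst in procs}
    t, completion = 0, {}
    while remaining:
        ready = [p for p in procs if p[1] <= t and p[0] in remaining]
        if not ready:                 # CPU idles until the next arrival
            t += 1
            continue
        name = min(ready, key=lambda p: remaining[p[0]])[0]
        remaining[name] -= 1          # run the chosen process for one tick
        t += 1
        if remaining[name] == 0:
            del remaining[name]
            completion[name] = t
    return completion

# P2 arrives at t=1 with a shorter remaining time and preempts P1.
print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
```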

7. Longest Remaining Time First:

The longest remaining time first is a preemptive version of the longest job first
scheduling algorithm. This scheduling algorithm is used by the operating system to
program incoming processes for use in a systematic way. This algorithm schedules
those processes first which have the longest processing time remaining for completion.
Characteristics of longest remaining time first:
 Among all the processes waiting in the waiting queue, the CPU is always assigned to the process having the largest remaining burst time.
 If two processes have the same remaining burst time, the tie is broken using FCFS, i.e., the process that arrived first is processed first.
 LRTF is the preemptive form of LJF scheduling.
Advantages of LRTF:
 No other process can execute until the longest task executes completely.
 All the jobs or processes finish at the same time approximately.
Disadvantages of LRTF:
 This algorithm gives a very high average waiting time and average turn-around
time for a given set of processes.
 This may lead to a convoy effect.
Thread in Operating System

Thread is a separate execution path. It is a lightweight process that the operating


system can schedule and run concurrently with other threads. The operating system
creates and manages threads, and they share the same memory and resources as the
program that created them. This enables multiple threads to collaborate and work
efficiently within a single program.
A thread is a single sequence stream within a process. Threads are also called lightweight processes, as they possess some of the properties of processes. Each thread belongs to exactly one process. In an operating system that supports multithreading, a process can consist of many threads. However, threads can run truly in parallel only when there is more than one CPU; otherwise, two threads must share a single CPU through context switching.

Why Do We Need Thread?


 Threads run in parallel improving the application performance. Each such thread has
its own CPU state and stack, but they share the address space of the process and the
environment.
 Threads can share common data so they do not need to use interprocess
communication. Like the processes, threads also have states like ready, executing,
blocked, etc.
 Priority can be assigned to the threads just like the process, and the highest priority
thread is scheduled first.
 Each thread has its own Thread Control Block (TCB). As with a process, a context switch occurs for the thread, and its register contents are saved in the TCB. Since threads share the same address space and resources, synchronization is also required for the various activities of the threads.
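
These points can be demonstrated with Python’s standard threading module: both threads update the same variable directly, with no inter-process communication, precisely because they share the process’s address space (which is also why the lock is needed).

```python
import threading

counter = 0                  # lives in the process's shared address space
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # shared data is why synchronization is needed
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)               # -> 200000: both threads updated one variable
```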

Thread Control Block in Operating System


A Thread Control Block (TCB) represents a thread generated in the system. It contains information about the thread, such as its ID and state.

The components have been defined below:


 Thread ID: It is a unique identifier assigned by the Operating System to the thread
when it is being created.
 Thread states: These are the states of the thread, which change as the thread progresses through the system.
 CPU information: This includes everything the OS needs to know about the thread, such as how far it has progressed and what data it is using.
 Thread Priority: It indicates the weight (or priority) of the thread over other
threads which helps the thread scheduler to determine which thread should be
selected next from the READY queue.
 A pointer which points to the process which triggered the creation of this thread.
 A pointer which points to the thread(s) created by this thread.

Multi-Threading
A thread is also known as a lightweight process. The idea is to achieve parallelism by
dividing a process into multiple threads. For example, in a browser, multiple tabs can be
different threads. MS Word uses multiple threads: one thread to format the text, another
thread to process inputs, etc. More advantages of multithreading are discussed below.
Multithreading is a technique used in operating systems to improve the performance
and responsiveness of computer systems. Multithreading allows multiple threads (i.e.,
lightweight processes) to share the same resources of a single process, such as the CPU,
memory, and I/O devices.

Difference between Process and Thread


1. A process means any program in execution; a thread is a segment of a process.
2. A process takes more time to terminate; a thread takes less time to terminate.
3. A process takes more time for creation; a thread takes less time for creation.
4. A process takes more time for context switching; a thread takes less time for context switching.
5. A process is less efficient in terms of communication; a thread is more efficient in terms of communication.
6. Multiprogramming holds the concept of multiple processes; we do not need multiple programs in action for multiple threads, because a single process can consist of multiple threads.
7. Processes are isolated; threads share memory.
8. A process is called a heavyweight process; a thread is lightweight, as each thread in a process shares code, data, and resources.
9. Process switching uses an interface in the operating system; thread switching does not require calling the operating system and does not cause an interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other processes; if a user-level thread is blocked, all other user-level threads of its process are blocked.
11. A process has its own Process Control Block, stack, and address space; a thread has its parent’s PCB, its own Thread Control Block and stack, and a common address space.
12. Changes to the parent process do not affect child processes; since all threads of the same process share the address space and other resources, changes to the main thread may affect the behavior of the other threads of the process.
13. A system call is involved in creating a process; no system call is involved in creating a user-level thread, which is created using APIs.
14. Processes do not share data with each other; threads share data with each other.

Advantages of Thread
 Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.
 Faster context switch: Context switch time between threads is lower compared to
the process context switch. Process context switching requires more overhead from
the CPU.
 Effective utilization of multiprocessor system: If we have multiple threads in a
single process, then we can schedule multiple threads on multiple processors. This
will make process execution faster.
 Resource sharing: Resources like code, data, and files can be shared among all
threads within a process. Note: Stacks and registers can’t be shared among the
threads. Each thread has its own stack and registers.
 Communication: Communication between multiple threads is easier, as the threads share a common address space, whereas processes must follow specific inter-process communication techniques to communicate with each other.
 Enhanced throughput of the system: If a process is divided into multiple threads,
and each thread function is considered as one job, then the number of jobs
completed per unit of time is increased, thus increasing the throughput of the
system.

Types of Threads
Threads are of two types. These are described below.

User Level Threads


A user-level thread is a type of thread that is not created using system calls. The kernel plays no part in the management of user-level threads, so they can be implemented entirely by the user. To the kernel, a process using user-level threads appears as a single-threaded process. Let’s look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
 Implementation of user-level threads is easier than that of kernel-level threads.
 Context switch time is less for user-level threads.
 A user-level thread is more efficient than a kernel-level thread.
 Because only a program counter, register set, and stack space are involved, it has a simple representation.
Disadvantages of User-Level Threads
 There is a lack of coordination between threads and the kernel.
 In case of a page fault, the whole process can be blocked.

Kernel Level Threads


A kernel-level thread is a type of thread that the operating system recognizes directly. The kernel has its own thread table where it keeps track of the threads in the system, and the operating system kernel manages the threads. Kernel-level threads have a somewhat longer context-switching time.
Advantages of Kernel-Level Threads
 The kernel has up-to-date information on all threads.
 Applications that block frequently are handled well by kernel-level threads.
 Whenever any process requires more processing time, the kernel-level thread can provide more time to it.
Disadvantages of Kernel-Level Threads
 A kernel-level thread is slower than a user-level thread.
 Implementation of this type of thread is a little more complex than that of a user-level thread.

Multithreading Models in Operating system


Multithreading Model:
Multithreading allows an application to divide its task into individual threads. In multithreading, the same process or task can be carried out by a number of threads; that is, there is more than one thread to perform the task. With the use of multithreading, multitasking can be achieved.

The main drawback of single-threaded systems is that only one task can be performed at a time. To overcome this drawback, multithreading allows multiple tasks to be performed concurrently.

For example, client1, client2, and client3 can all access a web server without any waiting, because in multithreading several tasks can run at the same time.

In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads are handled independently, above the kernel, and are therefore managed without any kernel support. On the other hand, the operating system directly manages kernel-level threads. Nevertheless, there must be some form of relationship between user-level and kernel-level threads.

There exist three established multithreading models classifying these relationships:

o Many-to-one multithreading model
o One-to-one multithreading model
o Many-to-many multithreading model

Many-to-one multithreading model:

The many-to-one model maps many user-level threads to one kernel thread. This type of relationship facilitates an effective context-switching environment and is easily implemented even on a simple kernel with no thread support.

The disadvantage of this model is that, since there is only one kernel-level thread scheduled at any given time, it cannot take advantage of the hardware acceleration offered by multithreaded processes or multiprocessor systems. All thread management is done in user space, and if a blocking call occurs, the model blocks the whole process. In this model, all user-level threads are associated with a single kernel-level thread.

One-to-one multithreading model

The one-to-one model maps a single user-level thread to a single kernel-level thread. This type of relationship facilitates running multiple threads in parallel. However, the benefit comes with a drawback: the creation of every new user thread requires creating a corresponding kernel thread, causing overhead that can hinder the performance of the parent process. The Windows series and Linux operating systems try to tackle this problem by limiting the growth of the thread count. In this model, each user-level thread is associated with its own kernel-level thread.

Many-to-many multithreading model

In this type of model, there are several user-level threads and several kernel-level threads. The number of kernel threads created depends on the particular application; the developer can create as many threads at both levels as needed, but the two counts need not be the same. The many-to-many model is a compromise between the other two models. In this model, if any thread makes a blocking system call, the kernel can schedule another thread for execution. Also, the complexity introduced in the previous models is not present here. Though this model allows the creation of multiple kernel threads, true concurrency cannot be achieved, because the kernel can schedule only one process at a time.
