Unit_2_OS
In other words, we write computer programs in the form of text files; when we run them, they become processes that carry out all of the duties specified in the program.
A program, when loaded into memory to become a process, can be divided into four sections: stack, heap, text, and data.
Components of a Process
It is divided into the following four sections:
Stack
Temporary data like method or function parameters, return address, and local variables are stored
in the process stack.
Heap
This is the memory that is dynamically allocated to a process during its execution.
Text
This comprises the contents present in the processor’s registers as well as the current activity
reflected by the value of the program counter.
Data
The global as well as static variables are included in this section.
Program | Process
A program exists at a single place and continues to exist until it is deleted. | A process exists for a limited span of time; it gets terminated after the completion of its task.
A program does not have any resource requirement; it only requires memory space for storing the instructions. | A process has a high resource requirement; it needs resources like CPU, memory address space, and I/O during its lifetime.
A program does not have any control block. | A process has its own control block, called the Process Control Block.
New State
When a process is first created, it is in the New State; it has not yet been admitted to the ready queue.
Ready State
It then goes to the Ready State; at this point the process is waiting to be assigned a processor by the OS.
Running State
Once a processor is assigned, the process is executed and moves into the Running State.
Apart from the above, some newer systems also propose two more process states:
1. Suspended Ready – When there is no room to add a new process to the ready queue, a ready process may be swapped out to secondary memory; it is then in the suspended ready state.
2. Suspended Block – Similarly, if the waiting (blocked) queue is full, a blocked process may be swapped out and is said to be in the suspended block state.
Process Attributes
Here are the various attributes of a process that are stored in the PCB:
Process Id
The process ID is a unique identifier for each process in the system, assigned when the process is created.
Program Counter
The address of the next instruction to be executed is specified by the program counter. The
address of the program’s first instruction is used to initialize the program counter before it is
executed.
The value of the program counter is incremented automatically to refer to the next instruction
when each instruction is executed. This process continues till the program ends.
Process State
Throughout its existence, each process goes through various phases. The present state of the
process is defined by the process state.
Priority
The priority of a process determines how important it is to complete it.
Among all the processes, the one with the greatest priority receives the most CPU time.
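The PCB fields above can be sketched as a small record. This is only an illustrative model, not any real kernel's layout; the field names (`pid`, `state`, `program_counter`, `priority`, `registers`) are assumptions chosen to mirror the attributes just described.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a PCB record; field names are illustrative,
# not taken from any real operating system kernel.
@dataclass
class PCB:
    pid: int                  # unique process identifier
    state: str = "new"        # new / ready / running / waiting / terminated
    program_counter: int = 0  # address of the next instruction to execute
    priority: int = 0         # higher value = more important (an assumption)
    registers: dict = field(default_factory=dict)  # saved execution context

pcb = PCB(pid=42, priority=5)
pcb.state = "ready"           # the OS updates the PCB on each state transition
```

As the notes say, the OS would keep one such record per process and update it on every state transition.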
Important Notes
Each process’s PCB is stored in the main memory.
Each process has only one PCB associated with it.
All of the processes’ PCBs are listed in a linked list
Advantages:
1. Efficient process management: The process table and PCB provide an efficient
way to manage processes in an operating system. The process table contains all the
information about each process, while the PCB contains the current state of the
process, such as the program counter and CPU registers.
2. Resource management: The process table and PCB allow the operating system to
manage system resources, such as memory and CPU time, efficiently. By keeping
track of each process’s resource usage, the operating system can ensure that all
processes have access to the resources they need.
3. Process synchronization: The process table and PCB can be used to synchronize
processes in an operating system. The PCB contains information about each
process’s synchronization state, such as its waiting status and the resources it is
waiting for.
4. Process scheduling: The process table and PCB can be used to schedule processes
for execution. By keeping track of each process’s state and resource usage, the
operating system can determine which processes should be executed next.
Disadvantages:
1. Overhead: The process table and PCB can introduce overhead and reduce system
performance. The operating system must maintain the process table and PCB for
each process, which can consume system resources.
2. Complexity: The process table and PCB can increase system complexity and make
it more challenging to develop and maintain operating systems. The need to manage
and synchronize multiple processes can make it more difficult to design and
implement system features and ensure system stability.
3. Scalability: The process table and PCB may not scale well for large-scale systems
with many processes. As the number of processes increases, the process table and
PCB can become larger and more difficult to manage efficiently.
4. Security: The process table and PCB can introduce security risks if they are not
implemented correctly. Malicious programs can potentially access or modify the
process table and PCB to gain unauthorized access to system resources or cause
system instability.
Miscellaneous accounting and status data
This PCB field includes information about the amount of CPU used, time constraints, job or process number, etc. The PCB also stores the register contents, known as the execution context of the processor, at the moment the process was blocked from running. This saved execution context enables the operating system to restore a process's execution context when the process returns to the running state. When the process makes a transition from one state to another, the operating system updates the information in the process's PCB. The operating system maintains pointers to each process's PCB in a process table so that it can access the PCB quickly.
Process Queues
The operating system maintains a separate queue for each process state, and the PCB of a process is stored in the queue of its current state. When a process moves from one state to another, its PCB is unlinked from the old state's queue and added to the queue of the new state.
There are the following queues maintained by the Operating system.
1. Job Queue
Initially, all processes are stored in the job queue, which is maintained in secondary memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.
2. Ready Queue
The ready queue is maintained in primary memory. The short-term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
3. Waiting Queue
When a process needs an I/O operation to complete its execution, the OS changes its state from running to waiting. The context (PCB) associated with the process is stored in the waiting queue and will be used again by the processor once the process finishes its I/O.
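The queue transitions described above can be sketched with two FIFO queues. The process names and the running variable are illustrative; a real OS would move PCB structures, not strings.

```python
from collections import deque

# Illustrative sketch: each state has its own queue, and a transition
# unlinks a PCB from one queue and appends it to another.
ready_queue, waiting_queue = deque(), deque()

ready_queue.append("P1")          # P1 arrives in the ready queue
ready_queue.append("P2")

running = ready_queue.popleft()   # dispatcher picks P1 for the CPU
waiting_queue.append(running)     # P1 requests I/O -> moved to the waiting queue

ready_queue.append(waiting_queue.popleft())  # I/O done -> P1 back to ready
```

After these transitions the ready queue holds P2 followed by P1, matching the state changes traced in the text.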
1. Arrival Time
The time at which the process enters into the ready queue is called the arrival time.
2. Burst Time
The total amount of CPU time required to execute the whole process is called the burst time. It does not include the waiting time. Since it is difficult to know the execution time of a process before it actually runs, scheduling algorithms that depend on the burst time are hard to implement exactly in practice.
3. Completion Time
The Time at which the process enters into the completion state or the time at which the process
completes its execution, is called completion time.
4. Turnaround time
The total amount of time spent by the process from its arrival to its completion, is called
Turnaround time.
5. Waiting Time
The Total amount of time for which the process waits for the CPU to be assigned is called
waiting time.
6. Response Time
The difference between the arrival time and the time at which the process first gets the CPU is
called Response Time.
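The definitions above reduce to simple arithmetic. A minimal sketch, with illustrative function and parameter names:

```python
# Turnaround = completion - arrival
def turnaround_time(arrival, completion):
    return completion - arrival

# Waiting = turnaround - burst
def waiting_time(arrival, burst, completion):
    return turnaround_time(arrival, completion) - burst

# Response = time of first CPU allocation - arrival
def response_time(arrival, first_cpu_at):
    return first_cpu_at - arrival

# Example: a process arrives at t=0, first runs at t=2,
# needs 4 units of CPU, and finishes at t=9.
print(turnaround_time(0, 9), waiting_time(0, 4, 9), response_time(0, 2))
# → 9 5 2
```

Note that the waiting time counts only time spent ready but not running, which is why the burst time is subtracted from the turnaround time.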
Process scheduling is responsible for selecting a process for the processor based on a scheduling method, as well as removing a process from the processor. It is a crucial component of a multiprogramming operating system. Process scheduling makes use of a variety of scheduling queues. The scheduler's purpose is to implement the virtual machine abstraction, so that each process appears to the user to be running on its own computer.
The central processing unit (CPU) allocation to processes is controlled by the short-
term scheduler, also referred to as the CPU scheduler. The short-term scheduler
specifically carries out the following duties:
Process Selection: The scheduler chooses a process from the list of available
processes in the ready queue, which is where all of the processes are waiting to be
run. A scheduling algorithm, such as First-Come, First-Served (FCFS), Shortest Job
First (SJF), Priority Scheduling, or Round Robin, is typically used to make the
selection.
CPU Allocation: The scheduler assigns the CPU to a process after it has been
chosen, enabling it to carry out its instructions.
Preemptive Scheduling: The scheduler can also preempt a running process,
interrupting its execution and returning the CPU to the ready queue if a higher-
priority process becomes available.
Context Switching: When a process is switched out, the scheduler saves the context
of the process, including its register values and program counter, to memory. When
the process is later resumed, the scheduler restores this saved context to the CPU.
Process Ageing: Process aging is a function of the scheduler that raises a process’
priority when it has been sitting in the ready queue for a long time. This aids in
avoiding processes becoming locked in an endless waiting state.
Process synchronization and coordination: In order to prevent deadlocks, race
situations, and other synchronization problems, the scheduler also synchronizes
shared resource access among processes and coordinates their execution and
communication.
Load balancing: The scheduler also distributes the workload among multiple
processors or cores to optimize the system’s performance.
Power management: The scheduler also manages the power consumption by
adjusting the CPU frequency and turning off the cores that are not currently in use.
Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up. It is helpful in
maintaining a perfect balance between the I/O bound and the CPU bound. It reduces the
degree of multiprogramming.
Responsibilities of Medium Term Scheduler
The medium-term scheduler’s responsibility for ensuring equitable resource
distribution among all processes is one of its main responsibilities. This is necessary
to guarantee that every process has an equal chance to run and prevent any one
process from using up all of the system resources.
The medium-term scheduler’s responsibility for ensuring effective process
execution is another crucial task. This may entail modifying the priority of processes
based on their present condition or resource utilization, or modifying the resource
distribution to various processes based on their present requirements.
So, the operating system’s medium-term scheduler controls the scheduling and resource
distribution of processes that are blocked or waiting. It aids in ensuring that resources
are distributed equally throughout all of the processes and that they are carried out
effectively.
Functions
A medium-term scheduler’s main responsibilities include:
Managing blocked or waiting-for processes: Choosing which stalled or waiting-
for processes should be unblocked and permitted to continue running is the
responsibility of the medium-term scheduler. This may entail modifying the priority
of processes based on their present condition or resource utilization, or modifying
the resource distribution to various processes based on their present requirements.
Managing resource usage: The medium-term scheduler is in charge of keeping
track of how much memory, CPU, and other resources are being utilized by the
different processes and modifying the resource allocation as necessary to guarantee
efficient and equitable use of resources.
Process prioritization: The medium-term scheduler is in charge of prioritizing
processes based on a predetermined set of guidelines and criteria. This may entail
modifying the priority of processes based on their present condition or resource
utilization, or modifying the resource distribution to various processes based on their
present requirements.
Process preemption: The medium-term scheduler has the ability to halt the
execution of lower-priority processes that have already consumed their time slices in
order to make room for higher-priority or more crucial activities.
Aging of process: The medium-term scheduler can adjust the priority of a process
based on how long it has been waiting for execution. This is known as the aging of
process, which ensures that processes that have been waiting for a long time are
given priority over newer processes.
Memory Management: The medium-term scheduler can also be responsible for
memory management, which involves allocating memory to processes and ensuring
that processes are not using more memory than they are supposed to.
Security: A medium-term scheduler can assist guarantee that system resources are
not abused or misused by regulating the resource utilization of blocked or waiting-
for processes, adding an extra layer of security to the system.
Limitations:
Limited to batch systems: Medium-term scheduler is not suitable for real-time
systems, as it is not able to meet the strict deadlines and timings required for real-
time applications.
Overhead: Managing the scheduling and resource allocation of blocked or waiting
processes can add significant overhead to the system, which can negatively impact
overall performance.
Comparison among Schedulers
Long Term Scheduler | Short Term Scheduler | Medium Term Scheduler
It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
It is almost absent or minimal in a time-sharing system. | It is also minimal in a time-sharing system. | It is a part of the time-sharing system.
Following are the reasons that describe the need for context switching in the Operating system.
1. One process cannot switch directly to another in the system. Context switching helps the operating system switch between multiple processes so that each can use the CPU to accomplish its task, storing each process's context so that its service can be resumed at the same point later. If the currently running process's context were not stored, its data would be lost when switching between processes.
2. If a high-priority process enters the ready queue, the currently running process is stopped so that the high-priority process can complete its task in the system.
3. If a running process requires I/O resources, the current process is switched out so that another process can use the CPU. When the I/O requirement is met, the old process moves back to the ready state to wait for its turn on the CPU. Context switching stores the state of the process so that it can resume its task; otherwise, the process would need to restart execution from the beginning.
4. If an interrupt occurs while a process is running, the process status, including its registers, is saved using context switching. After the interrupt is handled, the process moves from the wait state to the ready state and later resumes execution at the exact point where the operating system interrupted it.
5. Context switching allows a single CPU to handle multiple process requests concurrently without the need for any additional processors.
Context switching is triggered in the following situations:
1. Interrupts
2. Multitasking
3. Kernel/User switch
Interrupts: When the CPU requests data to be read from a disk and an interrupt is raised, context switching automatically transfers control to the interrupt-handling part of the system, which requires little time to service the interrupt.
Multitasking: Context switching is the characteristic of multitasking that allows a process to be switched out of the CPU so that another process can run. When switching processes, the old state is saved so that execution can resume at the same point later.
Kernel/User Switch: A context switch is also used when the operating system switches between user mode and kernel mode.
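What a context switch saves and restores can be sketched in a few lines. The "CPU" here is just a dictionary of register values and the PCBs are plain dictionaries; this is a toy model of the mechanism described above, not how any real kernel stores registers.

```python
# Minimal sketch of a context switch: save the old process's execution
# context (register values), then load the new process's saved context.
def context_switch(cpu, old_pcb, new_pcb):
    old_pcb["context"] = dict(cpu)    # save the outgoing process's registers
    cpu.clear()
    cpu.update(new_pcb["context"])    # restore the incoming process's registers

cpu = {"pc": 104, "acc": 7}           # illustrative register set
p1 = {"pid": 1, "context": {}}
p2 = {"pid": 2, "context": {"pc": 500, "acc": 0}}

context_switch(cpu, p1, p2)           # P1 switched out, P2 switched in
```

After the switch, P1's PCB holds exactly the register values the CPU had, so P1 can later resume at the same point, which is the whole purpose of saving the execution context.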
Objectives of CPU Scheduling
Maximize CPU utilization, i.e., keep the CPU as busy as possible.
Allocate CPU time fairly to every process.
Shortest Job First (SJF) is a scheduling algorithm that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time of processes waiting to be executed.
Characteristics of SJF:
Shortest Job first has the advantage of having a minimum average waiting time
among all operating system scheduling algorithms.
Each job carries with it a unit of time (its burst time) required to complete.
It may cause starvation if shorter processes keep coming. This problem can be
solved using the concept of ageing.
Advantages of Shortest Job first:
Since SJF reduces the average waiting time, it is better than the First Come First Serve scheduling algorithm.
SJF is generally used for long-term scheduling.
Disadvantages of SJF:
One of the demerit SJF has is starvation.
Many times it becomes complicated to predict the length of the upcoming CPU
request
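A non-preemptive SJF scheduler can be sketched in a few lines: at each step, among the processes that have already arrived, run the one with the smallest burst time to completion. The input format, a list of `(name, arrival, burst)` tuples, is an assumption for this sketch.

```python
# Non-preemptive SJF sketch: repeatedly pick the arrived process with
# the smallest burst time and run it to completion.
def sjf(processes):                               # (name, arrival, burst)
    remaining = sorted(processes, key=lambda p: p[1])  # order by arrival
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                             # CPU idles until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])      # shortest burst first
        remaining.remove(job)
        time += job[2]                            # run to completion
        order.append(job[0])
    return order

print(sjf([("A", 0, 7), ("B", 1, 3), ("C", 2, 1)]))
# → ['A', 'C', 'B']
```

Notice how C, despite arriving last, is run before B once A finishes: this preference for short jobs is exactly what lowers the average waiting time, and also what can starve long jobs if short ones keep arriving.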
Longest Job First(LJF) scheduling process is just opposite of shortest job first (SJF),
as the name suggests this algorithm is based upon the fact that the process with the
largest burst time is processed first. Longest Job First is non-preemptive in nature.
Characteristics of LJF:
Among all the processes waiting in a waiting queue, CPU is always assigned to the
process having largest burst time.
If two processes have the same burst time then the tie is broken using FCFS i.e. the
process that arrived first is processed first.
LJF is non-preemptive by default; its preemptive variant is Longest Remaining Time First (LRTF).
Advantages of LJF:
No other task can be scheduled until the longest job or process executes completely.
All the jobs or processes finish at the same time approximately.
Disadvantages of LJF:
Generally, the LJF algorithm gives a very high average waiting time and average
turn-around time for a given set of processes.
This may lead to convoy effect.
4. Priority Scheduling:
In priority scheduling, the CPU is allocated to the process with the highest priority. It can be preemptive or non-preemptive, and processes with equal priority are typically served in FCFS order.
5. Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned
a fixed time slot. It is the preemptive version of First come First Serve CPU Scheduling
algorithm. Round Robin CPU Algorithm generally focuses on Time Sharing technique.
Characteristics of Round robin:
It’s simple, easy to use, and starvation-free as all processes get the balanced CPU
allocation.
One of the most widely used methods in CPU scheduling as a core.
It is considered preemptive as the processes are given to the CPU for a very limited
time.
Advantages of Round robin:
Round robin seems to be fair as every process gets an equal share of CPU.
The newly created process is added to the end of the ready queue.
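Round Robin can be sketched with a queue and a fixed time quantum. For simplicity this sketch assumes all processes arrive at t=0; the input format (a mapping of process name to burst time) is also an assumption.

```python
from collections import deque

# Round Robin sketch: each process runs for at most one time quantum,
# then is preempted and re-queued at the back if work remains.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())             # (name, remaining burst)
    finish, time = {}, 0
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)              # run one quantum (or less)
        time += run
        if left > run:
            queue.append((name, left - run))  # preempted -> back of the queue
        else:
            finish[name] = time               # record completion time
    return finish

print(round_robin({"P1": 5, "P2": 3}, quantum=2))
# → {'P2': 7, 'P1': 8}
```

The cyclic re-queuing is what makes Round Robin starvation-free: every process is guaranteed CPU time within one full cycle of the ready queue.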
6. Shortest Remaining Time First:
Shortest remaining time first is the preemptive version of the Shortest job first which
we have discussed earlier where the processor is allocated to the job closest to
completion. In SRTF the process with the smallest amount of time remaining until
completion is selected to execute.
Characteristics of Shortest remaining time first:
SRTF algorithm makes the processing of the jobs faster than SJF algorithm, given
it’s overhead charges are not counted.
The context switch is done a lot more times in SRTF than in SJF and consumes the
CPU’s valuable time for processing. This adds up to its processing time and
diminishes its advantage of fast processing.
Advantages of SRTF:
In SRTF the short processes are handled very fast.
The system also requires very little overhead since it only makes a decision when a
process completes or a new process is added.
Disadvantages of SRTF:
Like the shortest job first, it also has the potential for process starvation.
Long processes may be held off indefinitely if short processes are continually
added.
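SRTF can be sketched by simulating one time unit at a time and always running the arrived process with the least remaining burst; re-evaluating every tick is what makes it preemptive. The input format, `(name, arrival, burst)` tuples, is an assumption for this sketch.

```python
# Preemptive SRTF sketch: every time unit, pick the arrived process
# with the smallest remaining burst (so a new short arrival preempts).
def srtf(processes):                           # (name, arrival, burst)
    left = {n: b for n, a, b in processes}     # remaining burst per process
    arrival = {n: a for n, a, b in processes}
    time, completion = 0, {}
    while left:
        ready = [n for n in left if arrival[n] <= time]
        if not ready:                          # CPU idles until next arrival
            time += 1
            continue
        n = min(ready, key=lambda x: left[x])  # shortest remaining time
        left[n] -= 1                           # run for one time unit
        time += 1
        if left[n] == 0:
            del left[n]
            completion[n] = time
    return completion

print(srtf([("A", 0, 5), ("B", 1, 2)]))
# → {'B': 3, 'A': 7}
```

Here B arrives at t=1 with a shorter remaining time than A and immediately preempts it, which is the behavior SJF (non-preemptive) cannot provide; the cost is one extra context switch per preemption.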
The longest remaining time first is a preemptive version of the longest job first
scheduling algorithm. This scheduling algorithm is used by the operating system to
program incoming processes for use in a systematic way. This algorithm schedules
those processes first which have the longest processing time remaining for completion.
Characteristics of longest remaining time first:
Among all the processes waiting in a waiting queue, the CPU is always assigned to
the process having the largest burst time.
If two processes have the same burst time then the tie is broken using FCFS i.e. the
process that arrived first is processed first.
LRTF is the preemptive counterpart of LJF CPU scheduling.
Advantages of LRTF:
No other process can execute until the longest task executes completely.
All the jobs or processes finish at the same time approximately.
Disadvantages of LRTF:
This algorithm gives a very high average waiting time and average turn-around
time for a given set of processes.
This may lead to a convoy effect.
Thread in Operating System
Multi-Threading
A thread is also known as a lightweight process. The idea is to achieve parallelism by
dividing a process into multiple threads. For example, in a browser, multiple tabs can be
different threads. MS Word uses multiple threads: one thread to format the text, another
thread to process inputs, etc. More advantages of multithreading are discussed below.
Multithreading is a technique used in operating systems to improve the performance
and responsiveness of computer systems. Multithreading allows multiple threads (i.e.,
lightweight processes) to share the same resources of a single process, such as the CPU,
memory, and I/O devices.
Process | Thread
It takes more time for creation. | It takes less time for creation.
A process has its own Process Control Block, Stack, and Address Space. | A thread has its parent process's PCB, its own Thread Control Block and Stack, and shares a common address space with the other threads of the process.
Advantages of Thread
Responsiveness: If the process is divided into multiple threads, if one thread
completes its execution, then its output can be immediately returned.
Faster context switch: Context switch time between threads is lower compared to
the process context switch. Process context switching requires more overhead from
the CPU.
Effective utilization of multiprocessor system: If we have multiple threads in a
single process, then we can schedule multiple threads on multiple processors. This
will make process execution faster.
Resource sharing: Resources like code, data, and files can be shared among all
threads within a process. Note: Stacks and registers can’t be shared among the
threads. Each thread has its own stack and registers.
Communication: Communication between multiple threads is easier, as the threads
share a common address space. while in the process we have to follow some specific
communication techniques for communication between the two processes.
Enhanced throughput of the system: If a process is divided into multiple threads,
and each thread function is considered as one job, then the number of jobs
completed per unit of time is increased, thus increasing the throughput of the
system.
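The resource-sharing and communication advantages above can be seen in a short sketch: the threads below all write to one shared list, something separate processes could not do without explicit inter-process communication. The worker function and names are illustrative.

```python
import threading

# Threads of one process share its address space: all workers append
# to the same list. A lock synchronizes access to the shared data.
results = []
lock = threading.Lock()

def worker(name):
    with lock:                       # avoid a race on the shared list
        results.append(name)

threads = [threading.Thread(target=worker, args=(f"T{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()                         # wait for all threads to finish

print(sorted(results))               # every thread wrote into shared memory
```

Note that each thread still gets its own stack and registers, as the text says; only the heap, globals, code, and open files are shared.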
Types of Threads
Threads are of two types. These are described below.
The main drawback of single-threading systems is that only one task can be performed at a time, so to overcome this drawback there is multithreading, which allows multiple tasks to be performed.
For example:
In the above example, client1, client2, and client3 access the web server without any waiting. In multithreading, several tasks can run at the same time.
There exist three established multithreading models classifying these relationships:
The disadvantage of this model is that since there is only one kernel-level thread scheduled at any given time, it cannot take advantage of the hardware acceleration offered by multithreaded processes or multi-processor systems. In this model, all thread management is done in user space. If blocking occurs, this model blocks the whole system.
In the above figure, the many-to-one model associates all user-level threads with a single kernel-level thread.