
PROCESS MANAGEMENT

Introduction

• A program does nothing unless its instructions are executed by a CPU.
• A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
• More than one process may exist in the system, and they may require the same resource at the same time.
• Therefore, the operating system has to manage all the processes and resources in a convenient and efficient way.
• Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.
• The operating system is responsible for the following activities in connection with process management:
  • Scheduling processes and threads on the CPUs.
  • Creating and deleting both user and system processes.
  • Suspending and resuming processes.
  • Providing mechanisms for process synchronization.
  • Providing mechanisms for process communication.
Attributes of a process

• The attributes of a process are used by the operating system to create the process control block (PCB) for each process.
• This is also called the context of the process.
• The attributes which are stored in the PCB are described below.
1. Process ID

• When a process is created, a unique ID is assigned to it, which is used to identify the process uniquely in the system.

2. Program counter

• The program counter stores the address of the instruction at which the process was suspended, i.e., the next instruction to be executed.
• The CPU uses this address when the execution of the process is resumed.

3. Process State

• The process, from its creation to its completion, goes through various states such as new, ready, running and waiting. These are discussed in detail later.

4. Priority

• Every process has its own priority. The process with the highest priority among the ready processes gets the CPU first. The priority is also stored in the process control block.

5. General Purpose Registers

• Every process has its own set of registers, which hold the data generated during the execution of the process.

6. List of open files

• During its execution, every process uses some files which need to be present in the main memory. The OS maintains the list of open files in the PCB.

7. List of open devices

• The OS also maintains the list of all open devices which are used during the execution of the process. (A minimal C sketch of a PCB with these fields follows.)
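The attributes above can be pictured as fields of a single record. The following is a minimal, illustrative sketch of such a structure in C; the field names, types and array sizes are assumptions made for teaching purposes, not the layout used by any particular operating system.

/* Illustrative sketch of a process control block (PCB).
 * Field names and sizes are assumptions, not a real OS layout. */
#include <stdint.h>

#define MAX_OPEN_FILES   16
#define MAX_OPEN_DEVICES  8
#define NUM_GP_REGS       8

struct pcb {
    int       pid;                            /* 1. unique process ID                     */
    uintptr_t program_counter;                /* 2. where to resume execution             */
    int       state;                          /* 3. process state (see the enum sketched later) */
    int       priority;                       /* 4. scheduling priority                   */
    uintptr_t regs[NUM_GP_REGS];              /* 5. saved general purpose registers       */
    int       open_files[MAX_OPEN_FILES];     /* 6. list of open files                    */
    int       open_devices[MAX_OPEN_DEVICES]; /* 7. list of open devices                  */
};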
Process States

State Diagram

• The process, from its creation to its completion, passes through various states. The minimum number of states is five.
• The names of the states are not standardized, although a process may be in one of the following states during execution.
1. New

• A program which is going to be picked up by the OS into the main memory is called a new process.

2. Ready

• Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned.
• The OS picks new processes from the secondary memory and puts them in the main memory.
• The processes which are ready for execution and reside in the main memory are called ready state processes.
• There can be many processes present in the ready state.

3. Running

• One of the processes from the ready state will be chosen by the OS depending upon the scheduling algorithm.
• Hence, if we have only one CPU in the system, the number of running processes at any particular time will always be one.
• If we have n processors in the system, then we can have n processes running simultaneously.

4. Block or wait

• From the running state, a process can make the transition to the block or wait state depending upon the scheduling algorithm or the intrinsic behavior of the process.
• When a process waits for a certain resource to be assigned or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to other processes.

5. Completion or termination

• When a process finishes its execution, it comes to the termination state.
• The entire context of the process (process control block) is deleted and the process is terminated by the operating system.

6. Suspend ready

• A process in the ready state which is moved to secondary memory from the main memory due to lack of resources (mainly primary memory) is said to be in the suspend ready state.
• If the main memory is full and a higher priority process arrives for execution, the OS has to make room for it in the main memory by moving a lower priority process out into the secondary memory.
• The suspend ready processes remain in the secondary memory until main memory becomes available.

7. Suspend wait

• Instead of removing a process from the ready queue, it is better to remove a blocked process which is waiting for some resource in the main memory.
• Since it is already waiting for some resource to become available, it is better if it waits in the secondary memory and makes room for the higher priority process.
• These processes complete their execution once the main memory becomes available and their wait is finished. (A small sketch of these states as a C enumeration follows.)
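The seven states described above can be captured in a small C enumeration. This is a teaching sketch only; the identifiers and the transition helper are assumptions, not the data structures of a real kernel.

/* Sketch of the process states described above (names are illustrative). */
enum proc_state {
    STATE_NEW,            /* being loaded by the OS                   */
    STATE_READY,          /* in main memory, waiting for the CPU      */
    STATE_RUNNING,        /* currently executing on a CPU             */
    STATE_WAITING,        /* blocked, waiting for a resource or IO    */
    STATE_TERMINATED,     /* finished, PCB about to be deleted        */
    STATE_SUSPEND_READY,  /* ready, but swapped to secondary memory   */
    STATE_SUSPEND_WAIT    /* blocked, and swapped to secondary memory */
};

/* Example transition: a running process that requests IO moves to waiting. */
static enum proc_state on_io_request(enum proc_state s)
{
    return (s == STATE_RUNNING) ? STATE_WAITING : s;
}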
Operations on the Process
1. Creation

• Once the process is created, it enters the ready queue (main memory) and is ready for execution.

2. Scheduling

• Out of the many processes present in the ready queue, the operating system chooses one process and starts executing it.
• Selecting the process which is to be executed next is known as scheduling.

3. Execution

• Once the process is scheduled for execution, the processor starts executing it.
• A process may move to the blocked or wait state during execution; in that case the processor starts executing other processes.

4. Deletion/killing

• Once the purpose of the process is over, the OS kills the process. The context of the process (PCB) is deleted and the process is terminated by the operating system. (A minimal POSIX fork/exec/wait sketch in C follows.)
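On POSIX systems these operations can be observed directly through the standard fork(), exec() and wait() system calls. The sketch below is a minimal illustration; the program it runs, /bin/ls, is only an example choice.

/* Minimal POSIX example: create a child process, let it run a program,
 * and reap it when it terminates. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* creation: the child enters the ready queue */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                  /* child: execution */
        execl("/bin/ls", "ls", (char *)NULL);
        perror("execl");             /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    }
    int status;
    waitpid(pid, &status, 0);        /* deletion/killing: parent reaps the terminated child */
    printf("child %d finished with status %d\n", (int)pid, WEXITSTATUS(status));
    return EXIT_SUCCESS;
}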
Process Schedulers
1. Long term scheduler

• The long term scheduler is also known as the job scheduler.
• It chooses processes from the pool (secondary memory) and keeps them in the ready queue maintained in the primary memory.
• The long term scheduler mainly controls the degree of multiprogramming.
• The purpose of the long term scheduler is to choose a good mix of IO-bound and CPU-bound processes among the jobs present in the pool.
• If the job scheduler chooses mostly IO-bound processes, then all of the jobs may reside in the blocked state most of the time and the CPU will remain idle, which defeats the purpose of multiprogramming.
• Therefore, the job of the long term scheduler is very critical and may affect the system for a long time.
2. Short term scheduler

• The short term scheduler is also known as the CPU scheduler.
• It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
• A scheduling algorithm is used to select which job is going to be dispatched for execution.
• The job of the short term scheduler can be very critical in the sense that if it selects a job whose CPU burst time is very high, then all the jobs after it will have to wait in the ready queue for a very long time.
• This problem is called starvation, and it may arise if the short term scheduler makes poor choices while selecting jobs.
3. Medium term scheduler

• The medium term scheduler takes care of the swapped-out processes.
• If a running process needs some IO time for its completion, its state has to change from running to waiting.
• The medium term scheduler is used for this purpose.
• It removes such a process from main memory to make room for other processes.
• Such processes are the swapped-out processes, and this procedure is called swapping.
• The medium term scheduler is responsible for suspending and resuming processes.
• It reduces the degree of multiprogramming.
• Swapping is necessary to maintain a good mix of processes in the ready queue.
Process Queues

• The operating system manages various types of queues, one for each of the process states.
• The PCB of a process is stored in the queue corresponding to its current state.
• If a process moves from one state to another, its PCB is unlinked from the corresponding queue and added to the queue of the state into which the transition is made.
1. Job Queue

• In the beginning, all the processes are stored in the job queue.
• It is maintained in the secondary memory.
• The long term scheduler (job scheduler) picks some of the jobs and puts them in the primary memory.

2. Ready Queue

• The ready queue is maintained in primary memory.
• The short term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
3. Waiting Queue

• When a process needs some IO operation in order to complete its execution, the OS changes the state of the process from running to waiting.
• The context (PCB) associated with the process is stored in the waiting queue and is used by the processor when the process finishes its IO. (A minimal linked-queue sketch in C follows.)
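Moving a PCB from one state queue to another is essentially unlinking a node from one linked list and appending it to another. The sketch below illustrates that idea under that assumption; the struct and function names are invented for the example and do not belong to any real kernel.

/* Illustrative singly linked queues of PCB nodes; names are invented. */
#include <stddef.h>

struct pcb_node {
    int              pid;
    struct pcb_node *next;
};

struct queue {
    struct pcb_node *head;
    struct pcb_node *tail;
};

static void enqueue(struct queue *q, struct pcb_node *n)
{
    n->next = NULL;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
}

static struct pcb_node *dequeue(struct queue *q)
{
    struct pcb_node *n = q->head;
    if (n) {
        q->head = n->next;
        if (!q->head) q->tail = NULL;
    }
    return n;
}

/* State transition: unlink a PCB from one queue and link it into another,
 * e.g. from the ready queue to the waiting queue when IO is requested. */
static void move_between_queues(struct queue *from, struct queue *to)
{
    struct pcb_node *n = dequeue(from);
    if (n) enqueue(to, n);
}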
Various Times related to the Process
1. Arrival Time

• The time at which the process enters the ready queue is called the arrival time.

2. Burst Time

• The total amount of CPU time required to execute the whole process is called the burst time.
• This does not include the waiting time.
• It is difficult to know the execution time of a process before actually executing it; hence scheduling schemes based purely on burst time are hard to implement exactly in practice.

3. Completion Time

• The time at which the process enters the completion state, i.e., the time at which the process completes its execution, is called the completion time.

4. Turnaround time

• The total amount of time spent by the process from its arrival to its completion is called the turnaround time.

5. Waiting Time

• The total amount of time for which the process waits for the CPU to be assigned is called the waiting time.

6. Response Time

• The difference between the arrival time and the time at which the process first gets the CPU is called the response time. (A small worked example follows.)
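As a small, hypothetical example of how these times relate: suppose a process arrives at time 0, first gets the CPU at time 2, needs a total burst of 5 units, and finishes at time 9. Then its turnaround time is completion − arrival = 9 − 0 = 9, its waiting time is turnaround − burst = 9 − 5 = 4, and its response time is first-CPU − arrival = 2 − 0 = 2.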
CPU Scheduling

• In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to be done, the CPU remains idle.
• This is an overhead, since it wastes time and can cause starvation.
• In multiprogramming systems, however, the CPU does not remain idle during the waiting time of a process; it starts executing other processes.
• The operating system has to decide which process the CPU will be given to.
• In multiprogramming systems, the operating system schedules the processes on the CPU so as to maximize its utilization; this procedure is called CPU scheduling.
• The operating system uses various scheduling algorithms to schedule the processes.
• It is the task of the short term scheduler to schedule the CPU for the processes present in the job pool.
• Whenever the running process requests some IO operation, the short term scheduler saves the current context of the process (in its PCB) and changes its state from running to waiting.
• While that process is in the waiting state, the short term scheduler picks another process from the ready queue and assigns the CPU to it.
• This procedure is called context switching. (A toy C sketch of this switching follows.)
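The essence of context switching is that a process's progress is saved in its PCB so it can resume exactly where it stopped. The toy program below only mimics that idea: the "context" is a single saved counter, and all names and numbers are invented for the sketch; it is not how a real kernel switches contexts.

/* Toy illustration of context switching: the "context" here is just a saved
 * instruction counter, not real CPU state. Names are invented for the sketch. */
#include <stdio.h>

struct toy_pcb {
    const char *name;
    int         saved_pc;   /* stand-in for the saved program counter */
};

/* Run a process for a few "instructions", then switch it out. */
static void run_slice(struct toy_pcb *p, int instructions)
{
    for (int i = 0; i < instructions; i++)
        p->saved_pc++;                               /* pretend to execute one instruction */
    printf("switching out %s at pc=%d\n", p->name, p->saved_pc);  /* context saved in PCB */
}

int main(void)
{
    struct toy_pcb a = {"A", 0}, b = {"B", 0};
    run_slice(&a, 3);   /* A runs, then its context (saved_pc) is stored */
    run_slice(&b, 5);   /* B is dispatched while A waits                 */
    run_slice(&a, 2);   /* A resumes exactly where it left off (pc = 3)  */
    return 0;
}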
What is saved in the Process Control Block?

• The operating system maintains a process control block during the lifetime of the process.
• The process control block is deleted when the process is terminated or killed.
• The information saved in the process control block (the attributes described earlier, such as the process state, program counter, registers and open files) changes with the state of the process.
Why do we need Scheduling?

• In multiprogramming, if the long term scheduler picks more I/O-bound processes, then most of the time the CPU remains idle.
• The task of the operating system is to optimize the utilization of resources.
• If most of the running processes change their state from running to waiting, there may always be a possibility of deadlock in the system.
• Hence, to reduce this overhead, the OS needs to schedule the jobs so as to get optimal utilization of the CPU and to avoid the possibility of deadlock.
Scheduling Algorithms

• There are various algorithms which are used by the operating system to schedule the processes on the processor in an efficient way.
The Purpose of a Scheduling algorithm

1. Maximum CPU utilization
2. Fair allocation of CPU
3. Maximum throughput
4. Minimum turnaround time
5. Minimum waiting time
6. Minimum response time
1. First Come First Serve

• It is the simplest algorithm to implement.
• The process with the earliest arrival time gets the CPU first.
• The lower the arrival time, the sooner the process gets the CPU.
• It is a non-preemptive type of scheduling. (A minimal FCFS calculation in C follows.)
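The small self-contained program below illustrates FCFS by computing completion, turnaround and waiting times for three hypothetical processes that are already ordered by arrival time; the process names and times are invented example data.

/* FCFS illustration: processes are served strictly in arrival order.
 * The three processes and their times are invented example data. */
#include <stdio.h>

struct proc { const char *name; int arrival; int burst; };

int main(void)
{
    struct proc p[] = { {"P1", 0, 4}, {"P2", 1, 3}, {"P3", 2, 1} }; /* sorted by arrival */
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {
        if (clock < p[i].arrival)       /* CPU idles until the process arrives */
            clock = p[i].arrival;
        clock += p[i].burst;            /* non-preemptive: run to completion   */
        int turnaround = clock - p[i].arrival;
        int waiting    = turnaround - p[i].burst;
        printf("%s: completion=%d turnaround=%d waiting=%d\n",
               p[i].name, clock, turnaround, waiting);
    }
    return 0;
}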
2. Round Robin

• In the Round Robin scheduling algorithm, the OS defines a time quantum (time slice).
• All the processes are executed in a cyclic way.
• Each process gets the CPU for a small amount of time (the time quantum) and then goes back to the ready queue to wait for its next turn.
• It is a preemptive type of scheduling. (A minimal Round Robin simulation in C follows.)
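The sketch below simulates Round Robin with an invented time quantum of 2 and invented burst times; it assumes, for simplicity, that all processes are already in the ready queue at time 0.

/* Round Robin illustration with an invented time quantum of 2.
 * All processes are assumed to be in the ready queue at time 0. */
#include <stdio.h>

int main(void)
{
    const char *name[] = { "P1", "P2", "P3" };
    int remaining[]    = { 4, 3, 1 };           /* invented burst times */
    int n = 3, quantum = 2, clock = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;    /* already finished            */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;                     /* run for one time slice      */
            remaining[i] -= slice;              /* preempt, back to the queue  */
            if (remaining[i] == 0) {
                printf("%s completes at time %d\n", name[i], clock);
                left--;
            }
        }
    }
    return 0;
}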
3. Shortest Job First

• The job with the shortest burst time gets the CPU first.
• The lower the burst time, the sooner the process gets the CPU.
• It is a non-preemptive type of scheduling.
4. Shortest Remaining Time First

• It is the preemptive form of SJF.
• In this algorithm, the OS schedules the jobs according to their remaining execution time.
5. Priority based scheduling

• In this algorithm, a priority is assigned to each of the processes.
• The higher the priority, the sooner the process gets the CPU.
• If the priorities of two processes are the same, then they are scheduled according to their arrival time.
6. Highest Response Ratio Next

• In this scheduling algorithm, the process with the highest response ratio is scheduled next.
• This reduces starvation in the system. (The standard response ratio formula is given below.)
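The slides do not state the formula explicitly; the standard response ratio used by HRRN is:

Response Ratio = (Waiting Time + Burst Time) / Burst Time

so the ratio of a process grows the longer it waits, and even a long job eventually overtakes shorter newcomers, which is why HRRN reduces starvation.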
• There are two types of scheduling: preemptive scheduling and non-preemptive scheduling.
• Preemptive scheduling allows a running process to be interrupted by a higher priority process, whereas in non-preemptive scheduling any new process has to wait until the running process finishes its CPU cycle.
THREADS
What is a Thread?

• A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, its own system registers which hold its current working variables, and its own stack which contains the execution history.
• A thread is a single sequential flow of execution of the tasks of a process.
• It is also known as a thread of execution or thread of control.
• A thread shares some information with its peer threads, such as the code segment, the data segment and open files.
• When one thread alters a memory item in a shared segment, all other threads see that change.
• A thread is also called a lightweight process.
• Threads provide a way to improve application performance through parallelism. (Parallelism is the ability to execute independent tasks of a program at the same instant of time.)
• Threads represent a software approach to improving the performance of an operating system by reducing the overhead; in other respects a thread is equivalent to a classical process.
• Each thread belongs to exactly one process, and no thread can exist outside a process.
• Each thread represents a separate flow of control.
• Threads have been successfully used in implementing network servers and web servers.
• They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.
• The following figure shows the working of a single-threaded and a multithreaded process. (A minimal POSIX threads example in C is shown below.)
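As a minimal concrete illustration of threads sharing the data segment of one process, the sketch below uses the standard POSIX threads (pthreads) API; the shared counter, the loop count and the use of exactly two threads are invented for the example. On Linux it would typically be built with gcc file.c -pthread.

/* Two POSIX threads of the same process incrementing a shared counter. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                          /* shared data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* peer threads see every update */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);      /* each thread: own stack and PC  */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);           /* 200000: both updated shared data */
    return 0;
}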
Difference between Process and Thread

1. Process: A process is heavyweight, or resource intensive.
   Thread: A thread is lightweight, taking fewer resources than a process.
2. Process: Process switching needs interaction with the operating system.
   Thread: Thread switching does not need to interact with the operating system.
3. Process: In multiple processing environments, each process executes the same code but has its own memory and file resources.
   Thread: All threads can share the same set of open files and child processes.
4. Process: If one process is blocked, then no other process can execute until the first process is unblocked.
   Thread: While one thread is blocked and waiting, a second thread in the same task can run.
5. Process: Multiple processes without using threads use more resources.
   Thread: Multithreaded processes use fewer resources.
6. Process: In multiple processes, each process operates independently of the others.
   Thread: One thread can read, write or change another thread's data.
Advantages of Thread

• Threads minimize the context switching time. (Context switching time is the time taken to switch between two processes.)
• Use of threads provides concurrency within a process. (Concurrency is the execution of multiple instruction sequences at the same time.)
• Threads allow efficient communication.
• It is more economical to create and context-switch threads than processes.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread

• User Level Threads − user-managed threads.
• Kernel Level Threads − threads managed by the operating system, acting on the kernel, the operating system core.
User Level Threads

• User-level threads are managed entirely by user-level libraries or applications, without direct involvement from the operating system kernel.
• In this case, the kernel is not aware of the existence of threads.
• The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.
• The application starts with a single thread.
Advantages

• Thread switching does not require kernel mode privileges.
• User level threads can run on any operating system.
• Scheduling can be application specific with user level threads.
• User level threads are fast to create and manage.
Disadvantages

• In a typical operating system, most system calls are blocking (the caller must wait until the action is completed), so one blocking call can stall the whole process.
• A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads

• In this case, thread management is done by the kernel.
• There is no thread management code in the application area.
• Kernel threads are supported directly by the operating system.
• Any application can be programmed to be multithreaded.
• All of the threads within an application are supported within a single process.
• The kernel maintains context information for the process as a whole and for individual threads within the process.
• Scheduling by the kernel is done on a thread basis.
• The kernel performs thread creation, scheduling and management in kernel space.
• Kernel threads are generally slower to create and manage than user threads.
Advantages

• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages

• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
Multithreading Models

• Some operating systems provide a combined user level thread and kernel level thread facility.
• Solaris is a good example of this combined approach.
• In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process.
• There are three multithreading models:
  • Many-to-many relationship.
  • Many-to-one relationship.
  • One-to-one relationship.
Many to Many Model

• The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
• The following diagram shows the many-to-many threading model, where 6 user level threads are multiplexed onto 6 kernel level threads.
• In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.
• This model provides the best level of concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model

• The many-to-one model maps many user level threads to one kernel-level thread.
• Thread management is done in user space by the thread library.
• When a thread makes a blocking system call, the entire process is blocked.
• Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
• If the user-level thread library is implemented on an operating system that does not support kernel threads, the many-to-one model is used.
One to One Model

• There is a one-to-one relationship between each user-level thread and a kernel-level thread.
• This model provides more concurrency than the many-to-one model.
• It also allows another thread to run when a thread makes a blocking system call.
• It supports multiple threads executing in parallel on multiprocessors.
• The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread.
• OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.
Difference between User-Level & Kernel-Level Thread

1. User-level: User-level threads are faster to create and manage.
   Kernel-level: Kernel-level threads are slower to create and manage.
2. User-level: Implementation is by a thread library at the user level.
   Kernel-level: The operating system supports creation of kernel threads.
3. User-level: A user-level thread is generic and can run on any operating system.
   Kernel-level: A kernel-level thread is specific to the operating system.
4. User-level: Multithreaded applications cannot take advantage of multiprocessing.
   Kernel-level: Kernel routines themselves can be multithreaded.