UNIT-2 Process Management
Process Management
--AVANISH GOSWAMI
BCA STUDENT
UIM(FUGS)
Process Management in OS
• A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
• More than one process may exist in the system, and they may require the same resource at the same time. Therefore, the operating system has to manage all the processes and the resources in a convenient and efficient way.
• Process management can help organizations improve their operational
efficiency, reduce costs, increase customer satisfaction, and maintain
compliance with regulatory requirements. It involves analyzing the
performance of existing processes, identifying bottlenecks, and making
changes to optimize the process flow.
• Some of the system calls in this category are as follows (a small code sketch follows the list):
• Create a child process identical to the parent
• Terminate a process
• Wait for a child process to terminate
• Change the priority of a process
• Block a process
• Ready the process
• Dispatch a process
• Suspend a process
• Resume a process
• Delay a process
• Fork a process
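As a rough illustration (assuming a POSIX/Linux environment), the C program below uses three of the calls named above: fork to create a child identical to the parent, exit to terminate the child, and waitpid to wait for the child to terminate.

/* Minimal sketch of process-management system calls (POSIX, assumed Linux/Unix). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a child process identical to the parent */
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        printf("child: pid=%d\n", getpid());
        exit(0);                      /* terminate the child process */
    } else {
        int status;
        waitpid(pid, &status, 0);     /* wait for the child process to terminate */
        printf("parent: child %d has exited\n", (int)pid);
    }
    return 0;
}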
What Does a Process Look Like in Memory?
In memory, a process is divided into four sections: the stack, the heap, the data section, and the text section. Each section is explained below.
Explanation of Process
• Text Section: Contains the program's executable code. The current activity of the process is represented by the value of the Program Counter.
• Stack: The stack contains temporary data, such as function parameters, return addresses, and local variables.
• Data Section: Contains the global variables.
• Heap Section: Memory that is dynamically allocated to the process during its run time.
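As a rough illustration, the short C program below maps ordinary variables onto the sections described above; the names are invented for the example.

/* Which section each object lives in (illustrative). */
#include <stdlib.h>

int counter = 0;                           /* Data section: global variable */

int square(int x) {                        /* the compiled code of this function sits in the text section */
    int result = x * x;                    /* Stack: parameter x and local variable result */
    return result;
}

int main(void) {
    int *buf = malloc(16 * sizeof(int));   /* Heap: memory allocated dynamically at run time */
    counter = square(4);
    free(buf);
    return counter == 16 ? 0 : 1;
}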
Characteristics of a Process
• Process Id: A unique identifier assigned by the operating system.
• Process State: Can be ready, running, etc.
• CPU Registers: Such as the Program Counter (CPU registers must be saved and restored when a process is swapped in and out of the CPU).
• Accounting Information: Amount of CPU time used for process execution, time limits, execution ID, etc.
• I/O Status Information: For example, devices allocated to the process, open files, etc.
• CPU Scheduling Information: For example, priority (different processes may have different priorities; for example, a shorter process may be assigned a high priority in shortest job first scheduling).
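Many of these characteristics are stored together in the process control block (PCB). The C struct below is only an illustrative sketch; the field names are invented here and are not taken from any particular operating system.

/* Hypothetical process control block holding the characteristics listed above. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int           pid;               /* Process Id                                  */
    proc_state_t  state;             /* Process state (ready, running, ...)         */
    unsigned long program_counter;   /* CPU registers saved on a context switch     */
    unsigned long registers[16];
    int           priority;          /* CPU scheduling information                  */
    unsigned long cpu_time_used;     /* Accounting information                      */
    int           open_files[16];    /* I/O status information (open file handles)  */
};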
States of Process
• During its lifetime, a process moves through states such as New, Ready, Running, Waiting (blocked), and Terminated.
Advantages of Process
Management
• Improved Efficiency
• Cost Savings
• Improved Quality
• Increased Customer Satisfaction
• Compliance with Regulations
Disadvantages of Process
Management
• Time and Resource Intensive
• Resistance to Change
• Overemphasis on Process
• Risk of Standardization
What is Process Scheduling?
• Process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of
another process based on a particular strategy.
Diagram of Process Scheduling
Categories of Scheduling
• Scheduling is broadly of two categories: Non-Preemptive, where a process keeps the CPU until it terminates or moves to the waiting state, and Preemptive, where the CPU can be taken away from the running process.
Types of Process Schedulers
There are three types of process schedulers:
• Long Term or Job Scheduler
• Short-Term or CPU Scheduler
• Medium-Term Scheduler
1. Long-Term or Job Scheduler
• The long-term scheduler (job scheduler) selects processes from the job pool and loads them into main memory for execution; it controls the degree of multiprogramming.
2. Short-Term or CPU Scheduler
• The short-term scheduler is also known as the CPU scheduler.
• It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
• A scheduling algorithm is used to decide which job will be dispatched for execution.
• The job of the short-term scheduler can be very critical: if it selects a job whose CPU burst time is very high, all the jobs after it will have to wait in the ready queue for a very long time.
• This problem is called starvation, and it may arise if the short-term scheduler makes poor choices while selecting jobs.
The Dispatcher
• The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (Ready to Running state). Context switching is done by the dispatcher only.
• A dispatcher does the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the newly loaded program
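Context switching itself is done by the kernel in architecture-specific code, but the idea can be sketched in user space. The C program below (assuming Linux/glibc, which still provides the ucontext functions) saves one execution context and restores another, which is the essence of what the dispatcher does.

/* User-space sketch of a context switch using ucontext (Linux/glibc assumed). */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];

static void task(void) {
    printf("task: running after being dispatched\n");
    swapcontext(&task_ctx, &main_ctx);    /* save the task's context, resume main */
}

int main(void) {
    getcontext(&task_ctx);                /* initialize the task's context */
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;
    makecontext(&task_ctx, task, 0);      /* point the context at the task function */

    printf("dispatcher: switching context to the task\n");
    swapcontext(&main_ctx, &task_ctx);    /* save main's context, load the task (dispatch) */
    printf("dispatcher: back in main\n");
    return 0;
}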
3. Medium-Term Scheduler
• It is responsible for suspending and resuming the process. It mainly does
swapping (moving processes from main memory to disk and vice versa).
• Swapping may be necessary to improve the process mix or because a
change in memory requirements has overcommitted available memory,
requiring memory to be freed up.
• It is helpful in maintaining a good balance between I/O-bound and CPU-bound processes. It reduces the degree of multiprogramming.
• The medium-term scheduler takes care of the swapped-out processes.
• If a process in the running state needs some I/O time for its completion, its state has to be changed from running to waiting.
• Medium term scheduler is used for this purpose.
• It removes the process from the running state to make room for the
other processes.
• Such processes are the swapped out processes and this procedure is
called swapping.
• The medium term scheduler is responsible for suspending and resuming
the processes.
Comparison Among Schedulers
• Long-term scheduler: selects jobs from the job pool and controls the degree of multiprogramming; it runs the least frequently.
• Short-term scheduler: selects a process from the ready queue and allocates the CPU to it; it runs the most frequently and must be very fast.
• Medium-term scheduler: swaps processes between main memory and disk, reducing the degree of multiprogramming.
• Cooperating Process: Cooperating processes are those that depend on, or can affect, other processes running in the system.
• They work together to achieve a common task in an operating system.
• These processes interact with each other by sharing resources such as CPU, memory, and I/O devices to complete the task.
Methods of Cooperating Processes
• 1. Cooperation by Sharing: The cooperating processes may cooperate by sharing data such as variables, memory, and files. To keep the shared data consistent, access to it must be synchronized.
• 2. Cooperation by Communication: The cooperating processes may cooperate by exchanging messages. If every process waits for a message from another process to execute a task, it may cause a deadlock. If a process does not receive any messages, it may cause starvation.
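As a small illustration of cooperation by communication (assuming a POSIX/Linux environment), the C program below creates two cooperating processes that exchange a message through a pipe; the parent blocks until the child's message arrives.

/* Two cooperating processes communicating through a pipe (POSIX assumed). */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1) return 1;     /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                /* child: sends a message */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                     /* parent: receives the message */
    read(fd[0], buf, sizeof buf);     /* blocks until the child writes */
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}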
CPU Scheduling Criteria
• CPU scheduling is essential for the system’s performance and ensures
that processes are executed correctly and on time.
• Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm depends on various factors.
• Many criteria have been suggested for comparing CPU scheduling
algorithms.
What is CPU Scheduling?
• CPU scheduling is the process of allowing one process to use the CPU while another process is delayed due to the unavailability of some resource (such as I/O), thus making full use of the CPU.
• In short, CPU scheduling decides the order and priority in which processes run and allocates CPU time based on various parameters such as CPU usage, throughput, turnaround time, waiting time, and response time.
• The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
Criteria of CPU Scheduling
1. CPU Utilization: The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the load upon the system.
2. Throughput: A measure of the work done by the CPU is the number of processes executed and completed per unit of time. This is called throughput. The throughput may vary depending on the length or duration of the processes.
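For example, if a system completes 10 processes in 5 seconds, its throughput is 10 / 5 = 2 processes per second.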
• 3. Turnaround Time:
• For a particular process, an important criterion is how long it takes to
execute that process. The time elapsed from the time of submission
of a process to the time of completion is known as the turnaround
time. Turnaround time is the sum of the times spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
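In other words, turnaround time = completion time − submission (arrival) time. For example, a process submitted at time 0 that completes at time 20 has a turnaround time of 20 time units.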
• 4. Waiting Time:
• A scheduling algorithm does not affect the time required to complete
the process once it starts execution. It only affects the waiting time of
a process, i.e., the time spent by a process waiting in the ready queue.
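In the simple case where a process performs no I/O, waiting time = turnaround time − CPU burst time.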
• 5. Response Time:
• In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus, another criterion is the time taken from the submission of a request until the first response is produced. This measure is called response time.
CPU Scheduling Algorithms
• There are several CPU scheduling algorithms; the common ones are described below.
Scheduling Algorithms
• There are various algorithms which are used by the Operating System
to schedule the processes on the processor in an efficient way.
• The Purpose of a Scheduling algorithm
• Maximum CPU utilization
• Fair allocation of CPU
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time
The following algorithms can be used to schedule jobs:
• 1. First Come First Serve
• It is the simplest algorithm to implement. The process with the earliest arrival time gets the CPU first: the earlier the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling (a worked FCFS sketch follows this list of algorithms).
• 2. Round Robin
• In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the processes are executed in a cyclic way. Each process gets the CPU for a small amount of time (the time quantum) and then goes back to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
• 3. Shortest Job First
• The job with the shortest burst time gets the CPU first: the shorter the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
• 4. Priority Based Scheduling
• In this algorithm, a priority is assigned to each process. The higher the priority, the sooner the process gets the CPU. If two processes have the same priority, they are scheduled according to their arrival time.
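The sketch below (with made-up burst times and all arrivals assumed to be at time 0) computes the waiting and turnaround times produced by First Come First Serve.

/* FCFS scheduling metrics for example burst times (all processes arrive at time 0). */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* CPU burst times of P1, P2, P3 */
    int n = sizeof burst / sizeof burst[0];
    int waiting = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i];  /* completion time minus arrival time (0) */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_turnaround += turnaround;
        waiting += burst[i];                  /* the next process starts after this one finishes */
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    return 0;
}

With bursts of 24, 3, and 3, the average waiting time comes out to 17 and the average turnaround time to 27, which shows how one long job ahead of short jobs inflates waiting time under FCFS.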
Multiple Processor Scheduling
• Multiple processor scheduling, or multiprocessor scheduling, focuses on designing the scheduling function for a system that consists of more than one processor.
• Multiple CPUs share the load (load sharing) in multiprocessor scheduling so that various processes run simultaneously.
• In general, multiprocessor scheduling is more complex than single processor scheduling.
• In the simplest case of multiprocessor scheduling, there are many processors, they are identical, and any process can be run on any processor at any time.
• The multiple CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices, so we can say that the system is tightly coupled.
• These systems are used when we want to process a bulk amount of data; they are mainly used in areas such as satellite systems and weather forecasting.
• There are cases when the processors are identical, i.e., homogeneous, in terms of their functionality; then we can use any available processor to run any process in the queue.
• Multiprocessor systems may be heterogeneous (different kinds of CPUs) or homogeneous (the same kind of CPU).
• There may be special scheduling constraints, such as a device connected via a private bus to only one CPU.
Approaches to Multiple
Processor Scheduling
• There are two approaches to multiple processor scheduling in the
operating system: Symmetric Multiprocessing and Asymmetric
Multiprocessing.
• Symmetric Multiprocessing: It is used where each processor is self-
scheduling. All processes may be in a common ready queue, or each
processor may have its private queue for ready processes. The
scheduling proceeds further by having the scheduler for each
processor examine the ready queue and select a process to execute.
• Asymmetric Multiprocessing: It is used when all the scheduling
decisions and I/O processing are handled by a single processor called
the Master Server. The other processors execute only the user code.
This is simple and reduces the need for data sharing, and this entire
scenario is called Asymmetric Multiprocessing.
Processor Affinity
• Processor affinity means a process has an affinity for the processor on which it is currently running. When a process runs on a specific processor, there are certain effects on the cache memory.
• The data most recently accessed by the process populates the cache of that processor. As a result, successive memory accesses by the process are often satisfied from the cache memory.
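On Linux, processor affinity can also be set explicitly. The sketch below (Linux-specific, with minimal error handling) pins the calling process to CPU 0 using sched_setaffinity.

/* Pin the calling process to CPU 0 (Linux-specific call). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                                     /* allow execution only on CPU 0 */
    if (sched_setaffinity(0, sizeof mask, &mask) == -1) {  /* pid 0 means the calling process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("process pinned to CPU 0\n");
    return 0;
}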
Load Balancing
• Load balancing is the practice of keeping the workload evenly distributed across all processors in an SMP system. Load balancing is necessary only on systems where each processor has its own private queue of processes that are eligible to execute.
• Push Migration: In push migration, a task routinely checks the load on
each processor. If it finds an imbalance, it evenly distributes the load
on each processor by moving the processes from overloaded to idle
or less busy processors.
• Pull Migration: Pull Migration occurs when an idle processor pulls a
waiting task from a busy processor for its execution.
Symmetric Multiprocessor
• Symmetric Multiprocessing (SMP) is the third model. In this model, there is one copy of the OS in memory, but any central processing unit can run it. When a system call is made, the central processing unit on which the system call was made traps into the kernel and processes that system call. This model balances processes and memory dynamically. In this approach, each processor is self-scheduling.
• The scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute. In this system, it is possible that all the processes are in a common ready queue, or that each processor has its own private queue of ready processes. There are mainly three sources of contention that can be found in a multiprocessor operating system:
• Locking system: As the resources are shared in a multiprocessor system, they need to be protected for safe access among the multiple processors. The main purpose of a locking scheme is to serialize access to the resources by the multiple processors.
• Shared data: When multiple processors access the same data at the same time, there is a chance of data inconsistency, so to protect against this we have to use some protocol or locking scheme.
• Cache coherence: Shared data may be stored in multiple local caches. Suppose two clients have a cached copy of a memory block and one client changes the block; the other client could be left with an invalid cache without notification of the change. This conflict is resolved by maintaining a coherent view of the data.
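As a user-space illustration of the locking idea (assuming POSIX threads; the kernel uses analogous locks for its own shared data), the C program below lets two threads update a shared counter while a mutex serializes their access.

/* Protecting shared data with a mutex (POSIX threads assumed; compile with -pthread). */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                        /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* serialize access to the shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);         /* always 200000 because of the lock */
    return 0;
}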
Master-Slave Multiprocessor
• In this multiprocessor model, there is a single data structure that
keeps track of the ready processes.
• In this model, one central processing unit works as the master and the others work as slaves.
• All the processors are handled by a single processor, which is called
the master server.
• The master server runs the operating system process, and the slave
server runs the user processes.
• The memory and input-output devices are shared among all the
processors, and all the processors are connected to a common bus.
• This system is simple and reduces data sharing, so this system is called
Asymmetric multiprocessing.
Scheduling in Real-Time Systems
• Real-time systems are systems that carry out real-time tasks. These tasks need to be performed immediately, with a certain degree of urgency.
• In particular, these tasks are related to controlling certain events or reacting to them. Real-time tasks can be classified as hard real-time tasks and soft real-time tasks.
• A hard real-time task must be completed by its specified time; missing that deadline could lead to huge losses. In soft real-time tasks, a specified deadline can occasionally be missed, because the task can be rescheduled or completed after the specified time.
• Based on schedulability, implementation (static or dynamic), and the result of the analysis (self or dependent), the scheduling algorithms are classified as follows.
• Static table-driven approaches:
• These algorithms perform a static analysis of the schedule and capture the feasible schedules. The result is used to determine, at run time, which task must start executing and when.
• Static priority-driven preemptive approaches:
• Similar to the first approach, these algorithms also use a static analysis of scheduling. The difference is that instead of selecting a particular schedule, the analysis is used to assign priorities among the various tasks for preemptive scheduling.
• Dynamic planning-based approaches:
• Here, feasible schedules are identified dynamically (at run time). An arriving task is accepted for execution only if it satisfies its time constraints.
• Dynamic best-effort approaches:
• These approaches consider deadlines rather than precomputed feasible schedules. A task is aborted if its deadline is reached before it completes. This approach is widely used in most real-time systems.
Advantages of Scheduling in
Real-Time Systems:
• Meeting Timing Constraints
• Resource Optimization
• Priority-Based Execution
• Predictability and Determinism
• Control Over Task Execution
Disadvantages of Scheduling in Real-Time Systems:
• Increased Complexity
• Overhead
• Limited Resources
• Verification and Validation
• Scalability
Thank you