
UNIT-2

Process
management
--AVANISH GOSWAMI
BCA STUDENT
UIM(FUGS)

Process Management in OS
• A program does nothing unless its instructions are executed by a CPU. A
program in execution is called a process. In order to accomplish its task, a
process needs computer resources.

• More than one process may exist in the system and may require the same
resource at the same time. Therefore, the operating system has to manage all
the processes and the resources in a convenient and efficient way.

• Some resources may need to be used by only one process at a time to
maintain consistency; otherwise the system can become inconsistent and a
deadlock may occur.
• The operating system is responsible for the following activities in
connection with Process Management:
• Scheduling processes and threads on the CPUs.
• Creating and deleting both user and system processes.
• Suspending and resuming processes.
• Providing mechanisms for process synchronization

• Process management can help organizations improve their operational
efficiency, reduce costs, increase customer satisfaction, and maintain
compliance with regulatory requirements. It involves analyzing the
performance of existing processes, identifying bottlenecks, and making
changes to optimize the process flow.
• Some of the system calls in this category are as follows (a minimal sketch
using two of these calls follows the list):
• Create a child process identical to the parent.
• Terminate a process
• Wait for a child process to terminate
• Change the priority of the process
• Block the process
• Ready the process
• Dispatch a process
• Suspend a process
• Resume a process
• Delay a process
• Fork a process
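To make two of these calls concrete, here is a minimal sketch (assuming a POSIX system) that creates a child identical to the parent with fork() and has the parent wait for the child to terminate with wait():

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* create a child identical to the parent */
    if (pid < 0) {
        perror("fork");                  /* creation failed                        */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        printf("child: pid=%d\n", getpid());
        exit(0);                         /* terminate the child process            */
    }
    wait(NULL);                          /* parent waits for the child to finish   */
    printf("parent: child has terminated\n");
    return 0;
}
```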

What Does a Process Look Like in Memory?
In memory, a process is laid out in four sections: stack, heap, data, and text:

Explanation of Process
• Text Section: Contains the program code. It also includes the current
activity, represented by the value of the Program Counter.
• Stack: The stack contains temporary data, such as function parameters,
return addresses, and local variables.
• Data Section: Contains the global variables.
• Heap Section: Memory dynamically allocated to the process during its run
time.
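As an illustration (a minimal sketch; the exact placement of each object depends on the compiler and operating system), the objects in this small C program land in the sections described above:

```c
#include <stdlib.h>

int global_counter = 0;                  /* data section: global variable          */

int add(int a, int b) {                  /* the compiled code of add() and main()
                                            lives in the text section              */
    int sum = a + b;                     /* stack: parameters and local variables  */
    return sum;
}

int main(void) {
    int *buffer = malloc(16 * sizeof *buffer);  /* heap: allocated at run time     */
    global_counter = add(2, 3);
    free(buffer);
    return 0;
}
```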

Characteristics of a Process
• Process Id: A unique identifier assigned by the operating system.
• Process State: Can be ready, running, etc.
• CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of the CPU)
• Accounting information: Amount of CPU used for process execution, time
limits, execution ID, etc.
• I/O status information: For example, devices allocated to the process, open
files, etc
• CPU scheduling information: For example, Priority (Different processes may
have different priorities, for example, a shorter process assigned high priority
in the shortest job first scheduling)
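These characteristics are typically kept in a per-process record, the Process Control Block. A minimal sketch in C follows; the field names are illustrative, not taken from any particular kernel:

```c
typedef struct pcb {
    int           pid;                 /* Process Id: unique identifier            */
    int           state;               /* Process State: ready, running, ...       */
    unsigned long program_counter;     /* saved CPU registers (only the PC shown)  */
    int           priority;            /* CPU scheduling information               */
    unsigned long cpu_time_used;       /* accounting information                   */
    int           open_files[16];      /* I/O status: descriptors of open files    */
    struct pcb   *next;                /* link used by the ready/waiting queues    */
} pcb_t;
```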
States of Process

• A process is in one of the following states:

• New: A newly created process, i.e., a process that is being created.

• Ready: After creation the process moves to the ready state, i.e., the process is ready for
execution and is waiting to be assigned the CPU.
• Run: The process currently executing on the CPU (only one process at a time can be under
execution on a single processor).
• Wait (or Block): The process has requested I/O and is waiting for it to complete.
• Complete (or Terminated): The process has completed its execution.
• Suspended Ready: When the ready queue becomes full, some processes are moved to a
suspended ready state.
• Suspended Block: When the waiting queue becomes full, some waiting processes are moved
to a suspended block state.
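As a rough sketch, the states and transitions described above can be encoded in C as follows (the state names and the transition table are illustrative, simply mirroring the list above):

```c
#include <stdbool.h>

typedef enum {
    NEW, READY, RUNNING, WAITING, TERMINATED, SUSPENDED_READY, SUSPENDED_BLOCK
} state_t;

/* Returns true when the transition matches one of those described above. */
bool valid_transition(state_t from, state_t to) {
    switch (from) {
    case NEW:             return to == READY;                  /* admitted          */
    case READY:           return to == RUNNING || to == SUSPENDED_READY;
    case RUNNING:         return to == READY || to == WAITING || to == TERMINATED;
    case WAITING:         return to == READY || to == SUSPENDED_BLOCK;
    case SUSPENDED_READY: return to == READY;                  /* resumed           */
    case SUSPENDED_BLOCK: return to == WAITING || to == SUSPENDED_READY;
    default:              return false;
    }
}
```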
Diagram of process
management

Advantages of Process
Management
• Improved Efficiency
• Cost Savings
• Improved Quality
• Increased Customer Satisfaction
• Compliance with Regulations

Disadvantages of Process
Management
• Time and Resource Intensive
• Resistance to Change
• Overemphasis on Process
• Risk of Standardization

What is Process Scheduling?
• Process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of
another process based on a particular strategy.

• Process scheduling is an essential part of a multiprogramming operating
system. Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.

Diagram of process scheduling
Categories of Scheduling

• Non-pre-emptive: In this case, a resource cannot be taken away from a
process until the process has finished running. The CPU is switched only when
the running process finishes or transitions to the waiting state.
• Pre-emptive: In this case, the OS assigns resources to a process for a
predetermined period. A process can be switched from the running state to the
ready state, or from the waiting state to the ready state, during resource
allocation. This switching happens because the CPU may give priority to other
processes and replace the currently active process with a higher-priority
process.

Types of Process Schedulers
There are three types of process schedulers:
• Long Term or Job Scheduler
• Short-Term or CPU Scheduler
• Medium-Term Scheduler

1. Long term scheduler

• Long term scheduler is also known as job scheduler.


• It chooses the processes from the pool (secondary memory) and keeps them in
the ready queue maintained in the primary memory.
• Long Term scheduler mainly controls the degree of Multiprogramming.
• The purpose of the long-term scheduler is to choose a balanced mix of I/O-bound and
CPU-bound processes from among the jobs present in the pool.
• If the job scheduler chooses mostly I/O-bound processes, then all of the jobs may
reside in the blocked state much of the time and the CPU will remain idle most of the
time.
• This will reduce the degree of multiprogramming. Therefore, the job of the long-term
scheduler is very critical and may affect the system for a very long time.
• It brings the new process to the ‘Ready State’.
• It controls the Degree of Multi-programming, i.e., the number of processes
present in a ready state at any point in time.
• It is important that the long-term scheduler make a careful selection of
both I/O and CPU-bound processes.
• I/O-bound tasks are those that spend much of their time performing input and
output operations, while CPU-bound processes are those that spend most of their
time computing on the CPU.
• The job scheduler increases efficiency by maintaining a balance between the
two. It operates at a high level and is typically used in batch-processing
systems.
2. Short-Term or CPU Scheduler
• It is responsible for selecting one process from the ready queue and
scheduling it into the running state.
• Note: the short-term scheduler only selects the process; it does not load the
process onto the CPU (that is the dispatcher's job). This is where all the
scheduling algorithms are used.
• The CPU scheduler is responsible for ensuring that there is no starvation due
to processes with high burst times.

• The dispatcher is responsible for loading the process selected by the
short-term scheduler onto the CPU (moving it from the Ready to the Running
state). Context switching is done by the dispatcher only.
• A dispatcher does the following:

• Switching context
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
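Conceptually, these three dispatcher steps can be sketched as a toy simulation in C (everything below is illustrative; a real dispatcher manipulates hardware registers and runs inside the kernel):

```c
#include <stdio.h>

/* Toy illustration of the three dispatcher steps listed above. The "context"
   saved and restored here is reduced to a single program counter value.       */
typedef struct {
    int pid;
    unsigned long program_counter;   /* where this process resumes execution   */
} task_t;

static unsigned long cpu_pc;         /* stand-in for the real CPU register set */

static void dispatch(task_t *prev, task_t *next) {
    prev->program_counter = cpu_pc;              /* 1. switch context: save... */
    cpu_pc = next->program_counter;              /*    ...and restore          */
    printf("switching to user mode\n");          /* 2. switch to user mode     */
    printf("resuming pid %d at pc=%lu\n",        /* 3. jump to the proper      */
           next->pid, cpu_pc);                   /*    location in the program */
}

int main(void) {
    task_t a = {1, 100}, b = {2, 200};
    cpu_pc = a.program_counter;      /* process 1 is currently on the CPU      */
    dispatch(&a, &b);                /* short-term scheduler has picked pid 2  */
    return 0;
}
```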

• The short-term scheduler is also known as the CPU scheduler.
• It selects one of the jobs from the ready queue and dispatches it to the CPU
for execution.

• A scheduling algorithm is used to select which job is going to be dispatched
for execution.
• The job of the short-term scheduler can be very critical: if it selects a job
whose CPU burst time is very high, then all the jobs after it will have to wait
in the ready queue for a very long time.

• This problem is called starvation, and it may arise if the short-term
scheduler makes a mistake while selecting the job.
3. Medium-Term Scheduler
• It is responsible for suspending and resuming the process. It mainly does
swapping (moving processes from main memory to disk and vice versa).
• Swapping may be necessary to improve the process mix or because a
change in memory requirements has overcommitted available memory,
requiring memory to be freed up.
• It is helpful in maintaining a perfect balance between the I/O bound and
the CPU bound. It reduces the degree of multiprogramming.
• Medium term scheduler takes care of the swapped out processes.
• If a running process needs some I/O time to complete, then its state must be
changed from running to waiting.
• Medium term scheduler is used for this purpose.
• It removes the process from the running state to make room for the
other processes.
• Such processes are the swapped out processes and this procedure is
called swapping.
• The medium term scheduler is responsible for suspending and resuming
the processes.

• It reduces the degree of multiprogramming. The swapping is necessary to have
a perfect mix of processes in the ready queue.
Diagram of medium-term
schedule

Comparison Among Schedulers

Long Term Scheduler
• It is a job scheduler.
• Generally, its speed is lower than that of the short-term scheduler.
• It controls the degree of multiprogramming.
• It is barely present or nonexistent in a time-sharing system.

Short Term Scheduler
• It is a CPU scheduler.
• Its speed is the fastest among all of them.
• It gives less control over how much multiprogramming is done.
• It is minimal in a time-sharing system.
Medium Term Scheduler
• It is a process-swapping scheduler.
• Speed lies in between both short and long-term schedulers.
• It reduces the degree of multiprogramming.
• It is a component of systems for time sharing.

• Cooperating Process: Cooperating processes are those processes that
depend on or interact with other processes.
• They work together to achieve a common task in an operating
system.
• These processes interact with each other by sharing the resources
such as CPU, memory, and I/O devices to complete the task.

Methods of Cooperating Process

• Cooperating processes may coordinate with each other by sharing
data or messages. The methods are given below:
1. Cooperation by sharing
• The processes may cooperate by sharing data, including variables,
memory, databases, etc. The critical section provides data integrity,
and writing is mutually exclusive to avoid inconsistent data.
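As a minimal sketch of cooperation by sharing on a POSIX system (a single writer is used, so the mutual-exclusion machinery mentioned above is not needed here; error handling omitted), a parent and child can share a counter through an anonymous shared memory mapping:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Anonymous shared mapping: visible to both parent and child after fork(). */
    int *counter = mmap(NULL, sizeof *counter, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *counter = 0;
    if (fork() == 0) {            /* child: writes to the shared variable         */
        *counter += 1;
        return 0;
    }
    wait(NULL);                   /* parent: waits, then reads the child's update */
    printf("counter = %d\n", *counter);
    return 0;
}
```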

• 2. Cooperation by Communication: The cooperating processes may
cooperate by exchanging messages. If every process waits for a message
from another process before executing its task, it may cause a deadlock. If a
process never receives the messages it needs, it may suffer starvation.
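A minimal sketch of cooperation by communication on a POSIX system, using a pipe as the message channel (the message text is made up for illustration; error handling omitted):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];
    pipe(fd);                                  /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                         /* child: sends one message          */
        close(fd[0]);
        const char *msg = "work item 1";       /* made-up message for illustration  */
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                              /* parent: blocks until data arrives */
    read(fd[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```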

CPU Scheduling Criteria
• CPU scheduling is essential for the system’s performance and ensures
that processes are executed correctly and on time.
• Different CPU scheduling algorithms have different properties, and the
choice of a particular algorithm depends on various factors.
• Many criteria have been suggested for comparing CPU scheduling
algorithms.

What is CPU scheduling?

• CPU scheduling is a process that allows one process to use the CPU
while another process is delayed due to the unavailability of some
resource such as I/O, thus making full use of the CPU.
• In short, CPU scheduling decides the order and priority of the
processes to run and allocates the CPU time based on various
parameters such as CPU usage, throughput, turnaround, waiting time,
and response time.
• The purpose of CPU Scheduling is to make the system more efficient,
faster, and fairer.

Criteria of CPU Scheduling

1. CPU utilization: The main objective of any CPU scheduling algorithm
   is to keep the CPU as busy as possible. Theoretically, CPU utilization
   can range from 0 to 100 percent, but in a real system it varies from
   about 40 to 90 percent depending on the load on the system.
2. Throughput: A measure of the work done by the CPU is the number
   of processes executed and completed per unit of time. This is
   called throughput. The throughput may vary depending on the
   length or duration of the processes.

• 3. Turnaround Time:
• For a particular process, an important criterion is how long it takes to
execute that process. The time elapsed from the time of submission
of a process to the time of completion is known as the turnaround
time. Turn-around time is the sum of times spent waiting to get into
memory, waiting in the ready queue, executing in CPU, and waiting
for I/O.

• Turn Around Time = Completion Time – Arrival Time.

• 4. Waiting Time:
• A scheduling algorithm does not affect the time required to complete
the process once it starts execution. It only affects the waiting time of
a process i.e. time spent by a process waiting in the ready queue.

• Waiting Time = Turnaround Time – Burst Time.
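As a quick hypothetical example of these two formulas: if a process arrives at time 0, has a burst time of 3, and completes at time 7, then Turnaround Time = 7 – 0 = 7 and Waiting Time = 7 – 3 = 4.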

• 5. Response Time:
• In an interactive system, turnaround time is not the best criterion. A
process may produce some output fairly early and continue
computing new results while previous results are being output to the
user. Thus, another criterion is the time taken from the submission of
a request until the first response is produced. This measure is called
response time.

• Response Time = CPU Allocation Time (when the CPU was allocated
for the first time) – Arrival Time
• 6. Completion Time: The completion time is the time when the
process stops executing, which means that the process has
completed its burst time and is completely executed.
• 7. Priority: If the operating system assigns priorities to processes, the
scheduling mechanism should favor the higher-priority processes.

• 8. Predictability: A given process should always run in about the same
amount of time under a similar system load.

CPU Scheduling Algorithms
• There are several CPU scheduling algorithms; they are listed below.

• First Come First Served (FCFS)
• Shortest Job First (SJF)
• Longest Job First (LJF)
• Priority Scheduling
• Round Robin (RR)
• Shortest Remaining Time First (SRTF)
• Longest Remaining Time First (LRTF)
Factors Influencing CPU
Scheduling Algorithms
• There are many factors that influence the choice of CPU scheduling
algorithm. Some of them are listed below.

• The number of processes.
• The processing time required.
• The urgency of tasks.
• The system requirements.

Scheduling Algorithms
• There are various algorithms which are used by the Operating System
to schedule the processes on the processor in an efficient way.
• The Purpose of a Scheduling algorithm
• Maximum CPU utilization
• Fair allocation of CPU
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time
The following algorithms can be used to
schedule the jobs
• 1. First Come First Serve
• It is the simplest algorithm to implement. The process with the earliest
arrival time will get the CPU first: the lower the arrival time, the sooner
the process gets the CPU. It is a non-pre-emptive type of scheduling.

• 2. Round Robin
• In the Round Robin scheduling algorithm, the OS defines a time quantum
(slice). All the processes are executed in a cyclic way. Each process gets
the CPU for a small amount of time (the time quantum) and then goes back to
the ready queue to wait for its next turn. It is a pre-emptive type of
scheduling.
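To make FCFS and the turnaround/waiting-time arithmetic concrete, here is a minimal sketch that simulates FCFS for three hypothetical processes, all arriving at time 0 (the burst times are made up for illustration):

```c
#include <stdio.h>

/* FCFS simulation for processes that all arrive at time 0. */
int main(void) {
    int burst[] = {5, 3, 8};                 /* hypothetical CPU burst times    */
    int n = sizeof(burst) / sizeof(burst[0]);
    int time = 0;
    for (int i = 0; i < n; i++) {            /* processes run in arrival order  */
        int waiting = time;                  /* time spent in the ready queue   */
        time += burst[i];                    /* completion time of process i    */
        int turnaround = time;               /* completion - arrival (0 here)   */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
    }
    return 0;
}
```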
• 3. Shortest Job First
• The job with the shortest burst time will get the CPU first. The lower the
burst time, the sooner the process gets the CPU. It is a non-pre-emptive type
of scheduling.

• 4. Shortest Remaining Time First
• It is the pre-emptive form of SJF. In this algorithm, the OS schedules the
job according to the remaining time of its execution.

• 5. Priority based scheduling
• In this algorithm, a priority is assigned to each process. The higher the
priority, the sooner the process gets the CPU. If the priorities of two
processes are the same, they are scheduled according to their arrival time.

• 6. Highest Response Ratio Next
• In this scheduling algorithm, the process with the highest response ratio is
scheduled next. This reduces starvation in the system.
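For reference, the response ratio used by Highest Response Ratio Next is conventionally computed as Response Ratio = (Waiting Time + Burst Time) / Burst Time, so a process that has waited a long time eventually overtakes newly arrived short jobs.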

Multiple Processors Scheduling
• Multiple processor scheduling, or multiprocessor scheduling, focuses
on designing the scheduling function for a system that consists of more
than one processor.
• Multiple CPUs share the load (load sharing) in multiprocessor
scheduling so that various processes run simultaneously.
• In general, multiprocessor scheduling is complex as compared to
single processor scheduling.
• In multiprocessor scheduling there are many processors; when they
are identical, we can run any process on any processor at any time.

• The multiple CPUs in the system are in close communication and share a common bus,
memory, and other peripheral devices.
• So we can say that the system is tightly coupled.
• These systems are used when we want to process a large amount of data, and they are
mainly used in satellites, weather forecasting, etc.

• There are cases when the processors are identical, i.e., homogenous, in terms of their
functionality in multiple-processor scheduling.
• We can use any processor available to run any process in the queue.
• Multiprocessor systems may be heterogeneous (different kinds of CPUs) or homogenous (the
same CPU).
• There may be special scheduling constraints, such as devices connected via a private bus to
only one CPU.

• There is no policy or rule which can be declared as the best scheduling
solution for a system with a single processor. Similarly, there is no best
scheduling solution for a system with multiple processors.

Approaches to Multiple
Processor Scheduling
• There are two approaches to multiple processor scheduling in the
operating system: Symmetric Multiprocessing and Asymmetric
Multiprocessing.

• Symmetric Multiprocessing: It is used where each processor is self-
scheduling. All processes may be in a common ready queue, or each
processor may have its private queue for ready processes. The
scheduling proceeds further by having the scheduler for each
processor examine the ready queue and select a process to execute.
• Asymmetric Multiprocessing: It is used when all the scheduling
decisions and I/O processing are handled by a single processor called
the Master Server. The other processors execute only the user code.
This is simple and reduces the need for data sharing, and this entire
scenario is called Asymmetric Multiprocessing.

Processor Affinity
• Processor Affinity means a process has an affinity for the processor on which it is currently
running. When a process runs on a specific processor, there are certain effects on the cache
memory.
• The data most recently accessed by the process populate the cache for the processor. As a
result, successive memory access by the process is often satisfied in the cache memory.

• Now, suppose the process migrates to another processor.


• In that case, the contents of the cache memory must be invalidated for the first processor,
and the cache for the second processor must be repopulated.
• Because of the high cost of invalidating and repopulating caches, most SMP(symmetric
multiprocessing) systems try to avoid migrating processes from one processor to another
and keep a process running on the same processor.
• This is known as processor affinity. There are two types of processor affinity, such as:
Soft Affinity: When an operating system has a policy of keeping a process running on the same processor but
not guaranteeing it will do so, this situation is called soft affinity.
Hard Affinity: Hard Affinity allows a process to specify a subset of processors on which it may run. Some Linux
systems implement soft affinity and provide system calls like sched_setaffinity() that also support hard affinity.
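As an illustration of hard affinity on Linux (a minimal sketch; most error handling omitted), a process can restrict itself to CPU 0 with sched_setaffinity():

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                 /* allow this process to run only on CPU 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = calling process  */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d pinned to CPU 0\n", getpid());
    return 0;
}
```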

Load Balancing
• Load Balancing is the phenomenon that keeps the workload evenly
distributed across all processors in an SMP system. Load balancing is
necessary only on systems where each processor has its own private queue of
processes that are eligible to execute.

• Load balancing is unnecessary on systems with a common run queue, because an
idle processor immediately extracts a runnable process from that queue. On SMP
(symmetric multiprocessing) systems it is important to keep the workload
balanced among all processors to fully utilize the benefits of having more than
one processor; otherwise, one or more processors will sit idle while other
processors have high workloads and lists of processes awaiting the CPU. There
are two general approaches to load balancing:

• Push Migration: In push migration, a task routinely checks the load on
each processor. If it finds an imbalance, it evenly distributes the load
on each processor by moving the processes from overloaded to idle
or less busy processors.
• Pull Migration: Pull Migration occurs when an idle processor pulls a
waiting task from a busy processor for its execution.

Symmetric Multiprocessor

• Symmetric Multiprocessors (SMP) is the third model. There is one copy of the
OS in memory in this model, but any central processing unit can run it. When a
system call is made, the central processing unit on which the system call was
made traps to the kernel and processes that system call. This model balances
processes and memory dynamically. This approach uses Symmetric
Multiprocessing, where each processor is self-scheduling.

• The scheduling proceeds by having the scheduler for each processor
examine the ready queue and select a process to execute. In this system, it is
possible that all processes are in a common ready queue, or each processor
may have its own private queue of ready processes. There are mainly three
sources of contention that can be found in a multiprocessor operating system:
• Locking system: Since resources are shared in a multiprocessor system,
they need to be protected for safe access among the multiple processors. The
main purpose of the locking scheme is to serialize access to the resources by
the multiple processors.
• Shared data: When multiple processors access the same data at the same
time, there is a chance of data inconsistency, so to protect against this we
have to use some protocol or locking scheme.
• Cache coherence: Shared resource data may be stored in multiple local
caches. Suppose two clients have a cached copy of a memory block and one
client changes the block; the other client could be left with an invalid cache
without notification of the change. This conflict is resolved by maintaining a
coherent view of the data.
Master-Slave Multiprocessor
• In this multiprocessor model, there is a single data structure that
keeps track of the ready processes.
• In this model, one central processing unit works as a master and
another as a slave.
• All the processors are handled by a single processor, which is called
the master server.

• The master server runs the operating system process, and the slave
server runs the user processes.
• The memory and input-output devices are shared among all the
processors, and all the processors are connected to a common bus.
• This system is simple and reduces data sharing, so this system is called
Asymmetric multiprocessing.

Scheduling in Real Time
Systems
• Real-time systems are systems that carry real-time tasks. These tasks
need to be performed immediately with a certain degree of urgency.
• In particular, these tasks are related to control of certain events (or)
reacting to them. Real-time tasks can be classified as hard real-time
tasks and soft real-time tasks.
• A hard real-time task must be completed by a specified time; missing that
deadline could lead to huge losses. In soft real-time tasks, a specified
deadline can be missed, because the task can be rescheduled (or) completed
after the specified time.

• Based on schedulability, implementation (static or dynamic), and
the result (self or dependent) of analysis, the scheduling algorithms
are classified as follows.
• Static table-driven approaches:
• These algorithms usually perform a static analysis associated with
scheduling and capture the schedules that are advantageous. This
helps in providing a schedule that can point out a task with which the
execution must be started at run time.

• Static priority-driven pre-emptive approaches:
• Similar to the first approach, these types of algorithms also use static
analysis of scheduling. The difference is that instead of selecting a
particular schedule, they provide a useful way of assigning priorities
among various tasks in pre-emptive scheduling.

• Dynamic planning-based approaches:
• Here, feasible schedules are identified dynamically (at run time). A task
carries a certain fixed time interval, and a process is executed if and
only if it satisfies the time constraint.
• Dynamic best effort approaches:
• These types of approaches consider deadlines instead of feasible
schedules; the task is aborted if its deadline is reached. This approach
is widely used in most real-time systems.

Advantages of Scheduling in
Real-Time Systems:
• Meeting Timing Constraints
• Resource Optimization
• Priority-Based Execution
• Predictability and Determinism
• Control Over Task Execution

Disadvantages of Scheduling in Real-Time Systems:

• Increased Complexity
• Overhead
• Limited Resources
• Verification and Validation
• Scalability

Thank you

