
OPERATING SYSTEM 5TH SEM ASHWINI GOLGHATE

Explain Time sharing O.S.


• In time sharing system, the CPU executes multiple jobs by switching among them.
• The switches occur so frequently that the users can interact with each program while it
is running.
• It includes an interactive computer system which provides direct communication
between the user and the system.
• A time-sharing system allows many users to share the computer resources
simultaneously.
• The time-sharing system provides direct access to many users, where CPU time is
divided among all the users on a scheduled basis.
• The operating system allocates a time slice to each user.
• When this time slice expires, control passes to the next user on the system.
• The time allowed is extremely small, so each user gets the impression of being the
sole owner of the CPU.
• In its time slice, each user gets the attention of the CPU.
• The objective of a time-sharing system is to minimize the response time of processes.
• Example: The concept of time-sharing system is shown in figure:

In the figure above, user 5 is active, users 1, 2, 3, and 4 are in the waiting state,
and user 6 is in the ready state.


Chapter 4: CPU Scheduling and Algorithms (14 M- 18 M)

• Scheduling types- scheduling objective, CPU and I/O burst cycle, Pre-emptive, Non-
Pre-emptive Scheduling
• Scheduling Criteria
• Types of Scheduling algorithms-First come first served (FCFS), Shortest Job First (SJF),
Shortest remaining Time (SRTN), Round Robin (RR), Priority scheduling, multilevel
queue scheduling.
• Deadlock- System Models, Necessary Condition leading to Deadlocks, deadlock
handling-Prevention, avoidance.
➢ CPU Scheduling in Operating Systems
• CPU scheduling is the mechanism that allows one process to use the CPU while another
process is delayed due to the unavailability of a resource such as I/O, thus making
full use of the CPU.
• In short, CPU scheduling decides the order and priority of the processes to run and
allocates the CPU time based on various parameters such as CPU usage, throughput,
turnaround, waiting time, and response time.
• The purpose of CPU Scheduling is to make the system more efficient, faster, and fairer.
➢ What is CPU Scheduling?
• Before we get to CPU scheduling, let's define a process.
• A process is essentially just a set of instructions or a program in execution.


• As the diagram above shows, processes move from the job queue to the ready queue (in
primary memory), are granted resources one by one in some order, and then complete
their execution.
• Say you have a uniprogramming system (like MS-DOS) and a process that
requires I/O is being executed.
• While it waits for the I/O operations to complete, the CPU remains idle.
• This is wasteful: CPU resources and time go unused, and some processes may wait too
long for their execution.
• In multiprogramming systems, however, the CPU does not remain idle whenever a
process currently executing waits for I/O.
• It starts the execution of other processes, making an attempt to maximize CPU
utilization.
• How does the CPU decide which process to execute next from the ready queue for
maximum CPU utilization? This procedure of "scheduling" processes is called CPU
scheduling.

There are essentially four conditions under which CPU scheduling decisions are taken:

1. If a process switches from the running state to the waiting state (for example, on
an I/O request, or on invocation of wait() for one of its child processes to terminate)

2. If a process switches from the running state to the ready state (on the
occurrence of an interrupt, for example)

3. If a process switches from the waiting state to the ready state (e.g. when
its I/O request completes)

4. If a process terminates upon completion of execution.

• So in the case of conditions 1 and 4, the scheduler has no real choice: if a process
exists in the ready queue, it must be selected for execution. In conditions 2 and 3,
the scheduler can choose which process to execute next.
➢ TYPES OF CPU SCHEDULING

1) Non-Preemptive Scheduling
• In the case of non-preemptive scheduling, new processes are executed only after the
current process has completed its execution.
• The process holds the resources of the CPU (CPU time) till its state changes to
terminated or is pushed to the process waiting state.
• If a process is currently being executed by the CPU, it is not interrupted till it is
completed.
• Once the process has completed its execution, the processor picks the next process from
the ready queue (the queue in which all processes that are ready for execution are
stored).


For Example:

• In the image above, we can see that all the processes were executed in the order in which
they appeared, and none of the processes were interrupted by another, making this a
non-preemptive, FCFS (First Come, First Served) CPU scheduling algorithm.
• P2 was the first process to arrive (at time = 0), and was hence executed first.
Let's ignore the third column for a moment; we'll get to that soon.
• Process P3 arrived next (at time = 1) and was executed after the previous process - P2
was done executing, and so on.
• Some examples of non-preemptive scheduling algorithms are - Shortest Job First (SJF,
non-preemptive), and Priority scheduling (non-preemptive).

2) Preemptive Scheduling
• Preemptive scheduling takes into consideration the fact that some processes could have
a higher priority and hence must be executed before the processes that have a lower
priority.
• In preemptive scheduling, the CPU resource is allocated to a process for only a limited
period of time and then those resources are taken back and assigned to another process
(the next in execution).
• If the process was yet to complete its execution, it is placed back in the ready state,
where it will remain till it gets a chance to execute once again.
• Looking back at the conditions under which CPU scheduling decisions are taken, we can
see that there isn't really a choice to make in conditions 1 and 4.
• If we have a process in the ready queue, we must select it for execution.
• However, we do have a choice in conditions 2 and 3.
• If we make scheduling decisions only when a process terminates (condition 4) or when
the running process starts waiting for I/O (condition 1), our scheduling is
non-preemptive; if we make scheduling decisions in the other conditions as well, our
scheduling is preemptive.

➢ COMPARISON BETWEEN NON-PREEMPTIVE and PREEMPTIVE


➢ CPU Scheduling Terminologies/ Criteria:

Let's now discuss some important terminologies that are relevant to CPU scheduling.

1. Arrival time: Arrival time (AT) is the time at which a process arrives at the ready queue.

2. Burst Time: As seen in the third column of the earlier example, burst time (BT) is
the time required by the CPU to complete the execution of a process, i.e. the amount
of CPU time a process needs. It is also sometimes called the execution time or running
time.

3. Completion Time: As the name suggests, completion time (CT) is the time at which a
process completes its execution. It is not to be confused with burst time.

4. Turn-Around Time: Also written as TAT, turn-around time is simply the difference
between completion time and arrival time (Completion time - arrival time).

5. Waiting Time: The Waiting time (WT) of a process is the difference between turn around
time and burst time (TAT - BT), i.e. the amount of time a process waits to get CPU
resources in the ready queue.

6. Response Time: Response time (RT) of a process is the time between the process
entering the ready queue and its first allocation of the CPU.
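The relations among these terms can be sketched in a few lines of Python (a minimal sketch; the timing values are illustrative):

```python
# Compute the scheduling criteria above for one process, given its
# arrival, burst, completion, and first-run times (all hypothetical).
def criteria(arrival, burst, completion, first_run):
    turnaround = completion - arrival   # TAT = CT - AT
    waiting = turnaround - burst        # WT = TAT - BT
    response = first_run - arrival      # RT = first CPU allocation - AT
    return turnaround, waiting, response

# A process arriving at t=2 with burst 4, first scheduled at t=5,
# finishing at t=9:
tat, wt, rt = criteria(arrival=2, burst=4, completion=9, first_run=5)
print(tat, wt, rt)  # 7 3 3
```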

➢ Need for CPU Scheduling Algorithm


• CPU (Central Processing Unit) scheduling algorithm is needed to efficiently allocate
the available processing time of a CPU among multiple processes that are competing
for the CPU's resources.
• The need for CPU scheduling arises because modern operating systems allow multiple
processes to execute concurrently on a single CPU.
• When multiple processes are running, they contend for the CPU's resources, and the
CPU must choose which process to execute at any given moment.
• The CPU scheduling algorithm determines the order in which processes are executed
and how much CPU time each process is allocated.

• A good CPU scheduling algorithm should ensure that each process gets a fair share of
the CPU time, while also maximizing overall system throughput and minimizing
response time.
• In short, we can say the purposes of a scheduling algorithm are:

1. Maximum CPU utilization

2. Fair allocation of CPU

3. Maximum throughput

4. Minimum turnaround time

5. Minimum waiting time

6. Minimum response time

➢ Types of CPU scheduling Algorithms

1. First Come First Serve (FCFS) Scheduling Algorithm

• The FCFS algorithm is the simplest of the scheduling algorithms in an OS.

• This is because the deciding principle behind it is just as its name suggests: jobs
are served on a first-come basis.
• The job that requests execution first gets the CPU allocated to it, then the second, and
so on.

Characteristics of FCFS scheduling algorithm

• The algorithm is easy to understand and implement.

• Programs are executed on a first come first serve basis.

• It is a non-preemptive scheduling algorithm.

• In this case, the ready queue acts as the First In First Out (FIFO) queue - Where the job
that gets ready for execution first, also gets out first.


• This is used in most batch OS.

Advantages of FCFS scheduling algorithm

• The fact that it is simple to implement means it can easily be integrated into a pre-
existing system.

• It is especially useful when the processes have a large burst time since there is no need
for context switching.

• The absence of low or high-priority preferences makes it fairer.

• Every process gets its chance to execute.

Disadvantages of the FCFS scheduling algorithm

• Since it works on a first-come basis, small processes with very short execution times
have to wait their turn.

• There is a high wait and turnaround time for this scheduling algorithm in OS.

• All in all, it leads to inefficient utilization of the CPU.

Example of FCFS scheduling algorithm in OS-

• In the table above, 5 processes have arrived at the CPU at different times.
• The process with the minimal arrival time goes first.


• Since the first process has a burst time of 3, the CPU will remain busy for 3 units of
time, which indicates that the second process will have to wait for 1 unit of time since
it arrives at T=2.
• In this way, the waiting and turnaround times for all processes can be calculated.
• This also gives the average waiting time and the average turnaround time.
• We can contrast this with other algorithms for the same set of processes.
• Using a queue for the execution of processes is helpful in keeping track of which process
comes at what stage.
• Although this is one of the simplest CPU scheduling algorithms, it suffers from the
convoy effect.
• This occurs when multiple smaller processes get stuck behind a large process, which
leads to an extremely high average wait time.
• This is similar to multiple cars stuck behind a slow-moving truck on a single-lane road.
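The FCFS bookkeeping described above can be simulated in a short Python sketch (the process names, arrival times, and burst times are illustrative, chosen to match the "burst 3, arrival at T=2, wait 1 unit" situation in the text):

```python
# Minimal FCFS simulation: processes run to completion in arrival order.
def fcfs(processes):
    """processes: list of (name, arrival, burst); returns (name, WT, TAT)."""
    time = 0
    results = []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)   # CPU may idle until the process arrives
        time += burst               # run to completion (non-preemptive)
        turnaround = time - arrival
        waiting = turnaround - burst
        results.append((name, waiting, turnaround))
    return results

procs = [("P1", 0, 3), ("P2", 2, 5), ("P3", 4, 2)]
for name, wt, tat in fcfs(procs):
    print(name, wt, tat)
```

Here P2 arrives at T=2 but waits 1 unit, because P1 keeps the CPU busy until T=3, exactly as the example above describes.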

2. Shortest Job First (SJF) Scheduling Algorithm

• Shortest Job First (SJF) is a CPU scheduling algorithm that selects the job with the
shortest burst time and executes it first.
• The idea is that jobs with short burst times get done quickly, making the CPU
available for other, longer jobs/processes.
• In other words, this is a priority scheduling algorithm where the priority is the
shortest burst time.

Characteristics of SJF scheduling algorithm

• This CPU scheduling algorithm has a minimum average wait time since it prioritizes
jobs with the shortest burst time.

• If short jobs keep arriving, longer jobs may suffer starvation.

• This is a non-preemptive scheduling algorithm.

• It is easier to implement the SJF algorithm in Batch OS.


Advantages of SJF scheduling algorithm

• It minimizes the average waiting time and turnaround time.

• Beneficial in long-term scheduling.

• Is better than the FCFS scheduling algorithm.

• Useful for batch processes.

Disadvantages of the SJF scheduling algorithm

• As mentioned, if short jobs keep coming, it may lead to starvation for longer
jobs.

• Is dependent upon burst time, but it is not always possible to know the burst time
beforehand.

• Does not work for interactive systems.

Example of SJF scheduling algorithm in OS-

• Here, the first two processes are executed as they come, but when the fifth process
arrives, it jumps to the front of the queue since it has the shortest burst time.
• The turnaround time and waiting time are calculated accordingly.
• This is visibly an improvement over FCFS, as it has a smaller average waiting time
as well as a smaller average turnaround time.


• This algorithm is especially useful in cases where there are multiple incoming processes
and their burst time is known in advance.
• The average waiting time obtained is lower as compared to the first-come-first-served
scheduling algorithm.
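The non-preemptive SJF selection rule can be sketched as follows (a hedged sketch; the process data is illustrative):

```python
# Non-preemptive SJF: among processes that have arrived, always run the
# one with the shortest burst time to completion.
def sjf(processes):
    """processes: list of (name, arrival, burst); returns (name, TAT, WT)."""
    remaining = sorted(processes, key=lambda p: p[1])
    time, done = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                         # CPU idles until next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        remaining.remove((name, arrival, burst))
        time += burst
        done.append((name, time - arrival, time - arrival - burst))
    return done

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

With these numbers, P1 runs first (nothing else has arrived at t=0), after which the shortest waiting job P3 jumps ahead of the earlier-arriving P2.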
3) Shortest Remaining Time First (SRTF) Scheduling Algorithm
• The SRTF scheduling algorithm is the preemptive version of the SJF scheduling
algorithm in OS.
• The job with the shortest remaining burst time is executed first, and running jobs
keep getting preempted whenever a newly arrived job has a shorter remaining burst time.

Characteristics of the SRTF Scheduling Algorithm

• The incoming processes are sorted on the basis of their CPU-burst time.

• The process with the least remaining burst time is executed first, but if another
process arrives with an even smaller burst time, the running process is preempted in
its favor.

• The flow of execution is- a process is executed for some specific unit of time and then
the scheduler checks if any new processes with even shorter burst times have arrived.

Advantages of SRTF Scheduling Algorithm

• More efficient than SJF since it's the preemptive version of SJF.

• Efficient scheduling for batch processes.

• The average waiting time is lower in comparison to many other scheduling algorithms
in OS.

Disadvantages of SRTF Scheduling Algorithm

• Longer processes may starve if short jobs keep getting the first shot.

• Can’t be implemented in interactive systems.


• The context switch happens too many times, leading to a rise in the overall completion
time.

• The remaining burst time might not always be apparent before the execution.

Example of SRTF scheduling algorithm in OS-

• Here, the first process starts first and then the second process executes for 1 unit of time.
• It is then pre-empted by the arrival of the third process which has a lower service time.
• This goes on until the ready queue is empty and all processes are done executing.
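The preemption behaviour can be sketched by simulating one time unit at a time (a hedged sketch; the process data is illustrative and borrowed from the SJF exercise in the question bank at the end of this chapter):

```python
# SRTF (preemptive SJF): at every time unit, run the arrived process with
# the smallest remaining burst time.
def srtf(processes):
    """processes: list of (name, arrival, burst); returns {name: (TAT, WT)}."""
    remaining = {name: bt for name, at, bt in processes}
    arrival = {name: at for name, at, bt in processes}
    burst = {name: bt for name, at, bt in processes}
    time, finished = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # CPU idles until the next arrival
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            tat = time - arrival[current]
            finished[current] = (tat, tat - burst[current])
    return finished

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
# → {'P2': (4, 0), 'P4': (7, 2), 'P1': (17, 9), 'P3': (24, 15)}
```

P1 starts first, is preempted at t=1 when P2 arrives with a shorter burst, and does not resume until both P2 and P4 have finished.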

4) Priority Scheduling Algorithm in OS


• This CPU scheduling algorithm first executes the jobs with higher priority.
• That is, the job with the highest priority gets executed first, followed by the job
with the second-highest priority, and so on.

Characteristics of Priority Scheduling Algorithm

• Jobs are scheduled on the basis of the priority level, in descending order.

• In the preemptive variant, if a job with a higher priority than the one currently
running arrives, the CPU preempts the current job in favor of the one with higher
priority.

• In the non-preemptive variant, the running job is allowed to complete first.



• Between two jobs with the same priority, FCFS order decides which job gets
executed first.

• The priority of a process can be set depending on multiple factors like memory
requirements, required CPU time, etc.

Advantages of Priority Scheduling Algorithm

• This process is simpler than most other scheduling algorithms in OS.

• Priorities help in sorting the incoming processes.

• Works well for static and dynamic environments.

Disadvantages of Priority Scheduling Algorithm

• It may lead to the starvation problem in jobs with low priority.

• The average turnaround and waiting time might be higher in comparison to other CPU
scheduling algorithms.

Example of Priority Scheduling Algorithm in OS-


• Here, different priorities are assigned to the incoming processes.


• The lower the number, the higher the priority.
• The first process to be executed is the second one, since it has a higher priority
than the first process.
• Then the fourth process gets its turn.
• This is known as priority scheduling.
• The calculated times may not be the lowest but it helps to prioritize important processes
over others.
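The selection rule above (lower number = higher priority, FCFS as tie-breaker) can be sketched as follows (process data is illustrative):

```python
# Non-preemptive priority scheduling: pick the highest-priority arrived
# job; earlier arrival breaks ties (FCFS).
def priority_schedule(processes):
    """processes: list of (name, arrival, burst, priority); returns run order."""
    remaining = list(processes)
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                        # CPU idles until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: (p[3], p[1]))  # (priority, arrival)
        remaining.remove(job)
        time += job[2]
        order.append(job[0])
    return order

print(priority_schedule([("P1", 0, 4, 2), ("P2", 0, 3, 1), ("P3", 1, 2, 3)]))
# → ['P2', 'P1', 'P3']
```

As in the example above, P2 runs first despite P1 arriving at the same time, because its priority number is lower (i.e. its priority is higher).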

5. Round Robin Scheduling Algorithm in OS

• In this scheduling algorithm, the OS defines a quantum time or a fixed time period.
• And every job is run cyclically for this predefined period of time, before being pre-
empted for the next job in the ready queue.
• The jobs that are pre-empted before completion go back to the ready queue to wait their
turn.
• It is also referred to as the preemptive version of the FCFS scheduling algorithm in OS.

• As seen from the figure, the scheduler executes the 3 incoming processes part by part.


Characteristics of RR Scheduling Algorithm

• Once a job begins running, it is executed for a predetermined time and gets pre-empted
after the time quantum is over.

• It is easy and simple to use or implement.

• The RR scheduling algorithm is one of the most commonly used CPU scheduling
algorithms in OS.

• It is a preemptive algorithm.

Advantages of RR Scheduling Algorithm

• This seems like a fair algorithm since all jobs get an equal share of CPU time.

• Does not lead to any starvation problems.

• New jobs are added at the end of the ready queue and do not interrupt the ongoing
process.

• Leads to efficient utilization of the CPU.

Disadvantages of RR Scheduling Algorithm

• Every time a job runs for the full quantum, a context switch happens. This adds
overhead time and ultimately increases the overall execution time.

• Too small a time slice may lead to low CPU output because of frequent context switching.

• Important tasks aren’t given priority.

• Choosing the correct time quantum is a difficult job.


Example of RR Scheduling Algorithm in OS

• Let's take a quantum time of 4 units.


• The first process will execute and get completed.
• After a gap of 1 unit, the second process executes for 4 units.
• Then the third one executes since it has also arrived in the ready queue.
• After 4 units, the fourth process executes.
• This process keeps going until all processes are done.
• It is worth noting that the average waiting time here is higher than with some of the
other algorithms.
• While this approach does result in a higher turnaround time, it is much more efficient in
multitasking environments in comparison to most other scheduling algorithms in OS.
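The cyclic execution with a fixed quantum can be sketched as below (a hedged sketch using a quantum of 4 units; the process data is illustrative):

```python
# Round Robin: each job runs for at most one quantum, then is preempted
# and sent to the back of the ready queue.
from collections import deque

def round_robin(processes, quantum=4):
    """processes: list of (name, arrival, burst); returns completion times."""
    pending = sorted(processes, key=lambda p: p[1])
    remaining = {name: burst for name, _, burst in processes}
    queue, time, completion = deque(), 0, {}
    while pending or queue:
        while pending and pending[0][1] <= time:   # admit arrivals
            queue.append(pending.pop(0)[0])
        if not queue:                              # CPU idles until next arrival
            time = pending[0][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while pending and pending[0][1] <= time:   # arrivals during this slice
            queue.append(pending.pop(0)[0])
        if remaining[name]:
            queue.append(name)                     # preempted: back of the queue
        else:
            completion[name] = time
    return completion

print(round_robin([("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2)]))
# → {'P2': 8, 'P3': 10, 'P1': 11}
```

Note that jobs arriving during a slice join the queue before the preempted job, matching the rule that new jobs go to the end of the ready queue and do not interrupt the running process.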

6) Multiple-level Queue (MLQ) Scheduling Algorithm


• The multiple-level queue scheduling approach calls for first dividing all the processes
in the ready queue into different classes on the basis of their scheduling needs.
• These lead to the creation of multiple queues, for example, queues of foreground jobs,
batch jobs, etc.


• The OS then treats these queues individually and executes them by running different
algorithms on them. Take a look at the diagram below.

• The figure above shows how processes are separated on the basis of priority,
depending on their type, and executed using different algorithms: here, for example,
FCFS, SJF, and Round Robin (RR).

Characteristics of MLQ Scheduling Algorithm

• Jobs with common characteristics are grouped into individual queues.

• Each queue is assigned a scheduling algorithm depending on specific needs.

• The queues are then given a priority level to decide which queue gets CPU time first.

Advantages of MLQ Scheduling Algorithm

• Uses a combination of algorithms to provide the best results.

• The overhead time is less.

Disadvantages of MLQ Scheduling Algorithm

• Once a job is assigned to a queue it cannot be changed. This may lead to inflexibility
and hence inefficiency.

• This may lead to a starvation problem if a high-priority queue never allows the
low-priority queues to take their turns.


• Not easy to implement.

• Sorting the jobs might become complex.
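The queue structure described above can be sketched simply in Python (the queue names, their priority order, and the jobs are illustrative; each queue could run its own algorithm, here plain FIFO for brevity):

```python
# Multilevel queue sketch: a fixed priority order among queues; the
# scheduler always serves the highest-priority non-empty queue first.
from collections import deque

queues = {
    "system":      deque(["sysd"]),
    "interactive": deque(["editor", "shell"]),
    "batch":       deque(["report"]),
}
priority_order = ["system", "interactive", "batch"]

def next_job():
    """Pick the next job from the highest-priority non-empty queue."""
    for level in priority_order:
        if queues[level]:
            return queues[level].popleft()
    return None

order = []
while (job := next_job()) is not None:
    order.append(job)
print(order)  # ['sysd', 'editor', 'shell', 'report']
```

Because a job stays in the queue it was assigned to, batch jobs here run only after every system and interactive job, which illustrates both the priority behaviour and the starvation risk noted above.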

➢ DEADLOCK
• A deadlock is a situation where a set of processes is blocked because each process is
holding a resource and waiting for another resource acquired by some other process.

How Does Deadlock occur in the Operating System?

• Before going into detail about how deadlock occurs in the Operating System, let’s first
discuss how the Operating System uses the resources present.
• A process in an operating system uses resources in the following way.

• Requests the resource

• Uses the resource

• Releases the resource

• A deadlock situation occurs in an operating system when two or more processes each
hold some resources while waiting for resources held by the others.
• For example, in the below diagram, Process 1 is holding Resource 1 and waiting for
resource 2 which is acquired by process 2, and process 2 is waiting for resource 1.


➢ Necessary Conditions for Deadlock in OS

1) Mutual Exclusion
• There should be a resource that can only be held by one process at a time.
• In the diagram below, there is a single instance of Resource 1 and it is held by Process
1 only.

2) Hold and Wait


• A process can hold multiple resources and still request more resources from other
processes which are holding them.
• In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is
requesting the Resource 1 which is held by Process 1.

3) No Pre-emption
• A resource cannot be preempted from a process by force.
• A process can only release a resource voluntarily.
• In the diagram below, Process 2 cannot preempt Resource 1 from Process 1.


• It will only be released when Process 1 relinquishes it voluntarily after its execution is
complete.

4) Circular Wait
• A process is waiting for the resource held by the second process, which is waiting for
the resource held by the third process and so on, till the last process is waiting for a
resource held by the first process.
• This forms a circular chain.
• For example: Process 1 is allocated Resource 2 and it is requesting Resource 1.
• Similarly, Process 2 is allocated Resource 1 and it is requesting Resource 2.
• This forms a circular wait loop.


➢ METHODS OF HANDLING DEADLOCKS

There are four approaches to dealing with deadlocks.

1. Deadlock Prevention

2. Deadlock avoidance (Banker's Algorithm)

3. Deadlock detection & recovery

4. Deadlock Ignorance (Ostrich Method)

1. Deadlock Prevention

• The strategy of deadlock prevention is to design the system in such a way that the
possibility of deadlock is excluded.
• The indirect methods prevent the occurrence of one of three necessary conditions of
deadlock i.e., mutual exclusion, no pre-emption, and hold and wait.
• The direct method prevents the occurrence of circular wait.
• Prevention techniques –
• Mutual exclusion – this condition cannot generally be denied, since some resources
are inherently non-sharable; it need not hold for sharable (e.g., read-only) resources.
• Hold and Wait – this condition can be prevented by requiring that a process request
all of its required resources at one time, blocking the process until all of its
requests can be granted simultaneously.
• But this prevention does not yield good results because:

• long waiting times are required

• resources allocated early may sit unused for long periods

• a process may not know all its required resources in advance


• No pre-emption – techniques for 'no pre-emption' are:

• If a process holding some resources requests another resource that cannot be
immediately allocated to it, it must release all resources currently held and, if
necessary, request them again together with the additional resource.

• If a process requests a resource that is currently held by another process, the OS may
pre-empt the second process and require it to release its resources. This works only if
both processes do not have the same priority.

• Circular wait – one way to ensure that this condition never holds is to impose a
total ordering of all resource types and require that each process requests resources
in increasing order of enumeration, i.e., if a process has been allocated resources
of type R, it may subsequently request only resources of types that follow R in the
ordering.
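The resource-ordering idea can be sketched with Python threading locks (a sketch, not a full prevention scheme; the lock count and ordering are illustrative):

```python
# Circular-wait prevention by total resource ordering: every process must
# acquire locks in increasing index order, so a cycle of waits cannot form.
import threading

LOCKS = [threading.Lock() for _ in range(3)]   # global ordering: 0 < 1 < 2

def acquire_in_order(indices):
    """Acquire the requested locks in ascending index order."""
    ordered = sorted(indices)                  # enforce the total order
    for i in ordered:
        LOCKS[i].acquire()
    return ordered

def release(held):
    for i in reversed(held):                   # release in reverse order
        LOCKS[i].release()

# A process that "needs" locks 2 and 0 still acquires them as 0, then 2.
held = acquire_in_order([2, 0])
release(held)
```

Because every process climbs the same ordering, no process can hold a higher-numbered lock while waiting on a lower-numbered one, which rules out the circular chain described above.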

2. Deadlock Avoidance

• The deadlock avoidance Algorithm works by proactively looking for potential deadlock
situations before they occur.
• It does this by tracking the resource usage of each process and identifying conflicts that
could potentially lead to a deadlock.
• If a potential deadlock is identified, the algorithm will take steps to resolve the conflict,
such as rolling back one of the processes or pre-emptively allocating resources to other
processes.
• The Deadlock Avoidance Algorithm is designed to minimize the chances of a deadlock
occurring, although it cannot guarantee that a deadlock will never occur.
• This approach allows the three necessary conditions of deadlock but makes judicious
choices to assure that the deadlock point is never reached.
• It allows more concurrency than deadlock prevention.
• A decision is made dynamically whether the current resource allocation request will, if
granted, potentially lead to deadlock.

• It requires knowledge of future process requests. Two techniques to avoid deadlock:

1. Process initiation denial

2. Resource allocation denial

Advantages

• Not necessary to pre-empt and rollback processes

• Less restrictive than deadlock prevention

Disadvantages

• Future resource requirements must be known in advance

• Processes can be blocked for long periods

• A fixed number of resources must exist for allocation

➢ Banker’s Algorithm
• The Banker’s Algorithm can be motivated by resource allocation graphs.
• A resource allocation graph is a directed graph whose nodes represent processes and
resources, and whose edges represent resource requests and assignments.
• The state of the system is represented by the current allocation of resources among
processes.
• For example, if the system has three processes A, B, and C, each using two resources,
the processes and resources are the nodes of the graph, and assignment edges connect
each resource to the process using it.
• The Banker’s Algorithm works by analyzing the state of the system and determining if
it is in a safe state or at risk of entering a deadlock.
• To determine whether a system is in a safe state, the Banker’s Algorithm uses an
available vector and two matrices: the allocation matrix and the need matrix.
• The available vector contains the amount of each resource currently free; the
allocation matrix records what each process currently holds; and the need matrix
contains the amount of each resource each process may still require.
• The Banker’s Algorithm then checks whether every process can still run to completion
without overloading the system.
• It does this by finding an order in which each process’s remaining need can be
satisfied from the available resources; when a process finishes, its allocation is
added back to the available pool.
• If such an order exists, the state is safe and the request is allowed to proceed;
otherwise, the process is blocked until more resources become available.
• The Banker’s Algorithm is an effective way to prevent deadlocks in multiprogramming
systems.
• It is used in many operating systems, including Windows and Linux.
• In addition, it is used in many other types of systems, such as manufacturing systems
and banking systems.
• The Banker’s Algorithm is a powerful tool for resource allocation problems, but it is not
foolproof.
• It can be fooled by processes that consume more resources than they need, or by
processes that produce more resources than they need.
• Also, it can be fooled by processes that consume resources in an unpredictable manner.
• To prevent these types of problems, it is important to carefully monitor the system to
ensure that it is in a safe state.
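The safety check at the heart of the Banker's Algorithm can be sketched as follows (a hedged sketch; the matrices are illustrative textbook-style numbers, not from any particular system):

```python
# Banker's safety check: the state is safe if some ordering of processes
# lets each finish using the available resources plus what earlier
# finishers release.
def is_safe(available, allocation, need):
    work = list(available)
    finish = [False] * len(allocation)
    safe_sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                # process i can finish; it releases its allocation
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                safe_sequence.append(i)
                progressed = True
    return all(finish), safe_sequence

# Five processes, three resource types (illustrative numbers):
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, seq = is_safe(available, allocation, need)
print(safe, seq)  # True [1, 3, 4, 0, 2]
```

Here P1's need fits the available vector first; finishing it frees enough resources for P3, then P4, and eventually all processes, so the state is safe.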

3. Deadlock Detection

• Deadlock detection employs an algorithm that tracks circular waiting and kills one
or more processes so that the deadlock is removed.
• The system state is examined periodically to determine if a set of processes is
deadlocked.
• A deadlock is resolved by aborting and restarting a process, relinquishing all the
resources that the process held.

• This technique does not limit resource access or restrict process action.

• Requested resources are granted to processes whenever possible.

• It never delays the process initiation and facilitates online handling.

• The disadvantage is the inherent pre-emption losses.
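The circular-waiting check described above can be sketched as cycle detection on a wait-for graph (a sketch; the graphs below are illustrative two-process examples):

```python
# Deadlock detection sketch: build a wait-for graph (process -> processes
# it waits on) and search for a cycle with depth-first traversal.
def has_cycle(wait_for):
    """wait_for: dict mapping each process to the processes it waits for."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / in progress / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits on P2 and P2 waits on P1: a deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True
print(has_cycle({"P1": ["P2"], "P2": []}))       # False
```

If a cycle is found, the system would then pick a victim process on the cycle to abort, as the recovery step above describes.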

4. Deadlock Ignorance

• In the deadlock ignorance method, the OS behaves as if deadlock can never occur and
simply ignores it even when it does.
• This method applies only if deadlocks occur very rarely.
• The algorithm is very simple.
• It says, “if the deadlock occurs, simply reboot the system and act like the deadlock never
occurred.”
• That’s why the algorithm is called the Ostrich Algorithm.

Advantages

• Ostrich Algorithm is relatively easy to implement and is effective in most cases.

• It avoids the cost of deadlock handling by simply ignoring deadlocks.

Disadvantages

• Ostrich Algorithm does not provide any information about the deadlock situation.

• It can lead to reduced performance of the system as the system may be blocked for a
long time.


PREVIOUS YEAR QUESTION

WINTER 23
1) Explain following terms with respect to scheduling
i) CPU utilization
ii) Throughput
iii) Turnaround time
iv) Waiting time
2) What is deadlock? Discuss any one method of deadlock prevention.
3) Explain working of CPU switch from process to process with neat labelled diagram.
4) Solve given problem by using FCFS scheduling algorithm. Draw correct Gantt chart and
calculate average waiting time and average turnaround time –
Process Arrival time Burst time
P0 0 10
P1 1 29
P2 2 3
P3 3 7
P4 4 12

5) Which hole is taken for the next segment request of 8 KB in a swapping system using
First fit, Best fit, and Worst fit?
Memory map (partitions in order): OS | 4 KB | 9 KB | 20 KB | 16 KB | 8 KB | 2 KB | 6 KB
6) Show how pre-emptive scheduling is better than non-pre-emptive scheduling by solving
the following problem using SJF (solve it using both pre-emptive and non-pre-emptive
SJF).
Process Arrival time Burst time
P1 0 8
P2 1 4
P3 2 9
P4 3 5

SUMMER 23

1) Write the difference between pre-emptive and non-preemptive scheduling.


2) Define Deadlock.
3) State and explain four scheduling criteria.
4) Explain different types of schedulers.
5) Describe any four condition for deadlock.
6) With neat diagram explain multilevel queue scheduling.
7) Consider the four processes P1, P2, P3 and P4 with the lengths of CPU burst time
given. Find the average waiting time and average turnaround time for the following
algorithms.
i) FCFS ii) RR (Slice = 4 ms) iii) SJF
