
AL3452 – Operating Systems Unit 2 Mailam Engineering College

UNIT II PROCESS MANAGEMENT

Processes - Process Concept - Process Scheduling - Operations on Processes - Inter-process Communication; CPU Scheduling - Scheduling criteria - Scheduling algorithms; Threads - Multithread Models - Threading issues; Process Synchronization - The Critical-Section problem - Synchronization hardware - Semaphores - Mutex - Classical problems of synchronization - Monitors; Deadlock - Methods for handling deadlocks, Deadlock prevention, Deadlock avoidance, Deadlock detection, Recovery from deadlock.
PART A
1. Give a programming example in which multithreading does not provide
better performance than a single threaded solution.
• Any kind of sequential program is not a good candidate to be threaded.
• An example of this is a program that calculates an individual tax return.
• Another example is a “shell” program such as the C-shell or Korn shell. Such a
program must closely monitor its own working space such as open files,
environment variables, and current working directory.

2. What is the meaning of the term busy waiting?


• Busy waiting means that a process is waiting for a condition to be satisfied in
a tight loop without relinquishing the processor.
• Alternatively, a process could wait by relinquishing the processor, and block
on a condition and wait to be awakened at some appropriate time in the
future.
• Busy waiting can be avoided, but avoiding it incurs the overhead associated with putting a process to sleep and having to wake it up when the appropriate program state is reached.

3. Can a multithreaded solution using multiple user-level threads achieve better
performance on a multiprocessor system than on a single processor system?
• A multithreaded system comprising multiple user-level threads cannot make use of the different processors in a multiprocessor system simultaneously.


• The operating system sees only a single process and will not schedule the
different threads of the process on separate processors.
• Consequently, there is no performance benefit associated with executing
multiple user-level threads on a multiprocessor system.

4. Name and draw five different process states with proper definition. (or) Draw
a diagram to show the different process states. (NOV/DEC 2024)
Process – A process is a program in execution
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.

5. Elucidate mutex locks with its procedure.


• Operating-systems designers build software tools to solve the critical-section
problem.
• The simplest of these tools is the mutex lock (mutual exclusion).
• We use the mutex lock to protect critical regions and thus prevent race conditions.
• That is, a process must acquire the lock before entering a critical section; it
releases the lock when it exits the critical section.


• The acquire() function acquires the lock, and the release() function releases the lock.
6. “Priority inversion is a condition that occurs in real time systems where a
lower priority access is starved because higher priority processes have gained
hold of the CPU. “. Comment on this statement.(April/May 2024)
• Priority inversion is a problem that occurs in concurrent processes when low-priority threads hold shared resources required by high-priority threads, causing the high-priority threads to block.
• This problem is magnified when the concurrent processes are in a real-time system, where high-priority threads must be served on time.
• A common remedy is the priority-inheritance protocol: the low-priority thread holding the resource temporarily inherits the higher priority until it releases the resource, after which the high-priority thread can proceed.

7. Differentiate single threaded and multithreaded processes.


User Level Threads or Single Thread
• User level threads are managed by a user level library
• In this case, the kernel may not favor a process that has many threads.
• User level threads are typically fast.
• They are a good choice for non-blocking tasks; otherwise the entire process will block if any of its threads blocks.
Kernel Level Threads or Multithread
• Kernel level threads are managed by the OS; therefore, thread operations (e.g., scheduling) are implemented in the kernel code.
• This means kernel level threads may favor thread heavy processes.
• If one thread blocks it does not cause the entire process to block.
• They are slower than user level threads due to the management overhead.


8. What is the difference between user level instruction and privileged


instructions? Which of the following instructions should be privileged and only
allowed to execute in kernel mode?
a) Load a value from a memory address to a general purpose register
b) Set a new value in a program counter register.
c) Turn off interrupts
Ans:
a) Load a value from a memory address to a general purpose register: user mode.
b) Set a new value in the program counter register: user mode (an ordinary jump or branch does this).
c) Turn off interrupts: privileged, allowed only in kernel mode.
User mode permits regular instructions and access to user memory.
Kernel (privileged) mode additionally permits privileged instructions and access to kernel memory.

9. Define a process.
A process is a program in execution. It is an active entity and it includes the
process stack, containing temporary data and the data section contains global
variables.
10. What is process control block?
Each process is represented in the OS by a process control block. It contains
many pieces of information associated with a specific process.


11. What is zombie process?


A process that has terminated, but whose parent has not yet called wait(), is known as a zombie process.

12. What are the benefits of threads?


• Responsiveness.
• Resource sharing.
• Economy.
• Scalability

13. What do you mean by multicore or multiprocessor system?


• A common system design is to place multiple computing cores on a single chip.
• Each core appears as a separate processor to the operating system.
• Whether the cores appear across CPU chips or within CPU chips, we call
these systems multicore or multiprocessor systems.
14. What are the benefits of multithreaded programming?
The benefits of multithreaded programming can be broken down into four
major categories:
• Responsiveness
• Resource sharing
• Economy
• Utilization of multiprocessor architectures
15. What is critical section problem?
• Consider a system consisting of 'n' processes. Each process has a segment of code called a critical section, in which the process may be changing common variables, updating a table, or writing a file.
• When one process is executing in its critical section, no other process is allowed to execute in its critical section.
16. Define busy waiting and spinlock.
• When a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code. This is called as busy
waiting.
• Spinlock - the process spins while waiting for the lock.


17. What do you meant by semaphore?


A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations:
1. wait() - operation was originally termed P (from the Dutch proberen, “to test”);
2. signal() - was originally called V (from verhogen, “to increment”).

18. What are the four circumstances in CPU scheduling decisions?


1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates

19. Define race condition.


• When several processes access and manipulate the same data concurrently, the outcome of the execution depends on the particular order in which the accesses take place; this is called a race condition.
• To avoid a race condition, only one process at a time may manipulate the shared variable.

20. Define deadlock.


• A process requests resources; if the resources are not available at that time,
the process enters a wait state.
• Waiting processes may never again change state, because the resources they
have requested are held by other waiting processes. This situation is called a
deadlock.

21. What are a safe state and an unsafe state?


• A state is safe if the system can allocate resources to each process in some
order and still avoid a deadlock. A system is in safe state only if there exists a
safe sequence.
• If no such sequence exists, then the system state is said to be unsafe.


22. What is Amdahl’s law?


Amdahl’s Law is a formula that identifies potential performance gains
from adding additional computing cores to an application that has both serial
(nonparallel) and parallel components.

23. What are the benefits of multithreads?


1. Responsiveness - One thread may provide rapid response while other
threads are
blocked or slowed down doing intensive calculations.
2. Resource sharing - By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
3. Economy - Creating and managing threads (and context switches between
them)
is much faster than performing the same tasks for processes.
4. Scalability, i.e. utilization of multiprocessor architectures - A single threaded process can only run on one CPU, no matter how many may be available, whereas the execution of a multi-threaded application may be split among available processors.

24. Give the necessary conditions for deadlock to occur?


Mutual exclusion:
• At least one resource must be held in a non sharable mode.
• That is only one process at a time can use the resource.
• If another process requests that resource, the requesting process must
be delayed until the resource has been released.
Hold and wait:
• A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
No preemption:
• Resources cannot be preempted.
Circular wait:
• P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.


25. Can multiple user level threads achieve better performance on a


multiprocessor system than a single processor system? Justify your answer.

• A process has multiple tasks.


• When one task may block, it is desirable to allow the other tasks to proceed without blocking.
• For example, in a word processor, a background thread may check spelling and grammar while a foreground thread processes user input (keystrokes), a third thread loads images from the hard drive, and a fourth does periodic automatic backups of the file being edited.

26. Write the four situations under which CPU scheduling decisions take
place.
CPU scheduling decisions may take place under the following four
circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.

27. What is a critical region? How do they relate to controlling access to


shared resources?
• A critical region is a section of code in which a shared resource is
accessed.
• To control access to the shared resource, access to the critical region of code is controlled. By controlling the code that accesses the resource, we control access to the resource.
28. What is the producer consumer problem? Give an example of its
occurrence in operating systems.
• The producer consumer problem is a classic concurrency problem. It arises
when a process is producing some data, the producer, and another process
is using that data, the consumer.


• An example in an operating system would be interfacing with a network


device. The network device produces data at a certain rate and places it on
a buffer, the operating system then consumes the data at another rate.

29. What are monitors and condition variables?


• Monitors are high level synchronization primitives that encapsulate data,
variables and operations.
• A condition variable is a variable placed inside a monitor that allows processes to wait inside the monitor until a special event has occurred.

30. What is a deadlock? What is starvation? How do they differ from each
other? (April/May 2024)
• A deadlock is a situation where a number of processes cannot proceed without a resource that another process holds, while each is also unable to release the resources it is currently holding. A deadlock ensures that none of the processes involved can proceed.
• Starvation is a situation where a single process is never given a resource or event that it requires to proceed; this includes CPU time.
• They differ in that a deadlock ensures that no process involved can proceed, whereas in starvation only a single process fails to proceed.

31. Define Semaphore.


• A semaphore is an integer variable, shared among multiple processes. The
main aim of using a semaphore is process synchronization and access control
for a common resource in a concurrent environment.

32. Define MUTEX.


• A mutex lock is essentially a variable of binary nature that provides code-wise functionality for mutual exclusion. At times, multiple threads may be trying to access the same resource, like memory or I/O.
• To make sure that there is no overriding, the mutex provides a locking mechanism.


• Only one thread at a time can take the ownership of a mutex and apply
the lock.
• Once it is done utilizing the resource, it releases the mutex lock.

33. How deadlocks can be avoided?


A deadlock can occur when the first process locks the first resource while the second process locks the second resource, and each then requests the resource the other holds. Such a deadlock can be resolved by cancelling and restarting one of the processes; more generally, deadlock avoidance grants a resource request only if it leaves the system in a safe state.

34. List out the benefits and challenges of thread handling.


• Enhanced performance by decreased development time.
• Simplified and streamlined program coding.
• Improved GUI responsiveness.
• Simultaneous and parallelized occurrence of tasks.
• Better use of cache storage by utilization of resources.
• Decreased cost of maintenance.

35. What is external fragmentation? (NOV/DEC 2024)


External fragmentation in computer memory management refers to a situation where there is enough free memory available in total, but it is scattered across numerous small, non-contiguous blocks. This makes it difficult to allocate a large contiguous block of memory to a new process even though the total free memory is sufficient; essentially, the available memory is fragmented into unusable pieces despite there being enough space overall.


PART B
1. Explain in detail about process concept. (or) Explain the seven-state model of
a process and also explain the queuing diagram for the same. Is it possible to have
more than one blocked queue, if so can a process reside in more than one queue?
(April/May 2024)
Process Concept
• Process is a program in execution. A process is the unit of work in a
modern time-sharing system.
• Even on a single-user system, a user may be able to run several programs at one time: a word processor, a Web browser, and an e-mail package.
• A process also includes the program counter, processor's registers, stack, data section, and heap; refer figure 2.1.

Fig.2.1 Process in memory


• A program is a passive entity; a process is an active entity.
Process States
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting or Blocked: The process is waiting for some event to occur (such as
an I/O completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
• Suspended Ready: The process is temporarily moved from the ready queue to
secondary storage (like a hard disk) but is still considered ready to execute
when needed.
• Suspended Blocked: A process is waiting for an event while being suspended
in secondary storage. Refer figure 2.2.


Fig.2.2 Process State


Process Control Block (or) Task Control Block
• It contains many pieces of information associated with a specific
process, including these as shown figure 2.3.

Fig.2.3 Process Control Block


• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this process.
• CPU registers. These include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information.


• CPU-scheduling information. This information includes a process


priority, pointers to scheduling queues, and any other scheduling
parameters.
• Memory-management information. This information may include
such items as the value of the base and limit registers and the page
tables, or the segment tables, depending on the memory system used by
the operating system.
• Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O
devices allocated to the process, a list of open files, and so on.

2. Describe the difference among short term, medium term and long term
scheduling with suitable example.
Process Scheduling
• The objective of multiprogramming is to have some process running
at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes
so frequently that users can interact with each program while it is
running.
• To meet these objectives, the process scheduler selects an available
process (possibly from a set of several available processes) for
program execution on the CPU.
Scheduling Queues
• As processes enter the system, they are put into a job queue, which
consists of all processes in the system.
• The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called the ready queue.
• The list of processes waiting for a particular I/O device is called a
device queue. Each device has its own device queue. Refer figure
2.4.


Fig.2.4 The ready queue and various I/O device queues


Schedulers
• The long-term scheduler, or job scheduler, selects processes from the
job pool and loads them into memory for execution.
• The short-term scheduler, or CPU scheduler, selects from among the
processes that are ready to execute and allocates the CPU to one of
them. Refer figure 2.5.

Fig.2.5 Medium term scheduling to the queuing diagram

3. Explain in detail about operations of process.


Operations
1. Process creation
2. Termination
1. Process Creation
• During the course of execution, a process may create several new
processes.
• The creating process is called a parent process, and the new
processes are called the children of that process.
When a process creates a new process, two possibilities for execution:


1. The parent continues to execute concurrently with its children.


2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process (it has the same
program and data as the parent).
2. The child process has a new program loaded into it.
• A new process is created by the fork() system call. The new process
consists of a copy of the address space of the original process.
• The exec () system call loads a binary file into memory and starts its
execution as shown in figure 2.6.

Fig.2.6 process creation using the fork() system call


Program to create a separate process using the UNIX fork() system call:
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    pid_t pid;

    /* fork a child process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        return 1;
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
    }
    return 0;
}
2. Process Termination
• A process terminates when it finishes executing its final statement and
asks the operating system to delete it by using the exit() system call.
A parent may terminate the execution of one of its children for a variety of
reasons, such as these:
1. The child has exceeded its usage of some of the resources that it has
been allocated. (To determine whether this has occurred, the parent
must have a mechanism to inspect the state of its children.)
2. The task assigned to the child is no longer required.
3. The parent is exiting, and the operating system does not allow a child to
continue if its parent terminates.
Some systems do not allow a child to exist if its parent has terminated.
1. If a process terminates (either normally or abnormally), then all its
children must also be terminated. This phenomenon, referred to as
cascading termination, is normally initiated by the operating system.
2. A parent process may wait for the termination of a child process by
using the wait() system call.
3. When a process terminates, its resources are de - allocated by the
operating system.
4. A process that has terminated, but whose parent has not yet called
wait(), is known as a zombie process.
5. If a parent did not invoke wait() and instead terminated, it thereby leaves its child processes as orphans.


4. Explain in detail about Inter Process Communication.


o Processes executing concurrently in the operating system may be either
independent processes or cooperating processes.
o Cooperating processes executing concurrently exchange data through Inter Process Communication (IPC).
o There are several reasons for providing an environment that allows process
cooperation:
• Information sharing
Several users access the information concurrently
• Computation speedup
If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the
others
• Modularity
Dividing the system functions into separate processes or threads
• Convenience
Even an individual user may work on many tasks at the same
time
There are two fundamental models of Inter Process Communication:
1. Shared Memory
2. Message Passing
1. In the shared-memory model, a region of memory that is shared by
cooperating processes is established. Processes can then exchange information
by reading and writing data to the shared region.
2. In the message-passing model, communication takes place by means of
messages exchanged between the cooperating processes. Refer figure 2.7.

Fig. 2.7 Communications models. (a) Message passing. (b) Shared


memory.


1. Shared-Memory Systems
• Inter Process Communication using shared memory requires
communicating processes to establish a region of shared memory.
• A producer process produces information that is consumed by a
consumer process.
• One solution to the producer–consumer problem uses shared memory.
• Two types of buffers can be used.
• The unbounded buffer places no practical limit on the size of the buffer.
• The bounded buffer assumes a fixed buffer size.
2. Message-Passing Systems
• Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same
address space.
• A message-passing facility provides at least two operations:
• send(message)
• receive(message)
Here are several methods for send()/receive() operations:
• Naming
• Synchronization
• Buffering
1. Naming:
• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering
Direct and Indirect Communication
In direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication.
In this scheme, the send() and receive() primitives are defined as:
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.
A direct communication link in this scheme has the following properties:
• Symmetric addressing - both the sender process and the receiver process must name the other to communicate.


• Asymmetric addressing - only the sender names the recipient; the recipient is not required to name the sender.
• send(P, message)—Send a message to process P.
• receive(id, message)—Receive a message from any process; the variable id is set to the name of the sender.
The disadvantage in both of these schemes (symmetric and asymmetric) is the limited modularity of the resulting process definitions.
With indirect communication, the messages are sent to and received from
mailboxes, or ports.
• A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed.
• Each mailbox has a unique identification.
The send() and receive() primitives are defined as follows:
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
• A link is established between a pair of processes only if both members’
of the pair has a shared mailbox.
• A link may be associated with more than two processes.
• Between each pair of communicating processes, a number of different
links may exist, with each link corresponding to one mailbox.
2. Synchronization
• Communication between processes takes place through calls to send()
and receive() primitives.
• Blocking send. The sending process is blocked until the message is
received by the receiving process or by the mailbox.
• Nonblocking send. The sending process sends the message and
resumes operation.
• Blocking receive. The receiver blocks until a message is available.
• Nonblocking receive. The receiver retrieves either a valid message or a
null.
3. Buffering
• Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue.
Such queues can be implemented in three ways:


• Zero capacity. The queue has a maximum length of zero;


• Bounded capacity. The queue has finite length n;
• Unbounded capacity. The queue’s length is potentially infinite;
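
As a concrete illustration of the message-passing model, the sketch below (an illustrative example, not from the syllabus text) uses a UNIX pipe between a parent and a child process: write() acts as a send and the blocking read() as a receive.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                    /* fd[0]: read end, fd[1]: write end */
    char buf[32];

    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {            /* child acts as the receiver */
        close(fd[1]);             /* close the unused write end */
        read(fd[0], buf, sizeof(buf));   /* blocking receive */
        printf("received: %s\n", buf);
        return 0;
    }

    close(fd[0]);                 /* parent acts as the sender */
    write(fd[1], "hello", 6);     /* send a six-byte message */
    close(fd[1]);
    wait(NULL);
    return 0;
}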

5. Explain in detail about threads.


Thread Overview:
• A thread is a basic unit of CPU utilization.
• It comprises a thread ID, a program counter, a register set, and a stack.
• Figure 2.8 shows the difference between single threaded and multithreaded
processes.

Fig. 2.8 Single threaded and multithreaded processes


Motivation
• Most software applications that run on modern computers are multithreaded.
• A web browser might have one thread display images or text while another
thread retrieves data from the network.
• For example: A word processor may have a thread for displaying graphics,
another thread for responding to keystrokes from the user, and a third thread
for performing spelling and grammar checking in the background.
Benefits
1. Responsiveness.
• Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy
operation, thereby increasing responsiveness to the user.
2. Resource sharing.
• Processes can only share resources through techniques such as shared
memory and message passing.


• The benefit of sharing code and data is that it allows an application to


have several different threads of activity within the same address space.

3. Economy.
• Allocating memory and resources for process creation is costly.
• Because threads share the resources of the process to which they
belong, it is more economical to create and context-switch threads.
4. Scalability.
• The benefits of multithreading can be even greater in a multiprocessor
architecture, where threads may be running in parallel on different
processing cores.
• A single-threaded process can run on only one processor, regardless of how many are available.
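
A minimal illustrative sketch of a multithreaded process using POSIX threads (the spell_check worker name is hypothetical, echoing the word-processor example above):

#include <stdio.h>
#include <pthread.h>

/* worker executed by the second thread */
static void *spell_check(void *arg)
{
    printf("checking spelling in the background\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* create a thread sharing this process's address space */
    pthread_create(&tid, NULL, spell_check, NULL);

    printf("main thread continues handling user input\n");

    /* wait for the worker to finish */
    pthread_join(tid, NULL);
    return 0;
}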

6. Explain in detail about multicore programming.


Multicore Programming
• As computer systems needed more computing performance, single-CPU systems evolved into multi-CPU systems.
• A related design places multiple computing cores on a single chip.
• Each core appears as a separate processor to the operating system. Refer figure
2.9.
• Whether the cores appear across CPU chips or within CPU chips, we call these
systems multicore or multiprocessor systems. Refer figure 2.10.

Fig. 2.9 concurrent execution on a single-core system

Fig.2.10. parallel execution on a multicore system


• A system is parallel if it can perform more than one task simultaneously.
AMDAHL’S LAW


• Amdahl’s Law is a formula that identifies potential performance gains from


adding additional computing cores to an application that has both serial
(nonparallel) and parallel components.
• If S is the portion of the application that must be performed serially on a
system with N processing cores, the formula appears as follows:
speedup ≤ 1 / (S + (1 − S)/N)
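For example, assuming an application that is 25 percent serial (S = 0.25) running on N = 4 cores, speedup ≤ 1 / (0.25 + 0.75/4) ≈ 2.29, so even quadrupling the cores yields well under a 4x gain.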
Programming Challenges
• Designers of operating systems must write scheduling algorithms that use multiple processing cores to allow the parallel execution shown in figure 2.10.
In general, five areas present challenges in programming for multicore systems:
➢ Identifying tasks
• This involves examining applications to find areas that can be divided into separate, concurrent tasks.
• Ideally, tasks are independent of one another and thus can run in
parallel on individual cores.
➢ Balance
• While identifying tasks that can run in parallel, programmers must also
ensure that the tasks perform equal work of equal value.
➢ Data splitting
• Just as applications are divided into separate tasks, the data accessed
and manipulated by the tasks must be divided to run on separate cores.
➢ Data dependency
• The data accessed by the tasks must be examined for dependencies
between two or more tasks.
• When one task depends on data from another, programmers must
ensure that the execution of the tasks is synchronized to accommodate
the data dependency.
➢ Testing and debugging
• When a program is running in parallel on multiple cores, many different
execution paths are possible.
• Testing and debugging such concurrent programs is inherently more
difficult than testing and debugging single-threaded applications.


Types of Parallelism
• In general, there are two types of parallelism:
1. Data parallelism
2. Task parallelism.
• Data parallelism - distributing subsets of the same data across multiple computing cores and performing the same operation on each; the two threads would be running in parallel on separate computing cores.
• Task parallelism - involves distributing not data but tasks (threads)
across multiple computing cores. Each thread is performing a unique
operation.
7. Explain in detail about multithreading models.
Multithreading Models
• Support for threads may be provided either at the user level, for user threads, or
by the kernel, for kernel threads.
• User threads are supported above the kernel and are managed without kernel
support, whereas kernel threads are supported and managed directly by the
operating system.
• Three common ways of establishing such a relationship:
o many-to-one model
o one-to-one model
o many-to-many model
Many-to-One Model
• The many-to-one model maps many user-level threads to one kernel thread.
• Thread management is done by the thread library in user space, so it is efficient.
• However, the entire process will block if a thread makes a blocking system call.
• Although the many-to-one model allows the developer to create as many user threads as desired, it does not result in true concurrency, because the kernel can schedule only one thread at a time. Refer figure 2.11.

Fig.2.11 Many to One Model


One-to-One Model
• The one-to-one model maps each user thread to a kernel thread.
• It also allows multiple threads to run in parallel on multiprocessors.
• The only drawback to this model is that creating a user thread requires creating
the corresponding kernel thread.
• The one-to-one model allows greater concurrency, but the developer has to be
careful not to create too many threads within an application. Refer figure 2.12.

Fig.2.12. One to One Model


Many-to-Many Model
• The many-to-many model multiplexes many user-level threads to a smaller or
equal number of kernel threads as shown in figure 2.13.

Fig.2.13 Many to Many Model


Two level model
• One variation on the many-to-many model still multiplexes many user level
threads to a smaller or equal number of kernel threads but also allows a user-level
thread to be bound to a kernel thread as shown in figure 2.14.
• This variation is sometimes referred to as the two-level model.


Fig.2.14 Two level model

Threading issues:
• fork() and exec() system calls
• Thread cancellation
• Signal handling
• Thread pools
• Thread specific data
fork() and exec() system calls:
• A fork() system call may duplicate all threads or duplicate only the thread that
invoked fork().
• If a thread invokes the exec() system call, the program specified in the parameter to exec() will replace the entire process.
Thread cancellation:
• It is the task of terminating a thread before it has completed.
• A thread that is to be cancelled is called a target thread.
• There are two types of cancellation namely
1. Asynchronous Cancellation – One thread immediately terminates the target
thread.
2. Deferred Cancellation – The target thread can periodically check if it should
terminate, and does so in an orderly fashion.
Signal handling:
1. A signal is used to notify a process that a particular event has occurred.
2. A generated signal is delivered to the process.
• Deliver the signal to the thread to which the signal applies.
• Deliver the signal to every thread in the process.


• Deliver the signal to certain threads in the process.


• Assign a specific thread to receive all signals for the process.
3. Once delivered the signal must be handled.
Signal is handled by
i. A default signal handler
ii. A user defined signal handler
Thread pools:
• Creation of unlimited threads exhausts system resources such as CPU
time or memory. Hence we use a thread pool.
• In a thread pool, a number of threads are created at process startup and
placed in the pool.
• When there is a need for a thread, the process picks a thread from the pool and assigns it a task.
• After completion of the task, the thread is returned to the pool.
Thread specific data
• Threads belonging to a process share the data of the process. However
each thread might need its own copy of certain data known as thread-
specific data.

8. Explain in detail about SMP (Symmetric Multi Processor) management.


• One approach to CPU scheduling in a multiprocessor system has all
scheduling decisions, I/O processing, and other system activities
handled by a single processor—the master server.
• The other processors execute only user code.
• This asymmetric multiprocessing is simple because only one
processor accesses the system data structures, reducing the need for
data sharing.
• A second approach uses Symmetric Multi Processing (SMP), where
each processor is self-scheduling.
• All processes may be in a common ready queue, or each processor
may have its own private queue of ready processes.
• Regardless, scheduling proceeds by having the scheduler for each
processor examine the ready queue and select a process to execute.


• If we have multiple processors trying to access and update a


common data structure, the scheduler must be programmed
carefully.
• We must ensure that two separate processors do not choose to
schedule the same process and that processes are not lost from the
queue.

9. Explain in detail about process synchronization and explain critical


section problem.
Process Synchronization
• Concurrent access to shared data may result in data inconsistency.
• Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes.
• The shared-memory solution to the bounded-buffer problem allows at most n − 1 items in the buffer at the same time. A solution where all n buffers are used is not simple.
• Suppose that we modify the producer-consumer code by adding a
variable counter, initialized to 0 and increment it each time a new
item is added to the buffer.
• Race condition: the situation where several processes access and
manipulate shared data concurrently. The final value of the shared
data depends upon which process finishes last.
• To prevent race conditions, concurrent processes must be
synchronized.
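
A minimal sketch of such a race, using POSIX threads (illustrative code, not part of the syllabus text): two threads increment a shared counter without synchronization, and because counter++ compiles to a load, an add, and a store, concurrent updates can be lost.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                  /* shared, unprotected */

static void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++)
        counter++;                /* load, add 1, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* often prints less than 200000 because updates are lost */
    printf("counter = %d\n", counter);
    return 0;
}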
Critical-Section Problem
• Consider a system consisting of n processes{P0, P1, ...,Pn−1}.
• Each process has a segment of code, called a critical section, in
which the process may be changing common variables, updating a
table, writing a file, and so on.
• The important feature of the system is that, when one process is
executing in its critical section, no other process is allowed to
execute in its critical section.
• That is, no two processes are executing in their critical sections at
the same time.


• The critical-section problem is to design a protocol that the


processes can use to cooperate. Each process must request
permission to enter its critical section.
• The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section.
• The remaining code is the remainder section.

The general structure of a typical process Pi


do
{
entry section
critical section
exit section
remainder section
} while (true);
A solution to the critical-section problem must satisfy the following three
requirements:
1. Mutual exclusion.
• If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections.
2. Progress.
• If no process is executing in its critical section and some processes wish
to enter their critical sections, then only those processes that are not
executing in their remainder sections can participate in deciding which
will enter its critical section next, and this selection cannot be
postponed indefinitely.
3. Bounded waiting.
• There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.


10. Define Critical Section, critical resource. Explain the need for enforcing
mutual exclusion using echo function as an example in both uniprocessor
and multiprocessor environment. (April/May 2024)
Critical Section: A critical section is a part of a program that accesses
shared resources, such as memory, files, or variables, which must not be
accessed by more than one process or thread at a time to prevent race conditions.
Critical Resource: A critical resource is a resource that multiple processes or
threads need to access, but only one can use at a time to maintain data
consistency and avoid conflicts.
Need for Enforcing Mutual Exclusion
Mutual exclusion ensures that when one process is executing in its
critical section, no other process can enter its critical section simultaneously.
This is necessary to prevent data corruption, inconsistent states, and race
conditions.
Example: echo Function in Uniprocessor and Multiprocessor Environments
The echo command in Unix-based systems prints text to the terminal. If
multiple processes execute echo simultaneously, they may try to write to the
same output stream (e.g., the terminal), causing interleaved or corrupted output.
Uniprocessor Environment
• Since only one process executes at a time due to time-sharing, mutual
exclusion is enforced using software mechanisms like semaphores or
locks.
• Example issue: If two processes execute echo "Hello" concurrently without mutual exclusion, their outputs may interleave (e.g., HHeelllloo instead of two separate Hellos).
• Solution: Implement a lock (e.g., a semaphore) that ensures only one
process executes echo at a time.
Multiprocessor Environment
• Multiple processors can execute different processes simultaneously,
increasing the risk of race conditions.
• Example issue: If two processes running on separate CPUs execute echo
"Hello" simultaneously, both may try to write to the terminal at the same
time, leading to mixed output.


• Solution: Hardware-based locking mechanisms (e.g., test-and-set locks,


spinlocks) or OS-level synchronization (mutexes, semaphores) enforce
mutual exclusion to prevent data corruption.
Thus, enforcing mutual exclusion ensures that shared resources like output
buffers are accessed in a controlled manner, preserving correct execution order
and preventing unexpected behavior.

11. Explain in detail about mutex locks.


Mutex Locks (mutual exclusion)
• We use the mutex lock to protect critical regions and thus prevent race
conditions.
• That is, a process must acquire the lock before entering a critical
section; it releases the lock when it exits the critical section.
• The acquire() function acquires the lock, and the release() function releases the lock.
Solution to the critical-section problem using mutex locks
The definition of acquire() is as follows:
acquire()
{
    while (!available)
        ; /* busy wait */
    available = false;
}
do
{
acquire lock
critical section
release lock
remainder section
} while (true);
• A mutex lock has a boolean variable available whose value indicates if
the lock is available or not.
• If the lock is available, a call to acquire() succeeds, and the lock is then
considered unavailable.


• A process that attempts to acquire an unavailable lock is blocked until


the lock is released.
The definition of release() is as follows:
release()
{
available = true;
}
• Calls to either acquire() or release() must be performed atomically.
• The main disadvantage of the implementation given here is that it
requires busy waiting.
• While a process is in its critical section, any other process that tries to
enter its critical section must loop continuously in the call to acquire().
• In fact, this type of mutex lock is also called a spin lock because the
process “spins” while waiting for the lock to become available.
• In multiprocessor systems, one thread can “spin” on one processor while
another thread performs its critical section on another processor.
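
In practice, the acquire() and release() operations above correspond to the lock and unlock calls of a threads library. A minimal sketch using POSIX threads (the shared balance variable is illustrative):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int balance = 0;                  /* shared data protected by the lock */

void deposit(int amount)
{
    pthread_mutex_lock(&lock);    /* acquire(): blocks if unavailable */
    balance += amount;            /* critical section */
    pthread_mutex_unlock(&lock);  /* release() */
}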

12. Explain in detail about semaphores. Write the algorithm using test
and set instruction that satisfy all the critical section requirements.
A semaphore S is an integer variable that, apart from
initialization, is accessed only through two standard atomic operations:
wait() - operation was originally termed P (from the Dutch proberen, “to test”);
signal() - was originally called V (from verhogen, “to increment”).
The definition of wait() is as follows:
wait(S)
{
while (S <= 0)
; // busy wait
S--;
}
The definition of signal() is as follows:
signal(S)
{


S++;
}
• All modifications to the integer value of the semaphore in the wait() and
signal() operations must be executed indivisibly.
• That is, when one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore value.
TestAndSet() instruction
boolean TestAndSet(boolean &target)
{
    boolean rv = target;
    target = true;
    return rv;
}
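Mutual exclusion using TestAndSet() is then obtained by spinning on a shared boolean lock, initialized to false; a sketch of the basic structure (the bounded-waiting variant additionally keeps a waiting[] array per process):

do {
    while (TestAndSet(&lock))
        ;            /* spin: the lock was already held */
    /* critical section */
    lock = false;    /* release the lock */
    /* remainder section */
} while (true);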
Swap
void Swap(boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}

Semaphore Usage
• Operating systems often distinguish between counting and binary
semaphores.
• The value of a counting semaphore can range over an unrestricted
domain.
• The value of a binary semaphore can range only between 0 and 1.
• Thus, binary semaphores behave similarly to mutex locks.
• Counting semaphores can be used to control access to a given resource
consisting of a finite number of instances.


Semaphore Implementation
To implement semaphores under this definition, we define a semaphore as
follows:
typedef struct
{
    int value;
    struct process *list;
} semaphore;
• When a process must wait on a semaphore, it is added to the list of
processes.
• A signal() operation removes one process from the list of waiting
processes and awakens that process.
The wait() semaphore operation can be defined as
wait(semaphore *S)
{
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
The signal() semaphore operation can be defined as
signal(semaphore *S)
{
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
• The block() operation suspends the process that invokes it.
• The wakeup(P) operation resumes the execution of a blocked process P.
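
For comparison, POSIX exposes this counting-semaphore behaviour through sem_wait() and sem_post(); a minimal sketch (the count of 3 resource instances is illustrative):

#include <semaphore.h>

sem_t slots;                       /* counting semaphore */

void *worker(void *arg)
{
    sem_wait(&slots);              /* wait(): acquire one instance */
    /* ... use one instance of the resource ... */
    sem_post(&slots);              /* signal(): release it */
    return NULL;
}

int main(void)
{
    sem_init(&slots, 0, 3);        /* a resource with 3 instances */
    /* ... create threads that run worker() ... */
    sem_destroy(&slots);
    return 0;
}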


13. Explain in detail about Classical Problems of Synchronization.


The following problems of synchronization are considered as
classical problems:
• Bounded-buffer (or Producer-Consumer) Problem
• Dining-Philosophers Problem
• Readers and Writers Problem
• Sleeping Barber Problem
1. Bounded-buffer (or Producer-Consumer) Problem:
• Bounded Buffer problem is also called producer consumer problem.
• This problem is generalized in terms of the Producer-Consumer
problem.
• The solution is to create two counting semaphores, “full” and “empty”, to keep track of the current number of full and empty buffers respectively. Producers produce an item and consumers consume it, but both use one buffer slot at a time.
Producer process using shared memory
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
The structure of the consumer process.
do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next consumed */
...
signal(mutex);
signal(empty);
...


/* consume the item in next consumed */


...
} while (true);
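
The matching semaphore-based producer (the standard counterpart of the consumer above, using the same mutex, empty, and full semaphores) has the following structure:

do {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);
    wait(mutex);
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);
    signal(full);
} while (true);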

2. Dining-Philosophers Problem:
• The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers, as shown in figure 2.15.
• A philosopher may eat if he can pick up the two chopsticks adjacent to him.
• A chopstick may be picked up by either of its adjacent philosophers, but not by both. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.

Fig.2.15 Dining philosophers


The structure of philosopher
do {
wait(chopstick[i]);
wait(chopstick[(i+1) % 5]);
...
/* eat for awhile */
...
signal(chopstick[i]);
signal(chopstick[(i+1) % 5]);
...
/* think for awhile */
...


} while (true);
3. Readers and Writers Problem:
• Suppose that a database is to be shared among several concurrent
processes. Some of these processes may want only to read the
database, whereas others may want to update (that is, to read and
write) the database.
• We distinguish between these two types of processes by referring to
the former as readers and to the latter as writers.
• Precisely in OS we call this situation as the readers-writers
problem.
• Problem parameters:
o One set of data is shared among a number of processes.
o Once a writer is ready, it performs its write. Only one writer
may write at a time.
o If a process is writing, no other process can read it.
o If at least one reader is reading, no other process can write.
o Readers may not write and only read.
The structure of writer process
do {
    wait(rw_mutex);
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
} while (true);
The structure of reader process
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);
    signal(mutex);
} while (true);
4. Sleeping Barber Problem:

• A barber shop has one barber, one barber chair, and N chairs to wait in.
• When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in.
• When the barber is cutting hair, new customers take empty seats to wait, or leave if there is no vacancy.

14. Explain in detail about monitors. Explain the dining philosopher


critical section problem solution using monitor.
Monitors
• A monitor is a high-level abstraction that provides a convenient and effective mechanism for process synchronization. Refer figures 2.16 and 2.17.
• Only one process may be active within the monitor at a time.
monitor monitor-name
{
    // shared variable declarations
    procedure body P1 (…) { … }
    …
    procedure body Pn (…) { … }
    initialization code (…) { … }
}
• To allow a process to wait within the monitor, a condition variable must be
declared as condition x, y;
• Two operations on a condition variable:
o x.wait() - a process that invokes the operation is suspended.


o x.signal() - resumes one of the suspended processes (if any).


Schematic view of a monitor
• Refer figure 2.16 for the schematic view of a monitor.

Fig.2.16 schematic view of a monitor


Monitor with condition variables
• Refer figure 2.17 for the monitor with condition variables.

Fig.2.17 monitor with condition variables

Solution to Dining Philosophers Problem using monitor


monitor DP
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING) self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
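
Each philosopher i then invokes the pickup() and putdown() operations of the monitor above in the following sequence:

DP.pickup(i);
/* eat */
DP.putdown(i);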

15. Explain in detail about CPU scheduling algorithms with suitable


examples.
Basic Concepts
• In a single-processor system, only one process can run at a time.
• The objective of multiprogramming is to have some process running at
all times, to maximize CPU utilization.
• A process is executed until it must wait, for the completion of some I/O
request.
• Several processes are kept in memory at one time.
• When one process has to wait, the operating system takes the CPU away
from that process and gives the CPU to another process.


▪ CPU–I/O Burst Cycle


• Process execution consists of a cycle of CPU execution and I/O
wait.
• Processes alternate between these two states. Process execution
begins with a CPU burst as shown in figure 2.18.

Fig. 2.18 Alternating sequence of CPU and I/O bursts


▪ CPU Scheduler
• Whenever the CPU becomes idle, the operating system must select one
of the processes in the ready queue to be executed.
• The selection process is carried out by the short-term scheduler, or CPU scheduler.
• The scheduler selects a process from the processes in memory that are
ready to execute and allocates the CPU to that process.
▪ Preemptive Scheduling
CPU-scheduling decisions may take place under the following four
circumstances:
• When a process switches from the running state to the waiting
state (for example, as the result of an I/O request or an invocation
of wait() for the termination of a child process)
• When a process switches from the running state to the ready state
(for example, when an interrupt occurs)
• When a process switches from the waiting state to the ready state
(for example, at completion of I/O)
• When a process terminates


• When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive.
• Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• Preemptive scheduling can result in race conditions when data
are shared among several processes.
Scheduling Criteria
1. CPU utilization- We want to keep the CPU as busy as possible.
Conceptually, CPU utilization can range from 0 to 100 percent.
2. Throughput- One measure of work is the number of processes that are
completed per time unit, called throughput.
3. Turnaround time- The interval from the time of submission of a
process to the time of completion is the turnaround time. Turnaround
time is the sum of the periods spent waiting to get into memory, waiting
in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time- Waiting time is the sum of the periods spent waiting in
the ready queue.
5. Response time- The measure is the time from the submission of a
request until the first response is produced. This measure, called
response time, is the time it takes to start responding, not the time it
takes to output the response.
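
For a given schedule, these criteria are related by two simple formulas (a worked note added for clarity): turnaround time = completion time - arrival time, and waiting time = turnaround time - burst time. For example, a process that arrives at time 0 with a 24 ms CPU burst and completes at time 30 has a turnaround time of 30 ms and a waiting time of 30 - 24 = 6 ms.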

16. Explain briefly CPU scheduling algorithms. Explain in detail FCFS, shortest job first, priority, and round robin (time slice = 2) scheduling algorithms with Gantt charts.
There are many different CPU-scheduling algorithms.
• First-Come, First-Served Scheduling
• With this scheme, the process that requests the CPU first is allocated
the CPU first.
• When a process enters the ready queue, its PCB is linked onto the tail of
the queue.


• When the CPU is free, it is allocated to the process at the head of the
queue.
• The running process is then removed from the queue.
Advantage: The code for FCFS scheduling is simple to write and understand.
Disadvantage: The average waiting time under the FCFS policy is often quite
long.
Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order,
Gantt chart
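|          P1          | P2 | P3 |
0                      24   27   30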

• The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3.
• Thus, the average waiting time is (0+ 24 + 27)/3 = 17 milliseconds.
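
The same computation can be expressed in a few lines of C (a minimal sketch added for illustration; it assumes all processes arrive at time 0 and are served in the given order):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};              /* CPU bursts of P1, P2, P3 */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;                     /* process i waits for the queue ahead to drain */
        printf("P%d waits %d ms\n", i + 1, wait);
        wait += burst[i];                  /* the next process also waits for this burst */
    }
    printf("Average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}

Running it prints waiting times of 0, 24, and 27 ms and the average 17.00 ms, matching the calculation above.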
• Shortest-Job-First Scheduling
• This algorithm associates with each process the length of the process’s
next CPU burst.
• When the CPU is available, it is assigned to the process that has the
smallest next CPU burst.
• If the next CPU bursts of two processes are the same, FCFS scheduling
is used to break the tie.
• It is also called the shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than on its total length.
Consider the following set of processes, with the length of the CPU burst
given in milliseconds:


Process Burst Time


P1 6
P2 8
P3 7
P4 3
Gantt chart:
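| P4 |    P1    |    P3    |     P2     |
0    3          9          16           24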

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4.
Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
• With short-term scheduling, there is no way to know the length of the
next CPU burst.
• A preemptive SJF algorithm will preempt the currently executing
process, whereas a nonpreemptive SJF algorithm will allow the
currently running process to finish its CPU burst.
• Preemptive SJF scheduling is sometimes called shortest-remaining-
time-first scheduling.
Consider the following four processes, with the length of the CPU burst given
in milliseconds:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Gantt chart:
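| P1 |   P2   |   P4   |     P1     |     P3     |
0    1        5        10           17           26

Process P1 starts at time 0 but is preempted at time 1, when P2 arrives with a shorter remaining burst (4 ms versus P1's remaining 7 ms). The waiting times are 10 - 1 = 9 ms for P1, 0 ms for P2, 17 - 2 = 15 ms for P3, and 5 - 3 = 2 ms for P4, so the average waiting time is 26/4 = 6.5 milliseconds.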


• Priority Scheduling
• A priority is associated with each process, and the CPU is allocated to
the process with the highest priority.
• Equal-priority processes are scheduled in FCFS order.
• An SJF algorithm is simply a priority algorithm where the priority (p) is
the inverse of the (predicted) next CPU burst.
Consider the following set of processes, assumed to have arrived at time 0
in the order P1, P2, · · ·, P5, with the length of the CPU burst given in
milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Gantt chart:
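| P2 |   P5   |      P1      | P3 | P4 |
0    1        6              16   18   19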

The average waiting time is 8.2 milliseconds.


• Priority scheduling can be either preemptive or nonpreemptive.
• When a process arrives at the ready queue, its priority is compared with
the priority of the currently running process.
• A preemptive priority scheduling algorithm will preempt the CPU if the
priority of the newly arrived process is higher than the priority of the
currently running process.
• A nonpreemptive priority scheduling algorithm will simply put the new
process at the head of the ready queue.
• Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is designed especially for
time sharing systems.


• It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or time slice, is defined.
• A time quantum is generally from 10 to 100 milliseconds in length.
• The ready queue is treated as a circular queue.
• The CPU scheduler goes around the ready queue, allocating the CPU to
each process for a time interval of up to 1 time quantum.
Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
• If we use a time quantum of 4 milliseconds, then process P1 gets the
first 4 milliseconds.
The resulting RR schedule is as follows:
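| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30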

Let’s calculate the average waiting time for this schedule. P1 waits for 6
milliseconds (10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7
milliseconds.
Thus, the average waiting time is 17/3 = 5.66 milliseconds.
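
The schedule above can be checked with a short simulation in C (a minimal sketch added for illustration; the process set and quantum are the ones from this example):

#include <stdio.h>

int main(void) {
    int burst[]  = {24, 3, 3};             /* total bursts of P1, P2, P3 */
    int remain[] = {24, 3, 3};
    int finish[3] = {0};
    int n = 3, quantum = 4, time = 0, done = 0;

    while (done < n) {                     /* cycle through the circular ready queue */
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;
            int slice = remain[i] < quantum ? remain[i] : quantum;
            time += slice;                 /* run process i for one time slice */
            remain[i] -= slice;
            if (remain[i] == 0) { finish[i] = time; done++; }
        }
    }
    for (int i = 0; i < n; i++)            /* waiting time = turnaround - burst */
        printf("P%d waiting time = %d ms\n", i + 1, finish[i] - burst[i]);
    return 0;
}

It prints waiting times of 6, 4, and 7 ms, confirming the average of 17/3 = 5.66 ms.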

• Multilevel Queue Scheduling


• Scheduling algorithms have been created for situations in which processes are easily classified into different groups. For example, a


common division is made between foreground (interactive) processes and background (batch) processes.
• These two types of processes have different response-time requirements
and so may have different scheduling needs. In addition, foreground
processes may have priority (externally defined) over background
processes.
• A multilevel queue scheduling algorithm partitions the ready queue
into several separate queues as shown in figure 2.19.

Fig.2.19 Multilevel Queue Scheduling


• Multilevel queue scheduling algorithm with five queues
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
• Multilevel Feedback Queue Scheduling
• The multilevel feedback queue scheduling algorithm, allows a process
to move between queues.
• If a process uses too much CPU time, it will be moved to a lower-priority
queue.
• A process that waits too long in a lower-priority queue may be moved to
a higher-priority queue as shown in figure 2.20.


Fig.2.20 Multilevel feedback Queue

17. Explain in detail about multiple processor scheduling.


Multiple Processor Scheduling
• If multiple CPUs are available, the scheduling problem is
correspondingly more complex.
• If several identical processors are available, then load-sharing can
occur.
• It is possible to provide a separate queue for each processor.
• In this case however, one processor could be idle, with an empty queue,
while another processor was very busy.
• To prevent this situation, we use a common ready queue.
1. Self Scheduling- Each processor is self-scheduling. Each processor
examines the common ready queue and selects a process to execute. We must
ensure that two processors do not choose the same process, and that
processes are not lost from the queue.
2. Master – Slave Structure - This avoids the problem by appointing one
processor as scheduler for the other processors, thus creating a master-slave
structure.

18. Explain in detail about real-time scheduling.


Real-Time Scheduling
• Real-time computing is divided into two types.
o Hard real-time systems
o Soft real-time systems
• Hard RTS are required to complete a critical task within a guaranteed
amount of time.


• Generally, a process is submitted along with a statement of the amount of time in which it needs to complete or perform I/O.
• The scheduler then either admits the process, guaranteeing that the
process will complete on time, or rejects the request as impossible. This
is known as resource reservation.
• Soft real-time computing is less restrictive. It requires that critical
processes receive priority over less fortunate ones.
• The system must have priority scheduling, and real-time processes
must have the highest priority.
• The priority of real-time processes must not degrade over time, even
though the priority of non-real-time processes may.
• Dispatch latency must be small. The smaller the latency, the faster a
real-time process can start executing.
• The high-priority process would be waiting for a lower-priority one to
finish. This situation is known as priority inversion.

19. What is a deadlock? What are the necessary conditions for a deadlock
to occur? (or) Explain deadlock detection with examples. (NOV/DEC
2024)
Deadlock Definition
• A process requests resources.
• If the resources are not available at that time, the process enters a wait
state.
• Waiting processes may never change state again because the resources
they have requested are held by other waiting processes. This situation
is called a deadlock.
Resources
• Request: If the request cannot be granted immediately then the requesting
process must wait until it can acquire the resource.
• Use: The process can operate on the resource
• Release: The process releases the resource.
Four Necessary conditions for a deadlock
Mutual exclusion:
• At least one resource must be held in a non sharable mode.


• That is, only one process at a time can use the resource.


• If another process requests that resource, the requesting process
must be delayed until the resource has been released.
Hold and wait:
• A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by
other processes.
No preemption:
• Resources cannot be preempted.
Circular wait:
• A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

20. Explain in detail about deadlock characterization.


Resource-Allocation Graph
• It is a Directed Graph with a set of vertices V and set of edges E.
• V is partitioned into two types:
• Processes P = {P1, P2, ..., Pn}
• Resource types R = {R1, R2, ..., Rm}
• Pi --> Rj (a request) is a request edge.
• Rj --> Pi (an allocation) is an assignment edge.
• Pi is denoted as a circle and Rj as a square.
• Rj may have more than one instance represented as a dot with in the
square.
P = { P1,P2,P3}
R = {R1,R2,R3,R4}
E= {P1->R1, P2->R3, R1->P2, R2->P1, R3->P3 }
Resource instances
• One instance of resource type R1, two instances of resource type R2, one instance of resource type R3, and three instances of resource type R4, as shown in figure 2.21.


Fig.2.21 Resource Allocation graph

Process states
• Process P1 is holding an instance of resource type R2, and is waiting for
an instance of resource type R1 as shown in figure 2.22.
Resource Allocation Graph with a deadlock

Fig.2.22 Resource allocation graph with deadlock


• Process P2 is holding an instance of R1 and R2 and is waiting for an
instance of resource type R3. Process P3 is holding an instance of R3.
• P1->R1->P2->R3->P3->R2->P1
• P2->R3->P3->R2->P2

21. Explain about the methods used to prevent deadlocks


Deadlock Prevention:
• This ensures that the system never enters the deadlock state.
• Deadlock prevention is a set of methods for ensuring that at least one of
the necessary conditions cannot hold.
• By ensuring that at least one of these conditions cannot hold, we can
prevent the occurrence of a deadlock.


1. Denying Mutual exclusion


• Mutual exclusion condition must hold for non-sharable resources.
• A printer, for example, cannot be simultaneously shared by several processes.
• Sharable resource - example Read-only files.
• If several processes attempt to open a read-only file at the same time,
they can be granted simultaneous access to the file.
• A process never needs to wait for a sharable resource.

2. Denying Hold and wait


• Whenever a process requests a resource, it does not hold any other
resource.
• One technique that can be used requires each process to request and be
allocated all its resources before it begins execution.
• Another technique is before it can request any additional resources, it
must release all the resources that it is currently allocated.
• These techniques have two main disadvantages:
• First, resource utilization may be low, since many of the resources may be allocated but unused for a long time.
• Second, starvation is possible: a process that needs several popular resources may have to wait indefinitely, since at least one resource it needs is always allocated to some other process.
3. Denying No preemption
• If a Process is holding some resources and requests another resource
that cannot be immediately allocated to it. (i.e. the process must wait),
then all resources currently being held are preempted.
• These resources are implicitly released.
• The process will be restarted only when it can regain its old resources.
4. Denying Circular wait
• Impose a total ordering of all resource types and allow each process to
request for resources in an increasing order of enumeration.
• Let R = {R1,R2,...Rm} be the set of resource types.
• Assign to each resource type a unique integer number.
• If the set of resource types R includes tape drives, disk drives and
printers.
F(tapedrive)=1,


F(diskdrive)=5,
F(Printer)=12.
• Each process can request resources only in an increasing order of
enumeration.
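
A hedged sketch in C of this idea (illustrative only; the resource names and F-numbers follow the example above, and pthread mutexes stand in for the resources):

#include <pthread.h>

pthread_mutex_t tape_drive = PTHREAD_MUTEX_INITIALIZER;   /* F = 1  */
pthread_mutex_t disk_drive = PTHREAD_MUTEX_INITIALIZER;   /* F = 5  */
pthread_mutex_t printer    = PTHREAD_MUTEX_INITIALIZER;   /* F = 12 */

void copy_tape_to_printer(void) {
    /* Every process locks in the same global order 1 -> 5 -> 12,
       so a circular chain of waits can never form. */
    pthread_mutex_lock(&tape_drive);
    pthread_mutex_lock(&disk_drive);
    pthread_mutex_lock(&printer);
    /* ... use the three resources ... */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&disk_drive);
    pthread_mutex_unlock(&tape_drive);
}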

22. Explain in detail about Banker’s deadlock avoidance algorithm with an illustration.
Deadlock Avoidance:
• Deadlock avoidance requires that the OS be given in advance additional information concerning which resources a process will request and use during its lifetime.
• To decide whether the current request can be satisfied or must be
delayed, a system must consider the resources currently available, the
resources currently allocated to each process and future requests and
releases of each process.
Safe State
A state is safe if the system can allocate resources to each process in some order and still avoid a deadlock, as shown in figure 2.23.

Fig.2.23 Deadlock Safe/Unsafe


• A deadlocked state is an unsafe state.
• Not all unsafe states are deadlocks.
• An unsafe state may lead to a deadlock.
• Two algorithms are used for deadlock avoidance namely;
1. Resource Allocation Graph Algorithm - single instance of a
resource type.
2. Banker’s Algorithm – several instances of a resource type.
Resource allocation graph algorithm
• Claim edge - Claim edge Pi--->Rj indicates that process Pi may request
resource Rj at some time, represented by a dashed directed edge.


• When process Pi request resource Rj, the claim edge Pi ->Rj is converted
to a request edge.
• Similarly, when a resource Rj is released by Pi the assignment edge Rj ->
Pi is reconverted to a claim edge Pi ->Rj
• The request can be granted only if converting the request edge Pi ->Rj to
an assignment edge Rj -> Pi does not form a cycle.
• If no cycle exists, then the allocation of the resource will leave the
system in a safe state.
• If a cycle is found, then the allocation will put the system in an unsafe
state.
Banker's algorithm
o Safety Algorithm
o Resource request algorithm
Data structures used for Banker's algorithm
• Available: indicates the number of available resources of each type.
• Max: Max[i, j]=k then process Pi may request at most k instances of
resource type Rj
• Allocation: Allocation[i. j]=k, then process Pi is currently allocated K
instances of resource type Rj
• Need: if Need[i, j]=k then process Pi may need K more instances of
resource type Rj
Need [i, j]=Max[i, j]-Allocation[i, j]
1. Safety algorithm
1. Initialize Work := Available and Finish[i] := false for i = 1, 2, ..., n
2. Find an i such that both
a. Finish[i] = false
b. Need(i) <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocation(i); Finish[i] := true; go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
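
The safety algorithm translates almost directly into C (a minimal sketch added for illustration; the constants N and M and the function name is_safe are assumptions, not part of the notes):

#include <stdbool.h>

#define N 5   /* number of processes      */
#define M 3   /* number of resource types */

bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = available[j];    /* step 1 */

    for (int count = 0; count < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {                      /* step 2 */
            if (finish[i]) continue;
            bool fits = true;                              /* Need(i) <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                                    /* step 3: Pi can finish */
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
                count++;
            }
        }
        if (!progressed) return false;    /* some Finish[i] stays false: unsafe */
    }
    return true;                          /* all Finish[i] = true: safe state  */
}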
2. Resource Request Algorithm
Let Requesti be the request from process Pi for resources.


1. If Request(i) <= Need(i), go to step 2; otherwise raise an error condition, since the process has exceeded its maximum claim.
2. If Request(i) <= Available, go to step 3; otherwise Pi must wait, since the resources are not available.
3. Available := Available - Request(i);
Allocation(i) := Allocation(i) + Request(i);
Need(i) := Need(i) - Request(i);
• Now apply the safety algorithm to check whether this new state is safe
or not. If it is safe then the request from process Pi can be granted.

23. Explain the two solutions of recovery from deadlock.


Deadlock Recovery
1. Process Termination
1. Abort all deadlocked processes.
2. Abort one deadlocked process at a time until the deadlock cycle is
eliminated.
• After each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any process is still deadlocked.
2. Resource Preemption
• Preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.
Factors considered for resource preemption
• Selecting a victim: which resources and which process are to be
preempted.
• Rollback: if we preempt a resource from a process, it cannot continue with its normal execution, since it is missing some needed resource. We must roll back the process to some safe state and restart it from that state.
• Starvation: How can we guarantee that resources will not always
be preempted from the same process?


24. How can deadlock be detected?


Two methods to detect a deadlock
o Single instance of resource type
o Several instances of a resource type
Single Instance of Each Resource Type
• If all resources have only a single instance, then we can define a deadlock-detection algorithm that uses a variant of the resource-allocation graph, called a wait-for graph, as shown in figure 2.24.

Fig.2.24 a) Resource Allocation graph b) wait for graph


Several Instances of a Resource Type
Available: Number of available resources of each type
Allocation: number of resources of each type currently allocated to each
process
Request: Current request of each process
If Request [i,j]=k, then process Pi is requesting K more instances of
resource type Rj.
1. Initialize Work := Available. For each i, if Allocation(i) != 0 then Finish[i] := false; otherwise Finish[i] := true.
2. Find an index i such that both
a. Finish[i] = false
b. Request(i) <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocation(i); Finish[i] := true; go to step 2.
4. If Finish[i] = false for some i, then process Pi is deadlocked.

Prepared By: Ms.M.NITHYA, AP/AI&DS Page 55


AL3452 – Operating Systems Unit 2 Mailam Engineering College

25. Consider the table given below for a system. Find the Need matrix and the safe sequence. Can a request from process P1 for (0, 1, 2) be granted immediately?
Resource – 3 types
A – (10 instances)
B – (5 instances)
C – (7 instances)
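
(The snapshot table itself is not reproduced here; the values below are reconstructed from the allocation and need figures used in the worked steps that follow. The Need column answers the first part of the question, since Need = Max - Allocation.)

Process   Allocation   Max      Need
          A  B  C      A B C    A B C
P0        0  1  0      7 5 3    7 4 3
P1        2  0  0      3 2 2    1 2 2
P2        3  0  2      9 0 2    6 0 0
P3        2  1  1      2 2 2    0 1 1
P4        0  0  2      4 3 3    4 3 1
Available = (3, 3, 2)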

Solution: Banker’s Algorithm


Step 1:
Safety for process P0
need0 = (7, 4, 3)
If need0 ≤ Available
if [(7, 4, 3) ≤ (3, 3, 2)] (false)
Process P0 must wait.
Step 2:
Safety for process P1
need1 = (1, 2, 2)
if need1 ≤ Available
if [(1, 2, 2) ≤ (3, 3, 2)]
P1 will execute.
Available = Available + Allocation
= (3, 3, 2) + (2, 0, 0)
= (5, 3, 2)
Step 3:
Safety for process P2
need2 = (6, 0, 0)
if need2 ≤ Available
if [(6, 0, 0) ≤ (5, 3, 2)] (false)
Process P2 must wait.
Step 4:
Safety for process P3
need3 = (0, 1, 1)
if need3 ≤ Available
if [(0, 1, 1) ≤ (5, 3, 2)]
P3 will execute.
Available = Available + Allocation
= (5, 3, 2) + (2, 1, 1)
= (7, 4, 3)


Step 5:
Safety for process P4
need4 = (4, 3, 1)
If need4 ≤ Available
If [(4, 3, 1) ≤ (7, 4, 3)]
P4 will execute.
Available = Available + Allocation
= (7, 4, 3) + (0, 0, 2)
= (7, 4, 5)
Step 6:
Safety for process P0
need0 = (7, 4, 3)
if need0 ≤ Available
if [(7, 4, 3) ≤ (7, 4, 5)]
P0 will execute.
Available = Available + Allocation
= (7, 4, 5) + (0, 1, 0)
= (7, 5, 5)
Step 7:
Safety for process P2
need2 = (6, 0, 0)
if need2 ≤ Available
if [(6, 0, 0) ≤ (7, 5, 5)]
P2 will execute.
Available = Available + Allocation
= (7, 5, 5) + (3, 0, 2)
= (10, 5, 7)
Safety Sequence = <P1, P3, P4, P0, P2>
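
Checking the request P1(0, 1, 2) with the resource-request algorithm:
1. Request1 = (0, 1, 2) ≤ Need1 = (1, 2, 2), so the maximum claim is not exceeded.
2. Request1 = (0, 1, 2) ≤ Available = (3, 3, 2), so the resources are available.
3. Pretend to allocate: Available = (3, 3, 2) - (0, 1, 2) = (3, 2, 0); Allocation1 = (2, 1, 2); Need1 = (1, 1, 0).
Re-running the safety algorithm on this new state again yields the safe sequence <P1, P3, P4, P0, P2>: P1's reduced need (1, 1, 0) still fits in (3, 2, 0), and each completion releases enough resources for the next process. Hence the request from P1 can be granted immediately.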


26. Compare and contrast preemptive and non-preemptive scheduling.
• In preemptive scheduling, the CPU is allocated to a process for a limited time, whereas in non-preemptive scheduling, the CPU is allocated to the process till it terminates or switches to the waiting state.
• The executing process in preemptive scheduling is interrupted in the middle of execution when a higher-priority process arrives, whereas the executing process in non-preemptive scheduling is not interrupted and runs until it completes.
• In preemptive scheduling, there is the overhead of switching the process between the ready and running states and of maintaining the ready queue, whereas non-preemptive scheduling has no overhead of switching the process from the running state to the ready state.
• In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, then a low-priority process has to wait for a long time and may starve. In non-preemptive scheduling, if the CPU is allocated to a process having a large burst time, then processes with small burst times may starve.
• Preemptive scheduling attains flexibility by allowing the critical processes
to access the CPU as they arrive into the ready queue, no matter what
process is executing currently. Non-preemptive scheduling is called rigid, as even if a critical process enters the ready queue, the process running on the CPU is not disturbed.
• Preemptive scheduling has to maintain the integrity of shared data, which adds cost; this is not the case with non-preemptive scheduling.
27. Describe why the interrupts are not appropriate for implementing
synchronous preemptive in multiprocessing systems. (8) (NOV/DEC 2024)
In a multiprocessing system, synchronous preemption refers to
preempting a process or thread in a controlled and predictable manner to ensure fair
CPU scheduling, resource allocation, and responsiveness. While interrupts are
commonly used in OS scheduling, they are not ideal for implementing synchronous
preemption in multiprocessing systems due to several reasons:


1. Interrupts Are Asynchronous by Nature


• Interrupts occur unpredictably based on hardware events (e.g., I/O completion,
timer expiration, or external signals).
• Synchronous preemption, on the other hand, requires predictable and controlled
preemption at specific execution points (e.g., after a quantum expires or at well-
defined system calls).
• Since interrupts can occur at any time, they may disrupt critical operations and
lead to race conditions or inconsistent states.
2. Interrupt Handling Overhead in Multiprocessing
• In a multiprocessor system, interrupts must be handled carefully to avoid
conflicts.
• If multiple CPUs receive interrupts simultaneously, coordination and context
switching overhead increase.
• Excessive interrupt handling degrades system performance due to frequent
context switches, cache invalidations, and inter-processor communication (IPI)
overhead.

3. Race Conditions and Synchronization Issues


• When multiple processors handle interrupts, race conditions may occur if
interrupt handlers modify shared data structures (e.g., process queues, resource
tables).
• Without proper locking mechanisms, processors may experience inconsistent
states, leading to deadlocks or priority inversion.

4. Increased Latency and Unpredictability


• Interrupts introduce latency because the CPU must stop executing the current
process, save its state, and switch to the interrupt handler.
• In real-time systems, strict timing guarantees are needed, and interrupts can
introduce jitter, making timing unpredictable.
• Synchronous preemption, however, requires low-latency, predictable preemption
that ensures fair CPU usage without unnecessary interruptions.

5. Alternative Approaches for Synchronous Preemption


Instead of relying on interrupts, multiprocessing systems use:


• Timer-based preemption: The OS sets periodic timers to trigger preemption at fixed intervals, ensuring fairness.
• Scheduler-controlled preemption: The OS checks process states at known
synchronization points (e.g., system calls, kernel mode transitions).
• Polling mechanisms: Some real-time systems use polling instead of interrupts
to ensure deterministic scheduling.
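
A user-space analogy of timer-based preemption is sketched below (a hedged illustration only; a real kernel programs a hardware timer and runs the scheduler in its interrupt path, not POSIX signals):

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

volatile sig_atomic_t ticks = 0;

void on_tick(int sig) { ticks++; }     /* a scheduler would check quanta here */

int main(void) {
    struct itimerval tv = { {0, 10000}, {0, 10000} };   /* 10 ms period */
    signal(SIGALRM, on_tick);
    setitimer(ITIMER_REAL, &tv, NULL);
    while (ticks < 5) pause();         /* wait for five periodic ticks */
    printf("observed %d timer ticks\n", (int)ticks);
    return 0;
}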

While interrupts are essential for handling asynchronous events, they are not ideal for synchronous preemption in multiprocessing due to unpredictability, race conditions, and overhead. Instead, timer-based and scheduler-driven preemption methods ensure fair and efficient process scheduling in a controlled manner.

28. Summarize the difference between user thread and kernel thread. (NOV/DEC
2024)
• Definition: A user thread is managed by user-level libraries, without direct OS involvement; a kernel thread is managed and scheduled by the operating system kernel.
• Performance: User threads are faster, as thread management (creation, switching) is done in user space without kernel intervention; kernel threads are slower, due to kernel involvement in scheduling and context switching.
• Scheduling: User threads are handled by the user-level thread library, and the OS is unaware of them; kernel threads are handled by the OS scheduler, ensuring system-wide fairness.
• Blocking: If one user thread blocks (e.g., on I/O), all threads in that process may block; if a kernel thread blocks, other threads in the same process can still execute.
• Portability: User threads are more portable, as they do not depend on the OS kernel; kernel threads are less portable, as they rely on specific OS implementations.
• Multiprocessing support: User threads cannot take full advantage of multiple CPUs, as the OS sees only a single process; kernel threads can run on multiple CPUs simultaneously, improving parallelism.
• Example APIs: POSIX Pthreads (user-level) and Java threads for user threads; Windows threads and Linux kernel threads (kthread) for kernel threads.
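
A tiny Pthreads sketch for comparison (an illustration added here, not part of the original notes; the assumption that each pthread is backed by a kernel thread holds on Linux with NPTL):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    printf("thread %ld running\n", (long)arg);   /* runs concurrently with main */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, (void *)1L);  /* spawn one thread  */
    pthread_join(tid, NULL);                         /* wait for it to finish */
    return 0;
}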

29. Consider the following snapshot: (NOV/DEC 2024)

Answer the following questions using the Banker's algorithm


(1) What is the content of the matrix Need? (2)
(2) Is the system in a safe state? Justify your answer. (2)
(3) If a request from process P1 arrives for (0, 4, 2, 0), can the request be
granted immediately? (6)

Step 1: Calculate the Need matrix (Need = Max - Allocation).
Step 2: Check if the system is in a safe state.
Step 3: Check if the request from process P1 can be granted immediately.


The request from P1 is (0, 4, 2, 0). Since this request exceeds the resources currently available, it cannot be granted immediately.

30. Explain how deadlock can be avoided. Assume that there are three resources, A, B, and C, and four processes P0 to P3. At T0, the state of the system is given below.

1. Create the Need Matrix:


• The "Need" matrix represents the maximum amount of each resource a process still
needs to complete its execution.
Process   Need (A)   Need (B)   Need (C)
P0        1          1          1
P1        2          3          2
P2        0          1          1
P3        1          0          0
How to calculate Need Matrix:


• For each process, subtract the allocated resources from the maximum needed
resources:
o Need(A) = Maximum(A) - Allocation(A)
o Need(B) = Maximum(B) - Allocation(B)
o Need(C) = Maximum(C) - Allocation(C)
2. Apply Banker's Algorithm:
• Available Resources: Let's assume the available resources at T0 are A=1, B=1, C=1.
• Check for Safe State (whenever a chosen process finishes, its entire allocation is released and added back to Available):
o Step 1: Find a process whose need can be completely satisfied with the available resources.
▪ P2 can be satisfied, since its need (A=0, B=1, C=1) is fully covered by the available resources (1, 1, 1).
▪ When P2 finishes, Available grows by P2's allocation.
o Step 2:
▪ P3's need (A=1, B=0, C=0) can now be satisfied, so P3 runs next and releases its allocation on finishing.
o Step 3:
▪ P0's need (A=1, B=1, C=1) can now be satisfied from the enlarged pool, so P0 finishes and releases its allocation.
o Step 4:
▪ With the allocations of P2, P3, and P0 all released, P1's need (A=2, B=3, C=2) can finally be satisfied, and P1 completes.
• Conclusion: The system is in a safe state, because the sequence <P2, P3, P0, P1> allows every process to obtain its maximum need and finish without causing a deadlock.
Deadlock Avoidance with Banker's Algorithm:
• Before allocating a resource to a process, check if the system would remain in a
safe state after allocation.
• If allocating a resource would result in an unsafe state, deny the request to prevent
potential deadlock.


31. Consider the execution of two processes P1 and P2 with the following CPU and
I/O burst times.

Each row shows the required resource for the process and the time that the process needs that resource. For example, "Net 3" in the fourth row says that P2 needs the network card for 3 time units.
(i) If P2 arrives 2 time units after P1 and the scheduling policy is non-
preemptive SJF then calculate the finish time for each process and the CPU idle
time in that duration. (7)
(ii) If P2 arrives 2 time units before P1 and the scheduling policy is preemptive
SJF then calculate the finish time for each process and the CPU idle time in
that duration. (8) (April/May 2024)
(i) Non-preemptive SJF with P2 arriving 2 time units after P1:
• Process execution order: P1 (CPU 3) -> P1 (Net 4) -> P1 (Disk 3) -> P1 (CPU 2) ->
P2 (CPU 4) -> P2 (Net 4) -> P2 (Disk 3) -> P2 (CPU 3) -> P2 (Net 3)
• Finish times:
o P1: 12 time units (3 CPU + 4 Net + 3 Disk + 2 CPU)
o P2: 18 time units (from the time P2 arrives at time unit 2: 4 CPU + 4 Net +
3 Disk + 3 CPU + 3 Net)
• CPU idle time: 2 time units (between when P1 finishes its first CPU burst and P2
arrives)
(ii) Preemptive SJF with P2 arriving 2 time units before P1:
• Process execution order:
P2 (CPU 4) -> P1 (CPU 3) -> P2 (Net 4) -> P1 (Net 4) -> P2 (Disk 3) -> P1 (Disk 3) -
> P2 (CPU 3) -> P1 (CPU 2) -> P2 (Net 3)
• Finish times:
• P1: 13 time units (3 CPU + 4 Net + 3 Disk + 2 CPU)


• P2: 16 time units (4 CPU + 4 Net + 3 Disk + 3 CPU + 3 Net)


• CPU idle time:
0 time units (since preemptive scheduling allows for immediate switching
between processes when a shorter burst time becomes available).
Explanation:
• Non-preemptive SJF:
• P1 starts first as it has the shortest initial CPU burst.
• P2 arrives after 2 time units and is added to the queue.
• Since P1 has a shorter next burst (Net 4) than P2's CPU burst, P1
continues execution until it finishes its entire sequence.
• Preemptive SJF:
• P2 starts first as it arrives earlier and has the shortest initial CPU burst.
• Whenever a new process with a shorter remaining burst time arrives, the
currently executing process is preempted and the new process takes over
the CPU.
• In non-preemptive scheduling, a process will run completely before another
process can start executing even if a shorter burst arrives later.
• Preemptive scheduling allows for more efficient CPU utilization by switching to a
shorter burst whenever available.
32. Consider the following resource-allocation policy. Requests and releases for
resources are allowed at any time. If a request for resources cannot be satisfied
because the resources are not available, then we check any processes that are
blocked, waiting for resources. If they have the desired resources, then these
resources are taken away from them and are given to the requesting process.
The vector of resources for which the waiting process is waiting is increased to
include the resources that were taken away. For example, consider a system
with three resource types and the vector Available initialized to (4,2,2). If
process Po asks for (2,2,1) it gets them. If P1 asks for (1,0,1), it gets them.
Then, if Po asks for (0,0,1), it is blocked (resource not available). If P2 now asks
for (2,0,0), it gets the available one (1,0,0) and one that was allocated to Po
(since Po is blocked). Po's Allocation vector goes down to (1, 2, 1), and its Need
vector goes up to (1, 0, 1). Answer the followings:
(i) Predict whether deadlock occurs or not. If it occurs, give an example. If
not, which necessary condition cannot occur?


(ii) Predict whether indefinite blocking occurs or not. (Nov/Dec 2024)


Example scenario to illustrate indefinite blocking:
• Consider three processes P0, P1, and P2 with resource needs:
o P0 needs (1, 0, 0)
o P1 needs (0, 1, 0)
o P2 needs (0, 0, 1)

• If P0 is currently allocated (1, 0, 0) and P1 requests (0, 1, 0), P1 will be granted the
resource immediately.
• Now, if P2 requests (0, 0, 1), it will be blocked because the resource is not available.
• If P0 then requests (0, 1, 0), it will be granted the resource that was previously held
by P1 since P1 is no longer waiting for it.
• This scenario can repeat continuously, causing P2 to remain blocked indefinitely
waiting for the resource currently held by either P0 or P1.

(i) Predict whether deadlock occurs or not. If it occurs, give an example. If not,
which necessary condition cannot occur?
Deadlock Prediction
• Deadlock occurs when the system enters a state where a set of processes are
waiting for resources held by each other in a circular dependency, with no
process able to proceed.
• To analyze this, let’s check the four necessary conditions for deadlock:
1. Mutual Exclusion: Resources are allocated to one process at a time—this
condition holds.
2. Hold and Wait: Processes may hold some resources while requesting more—
this condition holds.
3. No Preemption: The system policy allows preemption, meaning resources can
be taken away from a blocked process and given to another process. This
violates the no preemption condition, preventing deadlock from occurring.
4. Circular Wait: Deadlock occurs when a circular chain of processes exists,
where each process is waiting for a resource held by the next in the chain.
However, since resources can be taken away, the chain is broken before
deadlock can occur.


Hence deadlock does not occur, because the no-preemption condition is violated.

(ii) Predict whether indefinite blocking occurs or not.


o Indefinite blocking (starvation) occurs when a process waits indefinitely
because resources are continuously taken away and allocated to other
processes.
o In this system, a blocked process can lose its allocated resources to another
process. If a process repeatedly gets resources taken away before it can
complete execution, it may remain in a perpetual wait state, leading to
starvation.
o Conclusion: Indefinite blocking can occur because a process that keeps
getting its resources preempted may never finish execution.
