OS Module 3.Pptx

The document discusses process synchronization and inter-process communication (IPC) in operating systems, highlighting their importance in ensuring data consistency and preventing issues like deadlocks. It outlines types of synchronization, advantages and disadvantages of IPC, and key concepts such as race conditions and critical sections. Additionally, it covers deadlock characterization, detection, and recovery methods, emphasizing the need for careful design to avoid performance and security issues.

OS Module 3

Process co-ordination
Prachii Shrivastava
Synchronization
► Process Synchronization ensures that multiple processes or threads can run concurrently without interfering with each other.
► Its main objective is to let processes access shared resources in a controlled, predictable way, preventing inconsistent data due to concurrent access. Techniques such as semaphores, monitors, and critical sections are used to achieve this.
► In a multi-process system, synchronization is necessary to preserve data consistency and integrity and to avoid deadlocks, race conditions, and other synchronization problems. It plays a crucial role in the correct and efficient functioning of modern operating systems.
► Types of Process Synchronization
► The two primary types of process synchronization in an operating system are:
Competitive: Two or more processes are in competitive synchronization if and only if they compete for access to a shared resource.
Lack of synchronization among competing processes may lead to inconsistency or data loss.
Cooperative: Two or more processes are in cooperative synchronization if and only if they affect each other, i.e. the execution of one process affects the other.
Lack of synchronization among cooperating processes may lead to deadlock.
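Cooperative synchronization is commonly illustrated by the producer-consumer pattern. The following is a minimal Python sketch (my own illustration, not from the slides): a bounded queue makes the two threads cooperate, since each one's progress depends on the other's.

```python
import queue
import threading

# A bounded queue: the producer blocks when it is full and the
# consumer blocks when it is empty, so the two cooperating
# threads directly affect each other's execution.
buffer = queue.Queue(maxsize=2)
results = []

def producer(items):
    for item in items:
        buffer.put(item)      # blocks if the buffer is full
    buffer.put(None)          # sentinel: signal "no more items"

def consumer():
    while True:
        item = buffer.get()   # blocks if the buffer is empty
        if item is None:
            break
        results.append(item * 10)

p = threading.Thread(target=producer, args=([1, 2, 3, 4],))
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(results)  # [10, 20, 30, 40]
```

Without such coordination (i.e. with a lack of synchronization between the cooperating threads), the consumer could run ahead of the producer or both could stall, which is the kind of failure the bullet above describes.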


IPC:
► Processes need to communicate with each other in many situations. For example, to count
occurrences of a word in a text file, the output of the grep command is given to the wc
command, as in grep -o -i <word> <file> | wc -l. Inter-Process Communication (IPC)
is a mechanism that allows processes to communicate: it helps them synchronize
their activities and share resources.
► IPC allows different programs or processes running on a computer to share information
with each other, using techniques such as shared memory, message passing, or files. It ensures
that processes can work together without interfering with each other. Cooperating
processes require an IPC mechanism that allows them to exchange data and information.
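The pipe used by the shell in grep | wc is one of the simplest IPC mechanisms. The sketch below (Python, illustrative only; in a real shell the two ends would be inherited by separate processes) shows the mechanics of writing into one end of an anonymous pipe and reading from the other:

```python
import os

# An anonymous pipe: data written to write_fd can be read
# from read_fd, exactly the channel the shell sets up
# between `grep` and `wc`.
read_fd, write_fd = os.pipe()

# One process plays both roles here to keep the sketch
# self-contained; normally the writer and reader are
# different processes.
os.write(write_fd, b"the os schedules\nan os manages resources\n")
os.close(write_fd)

data = os.read(read_fd, 4096).decode()
os.close(read_fd)

line_count = len(data.splitlines())   # what `wc -l` would report
print(line_count)  # 2
```

Closing the write end is what lets the reader see end-of-input, which is how wc knows grep has finished.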
Advantages of IPC
► Enables processes to communicate with each other and share resources, leading to
increased efficiency and flexibility.
► Facilitates coordination between multiple processes, leading to better overall system
performance.
► Allows for the creation of distributed systems that can span multiple computers or
networks.
► Can be used to implement various synchronization and communication protocols, such as
semaphores, pipes, and sockets.
Disadvantages of IPC
► Increases system complexity, making it harder to design, implement, and debug.
► Can introduce security vulnerabilities, as processes may be able to access or modify data
belonging to other processes.
► Requires careful management of system resources, such as memory and CPU time, to ensure
that IPC operations do not degrade overall system performance.
► Can lead to data inconsistencies if multiple processes try to access or modify the same data
at the same time.
► Overall, the advantages of IPC outweigh the disadvantages: it is a necessary mechanism
for modern operating systems, enabling processes to work together and share resources
flexibly and efficiently. However, IPC systems must be designed and implemented
carefully to avoid potential security vulnerabilities and performance issues.
Conclusion
A fundamental component of contemporary operating systems, IPC allows processes to
efficiently coordinate operations, share resources, and communicate. IPC is beneficial for
developing adaptable and effective systems, despite its complexity and possible security
threats.
------------------------------------------------------------------------------------------------------------------------
Role of Synchronization in IPC:
► In IPC, synchronization is essential for controlling access to shared resources and
guaranteeing that processes do not conflict with one another. Proper synchronization
ensures data consistency and avoids problems like race conditions.
Race condition:
► A race condition is a situation that may occur inside a critical section. It happens when
the result of executing multiple threads in a critical section differs according to the order in
which the threads execute. Race conditions can be avoided if the critical section is treated
as an atomic instruction, or by proper thread synchronization using locks or atomic
variables.
► A race condition happens when two or more processes try to access the same resource at
the same time without proper coordination. This “race” can lead to incorrect results or
unpredictable behavior because the order of execution is not controlled.
► Example: Two people trying to edit the same document at the same time, causing one’s
changes to overwrite the other’s.
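The counter-update race is the classic concrete case. In the Python sketch below (my illustration, not from the slides), two threads each increment a shared counter; guarding the read-modify-write with a lock serializes access so no update is lost:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    global counter
    for _ in range(times):
        # The lock makes the read-modify-write atomic. Without
        # it, both threads could read the same old value and
        # one increment would be silently lost (a race condition).
        with lock:
            counter += 1

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000
```

With the `with lock:` line removed, the final count could come out lower on any run, because the order of execution is not controlled.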

Key Terms in a Race Condition


► Critical Section: A part of the code where shared resources are accessed. It is critical
because if multiple processes enter this section at the same time, it can lead to data
corruption and errors.
► Synchronization: The process of controlling how and when multiple processes or
threads access shared resources, ensuring that only one can enter the critical section at
a time.
► Deadlock: A situation where two or more processes are stuck waiting for each other's
resources, bringing all of them to a standstill.
Critical section:

► A critical section is a part of a program where shared resources like memory
or files are accessed by multiple processes or threads. To avoid issues like
data inconsistency or race conditions, synchronization techniques ensure that
only one process or thread uses the critical section at a time.
► In simple terms, a critical section is a group of instructions/statements or
regions of code that need to be executed atomically, such as accessing a
resource (file, input or output port, global data, etc.).
► In concurrent programming, if one process tries to change the value of shared
data at the same time as another thread tries to read it (i.e., a data
race across threads), the result is unpredictable. Access to such shared
variables (shared memory, shared files, shared ports, etc.) must be
synchronized.
Critical Section problem:

► The use of critical sections in a program can cause a number of issues, including:
► Deadlock: When two or more threads or processes wait for each other to release a
critical section, it can result in a deadlock situation in which none of the threads or
processes can move. Deadlocks can be difficult to detect and resolve, and they can
have a significant impact on a program’s performance and reliability.
► Starvation: When a thread or process is repeatedly prevented from entering a
critical section, it can result in starvation, in which the thread or process is unable
to progress. This can happen if the critical section is held for an unusually long
period of time, or if a high-priority thread or process is always given priority when
entering the critical section.
► Overhead: When using critical sections, threads or processes must acquire and
release locks or semaphores, which can take time and resources. This may reduce
the program’s overall performance.
Advantages of Critical Section in Process Synchronization
► Prevents race conditions: By ensuring that only one process can execute the critical section at
a time, race conditions are prevented, ensuring consistency of shared data.
► Provides mutual exclusion: Critical sections provide mutual exclusion to shared resources,
preventing multiple processes from accessing the same resource simultaneously and causing
synchronization-related issues.
► Reduces CPU utilization: By allowing waiting processes to block rather than busy-wait, critical
sections can reduce CPU utilization, improving overall system efficiency.
► Simplifies synchronization: Critical sections simplify the synchronization of shared resources,
as only one process can access the resource at a time, eliminating the need for more complex
synchronization mechanisms.

Disadvantages of Critical Section in Process Synchronization


► Overhead: Implementing critical sections using synchronization mechanisms like semaphores
and mutexes can introduce additional overhead, slowing down program execution.
► Deadlocks: Poorly implemented critical sections can lead to deadlocks, where multiple
processes are waiting indefinitely for each other to release resources.
► Can limit parallelism: If critical sections are too large or are executed frequently, they can
limit the degree of parallelism in a program, reducing its overall performance.
► Can cause contention: If multiple processes frequently access the same critical section,
contention for the critical section can occur, reducing performance.
Peterson’s Solution to the Critical Section Problem
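The slide gives only the heading, so below is a standard sketch of Peterson's two-process solution, written in Python purely for illustration. The algorithm itself is the classic one (a per-process flag array plus a shared turn variable); note that this sketch runs correctly in CPython only because the interpreter executes bytecode under the GIL, giving roughly sequentially consistent memory — on real hardware, memory barriers would additionally be required.

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to defer
counter = 0             # shared data protected by the critical section

def process(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        # Entry section: announce intent, then yield the turn.
        flag[i] = True
        turn = other
        while flag[other] and turn == other:
            pass                      # busy-wait
        # Critical section: at most one process is here at a time.
        counter += 1
        # Exit section.
        flag[i] = False

t0 = threading.Thread(target=process, args=(0, 5000))
t1 = threading.Thread(target=process, args=(1, 5000))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)
```

Setting `flag[i]` before `turn` is essential: it guarantees mutual exclusion, progress, and bounded waiting for exactly two processes, which is why Peterson's solution is the textbook software answer to the critical section problem.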
Deadlocks
► A deadlock is a situation where a set of processes is blocked because each process is holding
a resource and waiting for another resource acquired by some other process.
► Deadlock is a situation in computing where two or more processes are unable to proceed
because each is waiting for the other to release resources.
► Key concepts include mutual exclusion, resource holding, circular wait, and no preemption.
► Such a situation occurs in operating systems when two or more processes hold
some resources and wait for resources held by the other(s). For example, in the below
diagram, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by
Process 2, and Process 2 is waiting for Resource 1.
Preemptive and non preemptive Resources:
► In the context of deadlock, a "preemptive resource" is a resource that the operating system can take away from a process if
needed, while a "non-preemptive resource" cannot be taken away without causing issues and must be
released voluntarily by the holding process. "No preemption" is a key condition for deadlock: a process
cannot be forced to relinquish a resource it is holding, even if another process needs it to proceed.
► Preemptive resources:
► Can be taken away from a process without causing significant disruption.
► Examples: CPU time in a preemptive scheduling system, memory allocation in most operating systems.
► Helps to avoid deadlocks by allowing the system to forcibly allocate resources to processes that are waiting.
► Non-preemptive resources:
► Cannot be taken away from a process without potentially causing problems like data corruption or system instability.
► Examples: A printer currently being used to print a document, a physical device like a network connection that is actively being
accessed.
► If a process holding a non-preemptive resource needs another resource held by another process, this can lead to deadlock if
neither process can proceed.
How it relates to deadlock:
► Deadlock condition: For a deadlock to occur, all four conditions must be present: mutual exclusion, hold and wait, no
preemption, and circular wait.
Importance of "no preemption":
► The "no preemption" condition is crucial because if the operating system could forcefully take a resource from a
process, it could potentially break a deadlock situation by allocating the needed resource to another waiting process.
Deadlock Characterization

► A deadlock happens in operating system when two or more processes need some
resource to complete their execution that is held by the other process.
► A deadlock occurs only if the four Coffman conditions hold true simultaneously. These conditions
are not independent of one another. They are given as follows −
► 1. Mutual Exclusion
► There should be a resource that can only be held by one process at a time. In
the diagram below, there is a single instance of Resource 1 and it is held by
Process 1 only.
► 2. Hold and Wait
► A process can hold multiple resources and still request more resources from
other processes which are holding them. In the diagram given below, Process
2 holds Resource 2 and Resource 3 and is requesting the Resource 1 which is
held by Process 1.

► 3. No Preemption
► A resource cannot be preempted from a process by force. A process can only
release a resource voluntarily. In the diagram below, Process 2 cannot
preempt Resource 1 from Process 1. It will only be released when Process 1
relinquishes it voluntarily after its execution is complete.
► 4. Circular Wait
► A process is waiting for the resource held by a second process, which is
waiting for the resource held by a third process, and so on, until the last
process is waiting for a resource held by the first process. This forms a
circular chain. For example: Process 1 is allocated Resource 2 and is
requesting Resource 1, while Process 2 is allocated Resource 1 and is
requesting Resource 2. This forms a circular wait loop.
Deadlock Detection And Recovery

► Deadlock Detection and Recovery is the mechanism of detecting and resolving
deadlocks in an operating system. In operating systems, deadlock recovery is
important to keep everything running smoothly. A deadlock occurs when two
or more processes are blocked, waiting for each other to release the
resources they need.
► Deadlock detection is the process of identifying when processes are stuck
waiting for resources held by other processes.
► Recovery is the method of resolving the deadlock to allow the system to
continue functioning.
► Detection is done using techniques like Resource Allocation Graphs (RAG) or
Wait-for Graphs.
► Once a deadlock is detected, recovery methods include process termination,
resource preemption, or process rollback.
Deadlock Detection

► 1. Resource Allocation Graph (RAG) Algorithm

► For deadlock detection, we can run an algorithm that checks for a cycle in the Resource
Allocation Graph. When every resource has a single instance, the presence of a cycle in the
graph is both a necessary and a sufficient condition for deadlock.
► In the diagram, Resource 1 and Resource 2 have single instances. There is a cycle R1 → P1 → R2 →
P2. So, deadlock is confirmed.

► 2. If There are Multiple Instances of Resources

► With multiple instances of a resource, detecting a cycle is a necessary but not a sufficient
condition for deadlock: the system may or may not be deadlocked, depending on the situation.
► For systems with multiple instances of resources, algorithms like the Banker's Algorithm can be adapted
to periodically check for deadlocks.
► 3. Wait-For Graph Algorithm
► The Wait-For Graph Algorithm is a deadlock detection algorithm used in systems where
each resource has a single instance. It constructs a Wait-For Graph, a directed graph
obtained from the Resource Allocation Graph by collapsing the resource nodes: an edge
from Pi to Pj means Pi is waiting for a resource held by Pj. A cycle in this graph indicates
a deadlock.
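The cycle check at the heart of these detection schemes can be sketched as a depth-first search over the wait-for graph. This Python sketch (function name and graph encoding are my own, for illustration) marks each process unvisited, on the DFS stack, or done; finding an edge back to a process still on the stack means a cycle, and hence a deadlock:

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it is waiting on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:    # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2 and P2 waits on P1: circular wait, so deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True
# P1 waits on P2, but P2 waits on nothing: no deadlock.
print(has_cycle({"P1": ["P2"], "P2": []}))       # False
```

The same DFS works on a single-instance RAG; the wait-for form just removes the resource nodes first.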
Deadlock Detection Algorithms
► 1. Resource Allocation Graph (RAG) Algorithm
► Build a RAG − The first step is to build a Resource Allocation Graph (RAG) that shows the allocation and request of
resources in the system. Each resource type is represented by a rectangle, and each process is represented by a
circle.
► Check for cycles − Look for cycles in the RAG. If there is a cycle, it indicates that the system is deadlocked.
► Identify deadlocked processes − Identify the processes involved in the cycle. These processes are deadlocked and
waiting for resources held by other processes.
► Determine resource types − Determine the resource types involved in the deadlock, as well as the resources held
and requested by each process.
► Take corrective action − Take corrective action to break the deadlock by releasing resources, aborting processes, or
preempting resources. Once the deadlock is broken, the system can continue with normal operations.
► Recheck for cycles − After corrective action has been taken, recheck the RAG for cycles. If there are no more
cycles, the system is no longer deadlocked, and normal operations can resume.

Advantages
► Easy to understand and implement
► Can handle multiple types of resources
► Helps identify the processes involved in a deadlock
Disadvantages
► Can be time-consuming for large systems
► Can give false positives if there are multiple requests for the same resource
► Assumes that all resources are pre-allocated, which may not be the case in some systems.
► 2. Wait-for Graph (WFG) Algorithm
► Build a WFG − The first step is to build a Wait-for Graph (WFG) that shows the wait-for relationships
between processes. Each process is represented by a circle, and an arrow is drawn from one process
to another if the former is waiting for a resource held by the latter.
► Check for cycles − Look for cycles in the WFG. If there is a cycle, it indicates that the system is
deadlocked.
► Identify deadlocked processes − Identify the processes involved in the cycle. These processes are
deadlocked and waiting for resources held by other processes.
► Determine resource types − Determine the resource types involved in the deadlock, as well as the
resources held and requested by each process.
► Take corrective action − Take corrective action to break the deadlock by releasing resources,
aborting processes, or preempting resources. Once the deadlock is broken, the system can continue
with normal operations.
► Recheck for cycles − After corrective action has been taken, recheck the WFG for cycles. If there
are no more cycles, the system is no longer deadlocked, and normal operations can resume.
Advantages
► Can handle multiple types of resources
► Useful for systems with a large number of processes
► Provides a clear visualization of the deadlock
Disadvantages
► Can be time-consuming for large systems
► May give false positives if there are multiple requests for the same resource
► Assumes that all resources are pre-allocated, which may not be the case in some systems.
► 3. Banker's Algorithm
► It can be used as a deadlock detection algorithm. In fact, it is one of the most
well-known algorithms for deadlock detection in operating systems.
► It uses 3 data structures −
► Available −
► Vector of length m
► Indicates how many instances of each resource type are currently available.
► Allocation −
► Matrix of size n*m
► Allocation[i,j] indicates how many instances of resource type j are allocated to process Pi.
► Request −
► Matrix of size n*m
► Indicates the outstanding request of each process.
► Request[i,j] tells the number of instances of resource type j that process Pi has
requested.
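Using those three structures, the standard detection algorithm repeatedly looks for a process whose outstanding Request can be satisfied from a working copy of Available, assumes that process runs to completion and returns its Allocation, and marks it finished; any process never marked is deadlocked. A Python sketch (function and variable names assumed for illustration):

```python
def detect_deadlock(available, allocation, request):
    """Return the list of deadlocked process indices.
    available: length-m vector; allocation, request: n x m matrices."""
    n, m = len(allocation), len(available)
    work = available[:]               # Work := Available
    finish = [False] * n

    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Pi's request can be met: assume it completes
                # and releases everything it holds.
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                changed = True

    return [i for i in range(n) if not finish[i]]

# Two processes each holding one unit the other requests: deadlock.
print(detect_deadlock([0, 0], [[1, 0], [0, 1]], [[0, 1], [1, 0]]))  # [0, 1]
# Same allocations, but P1 requests nothing: no deadlock.
print(detect_deadlock([0, 0], [[1, 0], [0, 1]], [[0, 1], [0, 0]]))  # []
```

Unlike the cycle check on a wait-for graph, this matrix form handles multiple instances per resource type correctly.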
Deadlock Recovery:
► Ways of Handling a Deadlock
► There are several ways of handling a deadlock, some of which are mentioned below:
► 1. Process Termination
► To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods:
► Abort all the Deadlocked Processes : Aborting all the processes will certainly break the deadlock, but at great expense.
The deadlocked processes may have computed for a long time; the results of those partial computations must be
discarded and will probably have to be recomputed later.
► Abort one process at a time until the deadlock is eliminated : Abort one deadlocked process at a time, until
the deadlock cycle is eliminated from the system. Due to this method, there may be considerable overhead, because, after
aborting each process, we have to run a deadlock detection algorithm to check whether any processes are still deadlocked.
► Advantages of Process Termination
► It is a simple method for breaking a deadlock.
► It ensures that the deadlock will be resolved quickly, as all processes involved in the deadlock are terminated
simultaneously.
► It frees up resources that were being used by the deadlocked processes, making those resources available for other
processes.
► Disadvantages of Process Termination
► It can result in the loss of data and other resources that were being used by the terminated processes.
► It may cause further problems in the system if the terminated processes were critical to the system’s operation.
► It may result in a waste of resources, as the terminated processes may have already completed a significant amount of work
before being terminated.
► 2. Resource Preemption
► To eliminate deadlocks using resource preemption, we preempt some resources from processes and
give those resources to other processes. This method will raise three issues:
► Selecting a Victim : We must determine which resources and which processes are to be preempted,
choosing them so as to minimize cost.
► Rollback : We must determine what should be done with the process from which resources are
preempted. One simple idea is total rollback. That means aborting the process and restarting it.
► Starvation : In a system, it may happen that the same process is always picked as a victim. As a
result, that process will never complete its designated task. This situation is called Starvation and
must be avoided. One solution is that a process must be picked as a victim only a finite number of
times.
► Advantages of Resource Preemption
► It can help in breaking a deadlock without terminating any processes, thus preserving data and
resources.
► It is more efficient than process termination as it targets only the resources that are causing
the deadlock.
► It can potentially avoid the need for restarting the system.
► Disadvantages of Resource Preemption
► It may lead to increased overhead due to the need for determining which resources and processes
should be preempted.
► It may cause further problems if the preempted resources were critical to the system’s operation.
► It may cause delays in the completion of processes if resources are frequently preempted.
► 3. Priority Inversion
► A technique for breaking deadlocks in real-time systems (more precisely known as
priority inheritance) alters process priorities to prevent standstills. A higher
priority is given to the process that already holds the needed resources, and a lower
priority is given to the process that is still awaiting them. The resulting inversion of
priorities can impair system performance. Additionally, because higher-priority
processes may continue to take precedence over lower-priority processes, this
approach may starve lower-priority processes of resources.
► 4. RollBack
► In database systems, rolling back is a common technique for breaking deadlocks.
When using this technique, the system reverses the transactions of the involved
processes to a time before the deadlock. The system must keep a log of all
transactions and the system’s condition at various points in time in order to use this
method. The transactions can then be rolled back to the initial state and executed
again by the system. This approach may result in significant delays in the
transactions’ execution and data loss.

Deadlock Prevention:

► 1. Eliminate Mutual Exclusion


► It is not possible to violate mutual exclusion because some resources, such as the
tape drive, are inherently non-shareable. For other resources, like printers, we can
use a technique called Spooling (Simultaneous Peripheral Operations Online).
► In spooling, when multiple processes request the printer, their jobs (instructions of
the processes that require printer access) are added to the queue in the spooler
directory. The printer is allocated to jobs on a First-Come, First-Served (FCFS) basis.
In this way, a process does not have to wait for the printer and can continue its work
after adding its job to the queue.
► 2. Eliminate Hold and Wait
► Hold and wait is a condition in which a process holds one resource while
simultaneously waiting for another resource that is being held by a different
process. The process cannot continue until it gets all the required resources.
► There are two ways to eliminate hold and wait:
► By eliminating wait: The process specifies the resources it requires in advance so that it
does not have to wait for allocation after execution starts.
For Example, Process1 declares in advance that it requires both Resource1 and Resource2.
► By eliminating hold: The process has to release all resources it is currently holding before
making a new request.
For Example: Process1 must release Resource2 and Resource3 before requesting Resource1.
► 3. Eliminate No Preemption
► Preemption is temporarily interrupting an executing task and later resuming it. Two ways to
eliminate No Preemption:
► Processes must release resources voluntarily: A process should only give up resources it
holds when it completes its task or no longer needs them.
► Avoid partial allocation: Allocate all required resources to a process at once before it
begins execution. If not all resources are available, the process must wait.
► 4. Eliminate Circular Wait
► To eliminate circular wait for deadlock prevention, we can impose an ordering on resource
allocation.
► Assign a unique number to each resource.
► Processes can only request resources in an increasing order of their numbers.
► This prevents circular chains of processes waiting for resources, as no process can request a
resource lower than what it already holds.
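The ordering rule can be sketched with two locks standing in for numbered resources (Python, names assumed for illustration). Because every thread acquires the lower-numbered lock first, no circular chain of waiters can form:

```python
import threading

# Assign each resource a number; always acquire in increasing order.
resource_1 = threading.Lock()   # resource number 1
resource_2 = threading.Lock()   # resource number 2
log = []

def task(name):
    # Both tasks take resource_1, then resource_2. A circular wait
    # would require some task to take them in the opposite order,
    # which the numbering rule forbids, so no deadlock is possible.
    with resource_1:
        with resource_2:
            log.append(name)

threads = [threading.Thread(target=task, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))  # ['A', 'B']
```

If one task instead took resource_2 before resource_1, the two threads could each hold one lock while waiting for the other — exactly the circular wait this rule prevents.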
