OS Module 3.Pptx
Process co-ordination
Prachii Shrivastava
Synchronization
► Process Synchronization is used in a computer system to ensure that multiple processes or threads can run concurrently
without interfering with each other.
► The main objective of process synchronization is to ensure that multiple processes access shared resources without
interfering with each other and to prevent the possibility of inconsistent data due to concurrent access. To achieve this,
various synchronization techniques such as semaphores, monitors, and critical sections are used.
► In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid the risk of
deadlocks and other synchronization problems. Process synchronization is an important aspect of modern operating
systems, and it plays a crucial role in ensuring the correct and efficient functioning of multi-process systems.
► Process Synchronization is the coordination of execution of multiple processes in a multi-process system to ensure that
they access shared resources in a controlled and predictable manner. It aims to resolve the problem of race conditions
and other synchronization issues in a concurrent system.
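As an illustration of the techniques listed above, the following sketch (not from the slides; it assumes a POSIX system with pthreads) uses a binary semaphore so that two threads can update a shared counter without interfering with each other.

```c
/* Minimal sketch: two threads update a shared counter, serialized with a
 * POSIX semaphore used as a binary lock. Build with: gcc -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t lock;                  /* binary semaphore guarding the shared data */
static long shared_counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&lock);            /* enter critical section */
        shared_counter++;           /* access the shared resource */
        sem_post(&lock);            /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&lock, 0, 1);          /* initial value 1 => mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);   /* always 200000 */
    sem_destroy(&lock);
    return 0;
}
```

Because the semaphore is initialised to 1, at most one thread is inside the critical section at a time, so the final count is always 200000.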
► Types of Process Synchronization
► The two primary types of process synchronization in an Operating System are:
Competitive: Two or more processes are said to be in Competitive Synchronization if and only if they compete for access to a
shared resource.
Lack of synchronization among competing processes may lead to either inconsistency or data loss.
Cooperative: Two or more processes are said to be in Cooperative Synchronization if and only if they affect each other, i.e.
the execution of one process affects the other.
Lack of synchronization among cooperating processes may lead to deadlock.
IPC:
► Processes need to communicate with each other in many situations. For example, to count
occurrences of a word in a text file, the output of the grep command needs to be given to the wc
command, as in grep -o -i <word> <file> | wc -l (see the pipe sketch below). Inter-Process Communication or
IPC is a mechanism that allows processes to communicate. It helps processes synchronize
their activities and share resources.
► Inter process communication (IPC) allows different programs or processes running on a
computer to share information with each other. IPC allows processes to communicate by
using different techniques like sharing memory, sending messages, or using files. It ensures
that processes can work together without interfering with each other. Cooperating
processes require an Inter Process Communication (IPC) mechanism that will allow them to
exchange data and information.
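As a rough sketch of one IPC technique mentioned above (an anonymous pipe, assuming a POSIX system, not taken from the slides), the program below has a child process write a message that the parent reads, which is the same mechanism the shell uses for grep ... | wc -l.

```c
/* Minimal sketch of IPC via an anonymous pipe: the child writes a message,
 * the parent reads it. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                     /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {             /* child: producer */
        close(fd[0]);
        const char *msg = "3 matches found";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    /* parent: consumer */
    close(fd[1]);
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```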
Advantages of IPC
► Enables processes to communicate with each other and share resources, leading to
increased efficiency and flexibility.
► Facilitates coordination between multiple processes, leading to better overall system
performance.
► Allows for the creation of distributed systems that can span multiple computers or
networks.
► Can be used to implement various synchronization and communication protocols, such as
semaphores, pipes, and sockets.
► Disadvantages of IPC
► Increases system complexity, making it harder to design, implement, and debug.
► Can introduce security vulnerabilities, as processes may be able to access or modify data
belonging to other processes.
► Requires careful management of system resources, such as memory and CPU time, to ensure
that IPC operations do not degrade overall system performance.
► Can lead to data inconsistencies if multiple processes try to access or modify the same data
at the same time.
► Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary mechanism
for modern operating systems and enables processes to work together and share resources
in a flexible and efficient manner. However, care must be taken to design and implement
IPC systems carefully, in order to avoid potential security vulnerabilities and performance
issues.
► Conclusion
A fundamental component of contemporary operating systems, IPC allows processes to
efficiently coordinate operations, share resources, and communicate. IPC is beneficial for
developing adaptable and effective systems, despite its complexity and possible security
threats.
Role of Synchronization in IPC:
► In IPC, synchronization is essential for controlling access to shared resources and
guaranteeing that processes do not conflict with one another. Proper synchronization ensures
data consistency and avoids problems such as race conditions.
Race condition:
► A race condition is a situation that may occur inside a critical section. It happens when
the result of executing a critical section depends on the order in which the threads execute it.
Race conditions in critical sections can be avoided if the critical
section is treated as an atomic instruction. Proper thread synchronization using locks
or atomic variables can also prevent race conditions.
► A race condition happens when two or more processes try to access the same resource at
the same time without proper coordination. This “race” can lead to incorrect results or
unpredictable behavior because the order of execution is not controlled.
► Example: Two people trying to edit the same document at the same time, causing one’s
changes to overwrite the other’s.
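Along the lines of the document-editing example above, the sketch below (an assumed pthreads example, not from the slides) lets two threads increment the same counter with no coordination. The read-modify-write in counter++ is not atomic, so updates can be lost and the printed total is usually less than 200000.

```c
/* Minimal sketch of a race condition: two threads increment the same
 * counter with no synchronization, so updates may be lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;           /* shared, unprotected */

static void *racer(void *arg)
{
    for (int i = 0; i < 100000; i++)
        counter++;                 /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, racer, NULL);
    pthread_create(&t2, NULL, racer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 200000; the interleaving of the two threads decides the
     * actual result. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Guarding the increment with a pthread_mutex_t (or using an atomic variable) removes the race and makes the result deterministic.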
► The use of critical sections in a program can cause a number of issues, including:
► Deadlock: When two or more threads or processes wait for each other to release a
critical section, it can result in a deadlock situation in which none of the threads or
processes can move. Deadlocks can be difficult to detect and resolve, and they can
have a significant impact on a program’s performance and reliability.
► Starvation: When a thread or process is repeatedly prevented from entering a
critical section, it can result in starvation, in which the thread or process is unable
to progress. This can happen if the critical section is held for an unusually long
period of time, or if a high-priority thread or process is always given priority when
entering the critical section.
► Overhead: When using critical sections, threads or processes must acquire and
release locks or semaphores, which can take time and resources. This may reduce
the program’s overall performance.
Advantages of Critical Section in Process Synchronization
► Prevents race conditions: By ensuring that only one process can execute the critical section at
a time, race conditions are prevented, ensuring consistency of shared data.
► Provides mutual exclusion: Critical sections provide mutual exclusion to shared resources,
preventing multiple processes from accessing the same resource simultaneously and causing
synchronization-related issues.
► Reduces CPU utilization: By allowing processes to wait without wasting CPU cycles, critical
sections can reduce CPU utilization, improving overall system efficiency.
► Simplifies synchronization: Critical sections simplify the synchronization of shared resources,
as only one process can access the resource at a time, eliminating the need for more complex
synchronization mechanisms.
► A deadlock happens in an operating system when two or more processes need a
resource to complete their execution that is held by another process.
► A deadlock can arise only if the following four Coffman conditions hold
simultaneously. They are given as follows −
► 1. Mutual Exclusion
► There should be a resource that can only be held by one process at a time. For
example, a single instance of Resource 1 is held by
Process 1 only.
► 2. Hold and Wait
► A process can hold multiple resources and still request more resources from
other processes which are holding them. For example, Process
2 may hold Resource 2 and Resource 3 while requesting Resource 1, which is
held by Process 1.
► 3. No Preemption
► A resource cannot be preempted from a process by force; a process can only
release a resource voluntarily. For example, Process 2 cannot
preempt Resource 1 from Process 1; it will only be released when Process 1
relinquishes it voluntarily after its execution is complete.
► 4. Circular Wait
► A process is waiting for the resource held by a second process, which is
waiting for the resource held by a third process, and so on, until the last
process is waiting for a resource held by the first process. This forms a
circular chain. For example: Process 1 is allocated Resource 2 and is
requesting Resource 1, while Process 2 is allocated Resource 1 and is
requesting Resource 2. This forms a circular wait loop (a minimal code
sketch of this pattern is shown below).
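A rough sketch of that circular-wait pattern, assuming pthreads (not from the slides): each thread holds one mutex and then requests the one held by the other, so all four Coffman conditions hold and the program normally hangs.

```c
/* Minimal sketch of circular wait: each thread holds one mutex and
 * requests the other, so the program usually deadlocks. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;  /* Resource 1 */
static pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;  /* Resource 2 */

static void *process1(void *arg)
{
    pthread_mutex_lock(&res1);     /* hold Resource 1 */
    sleep(1);                      /* give process2 time to grab Resource 2 */
    pthread_mutex_lock(&res2);     /* ... and wait for Resource 2 */
    pthread_mutex_unlock(&res2);
    pthread_mutex_unlock(&res1);
    return NULL;
}

static void *process2(void *arg)
{
    pthread_mutex_lock(&res2);     /* hold Resource 2 */
    sleep(1);
    pthread_mutex_lock(&res1);     /* ... and wait for Resource 1 */
    pthread_mutex_unlock(&res1);
    pthread_mutex_unlock(&res2);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);        /* never returns once the deadlock forms */
    pthread_join(t2, NULL);
    printf("finished (no deadlock this run)\n");
    return 0;
}
```

Making both threads acquire the mutexes in the same order breaks the circular-wait condition and prevents the deadlock.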
Deadlock Detection And Recovery
Advantages
► Easy to understand and implement
► Can handle multiple types of resources
► Helps identify the processes involved in a deadlock
Disadvantages
► Can be time-consuming for large systems
► Can give false positives if there are multiple requests for the same resource
► Assumes that all resources are pre-allocated, which may not be the case in some systems.
► 2. Wait-for Graph (WFG) Algorithm
► Build a WFG − The first step is to build a Wait-for Graph (WFG) that shows the wait-for relationships
between processes. Each process is represented by a circle, and an arrow is drawn from one process
to another if the former is waiting for a resource held by the latter.
► Check for cycles − Look for cycles in the WFG. If there is a cycle, it indicates that the system is
deadlocked (see the cycle-detection sketch after this list).
► Identify deadlocked processes − Identify the processes involved in the cycle. These processes are
deadlocked and waiting for resources held by other processes.
► Determine resource types − Determine the resource types involved in the deadlock, as well as the
resources held and requested by each process.
► Take corrective action − Take corrective action to break the deadlock by releasing resources,
aborting processes, or preempting resources. Once the deadlock is broken, the system can continue
with normal operations.
► Recheck for cycles − After corrective action has been taken, recheck the WFG for cycles. If there
are no more cycles, the system is no longer deadlocked, and normal operations can resume.
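A minimal sketch of the cycle-check step (not from the slides): a depth-first search over an assumed example wait-for graph, where wfg[i][j] = 1 means process i is waiting for a resource held by process j.

```c
/* Minimal sketch of WFG cycle detection via DFS with node colouring. */
#include <stdio.h>

#define N 4                               /* number of processes (assumed) */

static int wfg[N][N] = {
    /* P0 waits for P1, P1 for P2, P2 for P0 -> cycle; P3 is independent */
    {0, 1, 0, 0},
    {0, 0, 1, 0},
    {1, 0, 0, 0},
    {0, 0, 0, 0},
};

static int state[N];                      /* 0 = unvisited, 1 = on stack, 2 = done */

static int dfs(int p)
{
    state[p] = 1;                         /* p is on the current DFS path */
    for (int q = 0; q < N; q++) {
        if (!wfg[p][q]) continue;
        if (state[q] == 1) return 1;      /* back edge => cycle => deadlock */
        if (state[q] == 0 && dfs(q)) return 1;
    }
    state[p] = 2;
    return 0;
}

int main(void)
{
    for (int p = 0; p < N; p++)
        if (state[p] == 0 && dfs(p)) {
            printf("deadlock detected (cycle reachable from P%d)\n", p);
            return 0;
        }
    printf("no cycle: system is not deadlocked\n");
    return 0;
}
```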
Advantages
► Can handle multiple types of resources
► Useful for systems with a large number of processes
► Provides a clear visualization of the deadlock
Disadvantages
► Can be time-consuming for large systems
► May give false positives if there are multiple requests for the same resource
► Assumes that all resources are pre-allocated, which may not be the case in some systems.
► 3. Banker's Algorithm
► The Banker's Algorithm is best known for deadlock avoidance, and a variant of it is
also one of the most well-known algorithms for deadlock detection in operating systems.
► It uses three data structures −
► Available −
► Vector of length m
► Indicates how many instances of each resource type are currently available.
► Allocation −
► Matrix of size n*m
► Allocation[i, j] indicates how many instances of resource type j are allocated to process i.
► Request −
► Matrix of size n*m
► Indicates the outstanding request of each process.
► Request[i, j] gives the number of instances of resource type j that process Pi is currently
requesting (a small detection sketch using these structures follows below).
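A minimal sketch of the detection variant built on these three structures (not from the slides; the matrix values below are an assumed example with n = 3 processes and m = 2 resource types, where P0 and P1 wait on each other while P2 can finish).

```c
/* Minimal sketch of deadlock detection using Available, Allocation, Request. */
#include <stdio.h>

#define N 3   /* processes */
#define M 2   /* resource types */

int Available[M]     = {0, 0};
int Allocation[N][M] = {{1, 0}, {0, 1}, {0, 0}};
int Request[N][M]    = {{0, 1}, {1, 0}, {0, 0}};

int main(void)
{
    int Work[M], Finish[N] = {0};
    for (int j = 0; j < M; j++) Work[j] = Available[j];

    int progressed = 1;
    while (progressed) {
        progressed = 0;
        for (int i = 0; i < N; i++) {
            if (Finish[i]) continue;
            int can_finish = 1;
            for (int j = 0; j < M; j++)
                if (Request[i][j] > Work[j]) { can_finish = 0; break; }
            if (can_finish) {
                /* assume Pi finishes and returns everything it holds */
                for (int j = 0; j < M; j++) Work[j] += Allocation[i][j];
                Finish[i] = 1;
                progressed = 1;
            }
        }
    }

    int deadlocked = 0;
    for (int i = 0; i < N; i++)
        if (!Finish[i]) { printf("P%d is deadlocked\n", i); deadlocked = 1; }
    if (!deadlocked) printf("no deadlock detected\n");
    return 0;
}
```

With the assumed values, the sketch reports P0 and P1 as deadlocked, since neither request can be satisfied even after P2 finishes.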
Deadlock Recovery:
► Ways of Handling a Deadlock
► There are several ways of handling a deadlock, some of which are mentioned below:
► 1. Process Termination
► To eliminate the deadlock, we can simply kill one or more processes. For this, we use two methods:
► Abort all the Deadlocked Processes : Aborting all the processes will certainly break the deadlock, but at great expense.
The deadlocked processes may have been computing for a long time, and the results of those partial computations must be
discarded and will probably have to be recomputed later.
► Abort one process at a time until the deadlock is eliminated : Abort one deadlocked process at a time, until
the deadlock cycle is eliminated from the system. Due to this method, there may be considerable overhead, because, after
aborting each process, we have to run a deadlock detection algorithm to check whether any processes are still deadlocked.
► Advantages of Process Termination
► It is a simple method for breaking a deadlock.
► It ensures that the deadlock will be resolved quickly, as all processes involved in the deadlock are terminated
simultaneously.
► It frees up resources that were being used by the deadlocked processes , making those resources available for other
processes.
► Disadvantages of Process Termination
► It can result in the loss of data and other resources that were being used by the terminated processes.
► It may cause further problems in the system if the terminated processes were critical to the system’s operation.
► It may result in a waste of resources , as the terminated processes may have already completed a significant amount of work
before being terminated.
► 2. Resource Preemption
► To eliminate deadlocks using resource preemption, we preempt some resources from processes and
give those resources to other processes. This method will raise three issues:
► Selecting a Victim : We must determine which resources and which processes are to be preempted,
and also the order of preemption, so as to minimize the cost.
► Rollback : We must determine what should be done with the process from which resources are
preempted. One simple idea is total rollback. That means aborting the process and restarting it.
► Starvation : In a system, it may happen that the same process is always picked as a victim. As a
result, that process will never complete its designated task. This situation is called Starvation and
must be avoided. One solution is that a process must be picked as a victim only a finite number of
times.
► Advantages of Resource Preemption
► It can help in breaking a deadlock without terminating any processes, thus preserving data and
resources.
► It is more efficient than process termination as it targets only the resources that are causing
the deadlock .
► It can potentially avoid the need for restarting the system.
► Disadvantages of Resource Preemption
► It may lead to increased overhead due to the need for determining which resources and processes
should be preempted.
► It may cause further problems if the preempted resources were critical to the system’s operation.
► It may cause delays in the completion of processes if resources are frequently preempted.
► 3. Priority Inversion
► A technique for breaking deadlocks in real-time systems is called priority inversion.
This approach alters the priorities of the processes to prevent deadlocks. A higher
priority is given to the process that already has the needed resources, and a lower
priority is given to the process that is still awaiting them. The resulting inversion
of priorities can impair system performance. Additionally, because higher-priority
processes may continue to take precedence over lower-priority processes, this
approach may starve lower-priority processes of resources.
► 4. Rollback
► In database systems, rolling back is a common technique for breaking deadlocks.
When using this technique, the system reverses the transactions of the involved
processes to a point in time before the deadlock. To use this method, the system must
keep a log of all transactions and of the system's state at various points in time. The
transactions can then be rolled back to that earlier state and executed
again by the system. This approach may result in significant delays in the
transactions' execution and in data loss.
Deadlock Prevention: