Operating Systems - Unit 3
System Model:
A system model or structure consists of a fixed number of resources to be distributed among a
number of competing processes.
Processes:
1. Request the resource. (If the resource cannot be granted immediately, the process waits until it can acquire it.)
2. Use the resource.
3. Release the resource.
Every process needs some resources to complete its execution, and the resources are granted to it one after another.
A deadlock is a situation in which each process waits for a resource that is assigned to another process. In this situation none of the processes can execute, since the resource each one needs is held by another process that is itself waiting for some other resource to be released.
In other words, a deadlock occurs in an operating system when two or more processes each need, in order to complete their execution, a resource that is held by another of them.
Let us assume that there are three processes P1, P2 and P3. There are three different
resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to
P3.
After some time, P1 requests R2, which is being used by P2. P1 halts its execution since it cannot complete without R2. P2 in turn requests R3, which is being used by P3, so P2 also stops. P3 then requests R1, which is held by P1, so P3 stops as well.
In this scenario a cycle is formed among the three processes. None of them can make progress; they are all waiting. The computer becomes unresponsive because every process is blocked.
Necessary Conditions for Deadlock
1. Mutual Exclusion
A resource can be shared only in a mutually exclusive manner; two processes cannot use the same resource at the same time.
2. Hold and Wait
A process holds one resource while waiting for some other resource at the same time.
3. No Preemption
A resource cannot be taken away from a process by force; it is released only voluntarily by the process holding it.
4. Circular Wait
All the processes must be waiting for resources in a cyclic manner, so that the last process is waiting for a resource held by the first process.
Coffman Conditions
A deadlock occurs if the four Coffman conditions hold true. But these conditions are not mutually
exclusive.
• Mutual Exclusion
There should be a resource that can only be held by one process at a time. In the
diagram below, there is a single instance of Resource 1 and it is held by Process 1 only.
• Hold and Wait
A process can hold at least one resource while waiting for another resource that is
currently held by some other process.
• No Preemption
A resource cannot be preempted from a process by force. A process can only release a
resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from
Process 1. It will only be released when Process 1 relinquishes it voluntarily after its
execution is complete.
• Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource
held by the first process. This forms a circular chain. For example: Process 1 is allocated
Resource 2 and it is requesting Resource 1. Similarly, Process 2 is allocated Resource 1
and it is requesting Resource 2. This forms a circular wait loop.
Deadlock Detection
A deadlock can be detected by a resource scheduler as it keeps track of all the resources that
are allocated to different processes. After a deadlock is detected, it can be resolved using the
following methods:
• All the processes that are involved in the deadlock are terminated. This is not a good
approach as all the progress made by the processes is destroyed.
• Resources can be preempted from some processes and given to others till the deadlock is
resolved.
Deadlock Prevention
It is very important to prevent a deadlock before it can occur. So, the system checks each
resource request before it is granted, to make sure that granting it cannot lead to a deadlock. If
there is even a slight chance that a request may lead to a deadlock in the future, it is never
granted.
Deadlock Avoidance
It is better to avoid a deadlock than to take measures after the deadlock has occurred. A
wait-for graph can be used for this purpose, but it is only practical for smaller systems, since it
can become quite complex to maintain in larger ones.
Strategies for handling Deadlock
1. Deadlock Ignorance
Deadlock ignorance is the most widely used approach among all the mechanisms. It is used by
many operating systems, mainly on end-user machines. In this approach the operating system
assumes that a deadlock never occurs and simply ignores it. This approach is best suited to a
single end-user system where the user uses the machine only for browsing and other everyday
tasks.
There is always a trade-off between correctness and performance. Operating systems like
Windows and Linux focus mainly on performance. The performance of the system decreases if
it runs a deadlock-handling mechanism all the time; if a deadlock happens, say, 1 time out of
100, it is unnecessary to pay that cost continuously.
In these systems the user simply has to restart the computer in the case of a deadlock.
Windows and Linux mainly use this approach.
2. Deadlock prevention
A deadlock happens only when mutual exclusion, hold and wait, no preemption and circular wait
hold simultaneously. If it is possible to violate one of the four conditions at all times, then a
deadlock can never occur in the system.
The idea behind this approach is simple: we have to defeat one of the four conditions. How to
do that in practice, however, is debatable; we will discuss it later in detail.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe or an
unsafe state before every allocation it performs. The process continues as long as the system
stays in a safe state; if an allocation would move the system into an unsafe state, the OS has to
back off from that step.
In simple words, the OS reviews each allocation so that the allocation does not cause a
deadlock in the system.
We will discuss Deadlock avoidance later in detail.
Deadlock Prevention
If we picture a deadlock as a table standing on four legs, then the four legs correspond to the
four conditions which, when they occur simultaneously, cause the deadlock. If we break one of
the legs, the table falls. The same happens with deadlock: if we can violate one of the four
necessary conditions and never let all of them hold together, we can prevent the deadlock.
Let's see how we can prevent each of the conditions.
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource can never be used by
more than one process simultaneously. That is fair enough, but it is also the main reason behind
deadlock: if a resource could be used by more than one process at the same time, no process
would ever have to wait for it.
However, if we are able to stop resources from behaving in a mutually exclusive manner, then
deadlock can be prevented.
Spooling
For a device like a printer, spooling can work. There is memory associated with the printer that
stores the jobs submitted by each process. The printer later collects all the jobs and prints each
of them in FCFS order. With this mechanism a process does not have to wait for the printer; it
can continue whatever it was doing and collect the output when it is produced.
Although spooling can be an effective way to violate mutual exclusion, it suffers from two kinds
of problems.
1. This cannot be applied to every resource.
2. After some point of time, there may arise a race condition between the processes to get
space in that spool.
We cannot force a resource to be used by more than one process at the same time, since that
would not be fair and could cause serious performance problems. Therefore, in practice we
cannot violate mutual exclusion.
2. Hold and Wait
The hold-and-wait condition arises when a process holds one resource while waiting for some
other resource to complete its task. Deadlock occurs because more than one process may be
holding a resource while waiting for another in a cyclic order.
However, we have to find out some mechanism by which a process either doesn't hold any
resource or doesn't wait. That means, a process must be assigned all the necessary resources
before the execution starts. A process must not wait for any resource once the execution has
been started.
!(Hold and wait) = !hold or !wait (negation of hold and wait is, either you don't hold or you
don't wait)
In principle this can be implemented if a process declares all of its resources initially. However,
although this sounds simple, it cannot really be done in a computer system, because a process
cannot determine all the resources it will need in advance.
A process is a set of instructions executed by the CPU, and each instruction may demand
multiple resources at multiple times; the need cannot be fixed by the OS in advance. The
problems with this approach are:
1. It is practically not possible.
2. The possibility of starvation increases, because some process may hold a resource for a
very long time.
3. No Preemption
Deadlock arises partly from the fact that a resource cannot be taken away from a process once
it has been allocated. However, if we take a resource away from a process that is causing a
deadlock, we can prevent the deadlock. This is not a good approach in general, because if we
take away a resource that is in use, all the work the process has done so far may become
inconsistent.
Consider a printer being used by some process. If we take the printer away from that process
and assign it to another process, the data already printed can become inconsistent and useless,
and the process cannot resume printing from where it left off, which causes a performance loss.
4. Circular Wait
To violate circular wait, we can assign an ordering number to each resource and require that
processes request resources only in increasing order of that number. A process holding a
resource can then never request a resource with a lower number, so a cycle of waiting
processes can never form.
Among all the methods, violating circular wait is the only approach that can be implemented
practically, as illustrated in the sketch below.
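The following is a minimal C sketch (not part of the original notes) of the resource-ordering idea. Two pthread mutexes, r1 and r2, stand in for numbered resources; because every thread acquires them in the same fixed order, a circular wait cannot arise. Compile with -lpthread.

#include <pthread.h>
#include <stdio.h>

/* Two mutexes standing in for resources number 1 and number 2.
   The global rule: always lock r1 before r2. */
pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    /* Every thread requests the resources in increasing order,
       never the reverse, so no cycle of waiting threads can form. */
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    printf("a thread is using both resources\n");
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}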
Deadlock avoidance
In deadlock avoidance, the request for any resource will be granted if the resulting state of the
system doesn't cause deadlock in the system. The state of the system will continuously be
checked for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS in advance the maximum number of
resources it may request to complete its execution.
The simplest and most useful approach states that the process should declare the maximum
number of resources of each type it may ever need. The Deadlock avoidance algorithm
examines the resource allocations so that there can never be a circular wait condition.
Safe and Unsafe States
The resource allocation state of a system can be defined by the instances of available and
allocated resources, and the maximum instance of the resources demanded by the processes.
A state of a system recorded at some random time is shown below.
Resources Assigned
Process   Type 1   Type 2   Type 3   Type 4
A         3        0        2        2
B         0        0        1        1
C         1        1        1        0
D         2        1        4        0

Resources Still Needed
Process   Type 1   Type 2   Type 3   Type 4
A         1        1        0        0
B         0        1        1        2
C         1        2        1        0
D         2        1        1        2

1. E = (7 6 8 4)
2. P = (6 2 8 3)
3. A = (1 4 0 1)
The tables and the vectors E, P and A above describe the resource-allocation state of a system
with 4 processes and 4 types of resources. The first table shows the instances of each resource
assigned to each process, and the second table shows the instances of each resource that each
process still needs.
Vector E represents the total instances of each resource existing in the system, vector P the
instances currently assigned to processes, and vector A the instances that are not in use
(A = E - P).
A state of the system is called safe if the system can allocate all the resources requested by all
the processes without entering into deadlock.
If the system cannot fulfill the request of all processes then the state of the system is
called unsafe.
The key to the deadlock-avoidance approach is that a request for resources is approved only if
the resulting state is also a safe state.
Resource Allocation Graph
The resource allocation graph is the pictorial representation of the state of a system. As its name
suggests, the resource allocation graph is the complete information about all the processes
which are holding some resources or waiting for some resources.
It also contains the information about all the instances of all the resources whether they are
available or being used by the processes.
In Resource allocation graph, the process is represented by a Circle while the Resource is
represented by a rectangle. Let's see the types of vertices and edges in detail.
Vertices are mainly of two types, Resource and process. Each of them will be represented by a
different shape. Circle represents process while rectangle represents resource. A resource can
have more than one instance. Each instance will be represented by a dot inside the rectangle.
Edges in the RAG are also of two types: one represents an assignment and the other represents
a process waiting for a resource. The image above shows each of them.
A resource is shown as assigned to a process if the tail of the arrow is attached to an instance
of the resource and the head is attached to the process.
A process is shown as waiting for a resource if the tail of an arrow is attached to the process
while the head is pointing towards the resource.
Example
Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2. Each
resource has a single instance.
According to the graph, R1 is being used by P1, P2 is holding R2 and waiting for R1, P3 is
waiting for R1 as well as R2.
The graph is deadlock free since no cycle is being formed in the graph.
Now consider instead a graph in which R3 is assigned to P1, R1 to P2 and R2 to P3, while P1
requests R1, P2 requests R2 and P3 requests R3. Analyzing this graph, we find that a cycle is
formed and the system satisfies all four conditions of deadlock.
Operating Systems - Unit 3 - Lecture 25 - Deadlock Detection and Recovery GNIT, Hyderabad.
Allocation Matrix
An allocation matrix can be formed from the resource allocation graph of a system. In the
allocation matrix, an entry is made for each assigned resource. For example, in the following
matrix an entry is made in the row of P1 and the column of R3, since R3 is assigned to P1.
Process R1 R2 R3
P1 0 0 1
P2 1 0 0
P3 0 1 0
Request Matrix
In the request matrix, an entry is made for each requested resource. In the following example,
P1 needs R1, therefore an entry is made in the row of P1 and the column of R1.
Process R1 R2 R3
P1 1 0 0
P2 0 1 0
P3 0 0 1
Avail = (0,0,0)
No resource is currently available in the system, and no process is about to release one. Each
process needs at least one more resource to complete, so each will keep holding the resource it
already has.
Since the available resources cannot satisfy the request of even a single process, the system is
deadlocked, as we determined earlier when we detected a cycle in the graph.
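As an illustration (not part of the original notes), the following C sketch runs the detection algorithm on the Allocation and Request matrices above with Avail = (0,0,0); since no process's request can be satisfied, every process is reported as deadlocked.

#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes P1..P3 */
#define R 3   /* resource types R1..R3 */

int main(void)
{
    /* Allocation and Request matrices from the example above, Avail = (0,0,0). */
    int alloc[P][R]   = { {0,0,1}, {1,0,0}, {0,1,0} };
    int request[P][R] = { {1,0,0}, {0,1,0}, {0,0,1} };
    int avail[R]      = { 0, 0, 0 };
    bool finished[P]  = { false, false, false };

    /* Repeatedly look for a process whose request can be satisfied with the
       currently available resources; if found, pretend it runs to completion
       and releases everything it holds. */
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (request[i][j] > avail[j]) can_run = false;
            if (can_run) {
                for (int j = 0; j < R; j++) avail[j] += alloc[i][j];
                finished[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < P; i++)
        if (!finished[i])
            printf("P%d is deadlocked\n", i + 1);   /* prints all three here */
    return 0;
}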
Deadlock Detection and Recovery
In this approach the OS does not apply any mechanism to avoid or prevent deadlocks; it accepts
that a deadlock may well occur. To get rid of deadlocks, the OS periodically checks the system
for a deadlock, and when it finds one it recovers the system using some recovery technique.
The main task of the OS is detecting the deadlocks. The OS can detect the deadlocks with the
help of Resource allocation graph.
In single instanced resource types, if a cycle is being formed in the system then there will
definitely be a deadlock. On the other hand, in multiple instanced resource type graph,
detecting a cycle is not just enough. We have to apply the safety algorithm on the system by
converting the resource allocation graph into the allocation matrix and request matrix.
For Resource
Preempt the resource
We can snatch one of the resources from its owner (a process) and give it to another process, in
the expectation that that process will complete its execution and release the resource sooner.
Choosing which resource to snatch, however, is difficult.
Rollback to a safe state
The system passes through various states before it gets into the deadlocked state. The
operating system can roll the system back to a previous safe state; for this purpose the OS
needs to implement checkpointing at every state. The moment we get into deadlock, we roll
back all the allocations to return to the previous safe state.
For Process
Kill a process
Killing a process can solve our problem, but the bigger concern is deciding which process to kill.
Generally, the operating system kills the process that has done the least amount of work so far.
Kill all processes
This is not a recommended approach, but it can be used if the problem becomes very serious.
Killing all the processes leads to inefficiency, because every process then has to execute again
from the start.
Banker's Algorithm
The Banker's algorithm is used to avoid deadlock and to allocate resources safely to each
process in the computer system. Before allowing an allocation, it examines whether the
resulting state is safe, and in this way it helps the operating system share the resources among
all the processes successfully.
The algorithm is named after the way a banker decides whether a loan can be sanctioned
without the bank becoming unable to serve its other customers. In this section we will learn the
Banker's Algorithm in detail and solve problems based on it.
To understand the Banker's Algorithm, first consider a real-world analogy. Suppose a bank has
'n' account holders and a total amount of money 'T'. When an account holder applies for a loan,
the bank first subtracts the requested loan amount from its total cash and approves the loan
only if the remaining cash is still enough to satisfy the demands of its other customers. This
precaution ensures that if another person later applies for a loan or withdraws some amount, the
bank can still operate without any disruption to its functioning.
The Banker's algorithm needs to know three things:
1. How much of each resource each process could possibly request. This is denoted by the
[MAX] request.
2. How much of each resource each process is currently holding. This is denoted by the
[ALLOCATED] resource.
3. How much of each resource is currently available in the system. This is denoted by the
[AVAILABLE] resource.
Safety Algorithm
1. Let Work and Finish be vectors of length m (resource types) and n (processes). Initialize
Work = Available and Finish[i] = false for every process i.
2. Find a process i for which both of the following hold:
   Finish[i] == false
   Need[i] <= Work
   If no such i exists, go to step 4.
3. Set Work = Work + Allocation[i] and Finish[i] = true, then go back to step 2.
4. If Finish[i] == true for all i, the system is in a safe state for all processes.
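A minimal C sketch of the safety check follows (an illustration, not part of the notes); is_safe() and the sizes NP and NR are assumed names. The main() function feeds it the Allocation, Need and Available data from the earlier "Resources Assigned" / "Resources Still Needed" tables and vector A, for which it reports a safe state.

#include <stdio.h>
#include <stdbool.h>

#define NP 4   /* processes A, B, C, D from the earlier tables */
#define NR 4   /* resource types 1..4 */

/* Returns true if the state (avail, alloc, need) is safe. */
bool is_safe(int avail[NR], int alloc[NP][NR], int need[NP][NR])
{
    int work[NR];
    bool finish[NP] = { false };

    for (int j = 0; j < NR; j++)
        work[j] = avail[j];                      /* Work = Available */

    for (int done = 0; done < NP; ) {
        bool found = false;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < NR; j++)
                if (need[i][j] > work[j]) ok = false;
            if (ok) {                            /* pretend process i runs and releases */
                for (int j = 0; j < NR; j++) work[j] += alloc[i][j];
                finish[i] = true;
                found = true;
                done++;
            }
        }
        if (!found) return false;                /* nobody can finish: unsafe */
    }
    return true;
}

int main(void)
{
    /* Allocation, remaining Need and Available taken from the earlier example. */
    int alloc[NP][NR] = { {3,0,2,2}, {0,0,1,1}, {1,1,1,0}, {2,1,4,0} };
    int need[NP][NR]  = { {1,1,0,0}, {0,1,1,2}, {1,2,1,0}, {2,1,1,2} };
    int avail[NR]     = { 1, 4, 0, 1 };

    printf("State is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}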
Resource-Request Algorithm
When a process P[i] makes a request R(i) for resources:
1. If Request(i) <= Need(i), go to step 2; otherwise raise an error, because the process has
exceeded its maximum claim for the resource. As the expression suggests:
   If Request(i) <= Need
   Go to step 2;
2. If Request(i) <= Available, go to step 3; otherwise the process P[i] must wait, since the
resources are not available for use.
3. Pretend that the resources have been allocated to P[i]:
   Available = Available - Request(i)
   Allocation(i) = Allocation(i) + Request(i)
   Need(i) = Need(i) - Request(i)
   If the resulting resource-allocation state is safe, the resources are actually allocated to P[i].
   If the new state is unsafe, P[i] must wait for the request R(i), and the old resource-allocation
   state is restored.
Example (Allocation and Max matrices for four of the processes, with three resource types):

Process   Allocation   Max
P2        2 0 0        3 2 2
P3        3 0 2        9 0 2
P4        2 1 1        2 2 2
P5        0 0 2        4 3 3

Need = Max - Allocation:

Process   Need
P1        7 4 3
P2        1 2 2
P4        0 1 1
P5        4 3 1
Race Condition
When more than one process executes the same code or accesses the same memory or a
shared variable, there is a possibility that the output, or the value of the shared variable, is
wrong; the processes are effectively racing each other, each claiming that its result is the
correct one. This situation is known as a race condition. When several processes access and
manipulate the same data concurrently, the outcome depends on the particular order in which
the accesses take place.
A race condition is a situation that may occur inside a critical section. This happens when the
result of multiple thread execution in the critical section differs according to the order in which
the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Also, proper thread synchronization using locks or atomic variables can prevent race
conditions.
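A small C illustration of a race condition (not from the original notes): two threads increment a shared counter without any synchronization, so increments can be lost and the final value is usually less than expected. Compile with -lpthread.

#include <pthread.h>
#include <stdio.h>

/* Shared counter updated by two threads without any lock. */
long counter = 0;

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;            /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the printed value is usually smaller
       because updates to 'counter' interleave and overwrite each other. */
    printf("counter = %ld\n", counter);
    return 0;
}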
A typical process alternates between four code sections: the entry section, in which it requests
permission to enter its critical section; the critical section itself; the exit section; and the
remainder section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
• Progress : If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not executing in
their remainder section can participate in deciding which will enter in the critical
section next, and the selection can not be postponed indefinitely.
• Bounded Waiting : A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software based solution to the critical section
problem. In Peterson’s solution, we have two shared variables:
• boolean flag[i]: initialized to FALSE; initially no one is interested in entering the critical
section.
• int turn: the process whose turn it is to enter the critical section.
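Below is a minimal C sketch of Peterson's logic for two threads (an illustration, not the notes' own code); flag[] and turn are the shared variables described above, and shared_counter is an assumed piece of shared data. On modern hardware additional memory barriers would be required, since compilers and CPUs may reorder these accesses.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Shared variables of Peterson's solution for two threads (0 and 1). */
volatile bool flag[2] = { false, false };
volatile int turn = 0;
int shared_counter = 0;           /* data protected by the critical section */

void enter_critical(int i)        /* i is 0 or 1 */
{
    int other = 1 - i;
    flag[i] = true;               /* I am interested */
    turn = other;                 /* but I let the other thread go first */
    while (flag[other] && turn == other)
        ;                         /* busy-wait until it is safe to enter */
}

void exit_critical(int i)
{
    flag[i] = false;              /* I am no longer interested */
}

void *worker(void *arg)
{
    int id = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter_critical(id);
        shared_counter++;         /* critical section */
        exit_critical(id);
    }
    return NULL;
}

int main(void)
{
    int id0 = 0, id1 = 1;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}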
In the TestAndSet-based solution, mutual exclusion and progress are preserved, but bounded
waiting cannot be guaranteed.
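As a sketch (not from the notes), C11's atomic_flag can play the role of the TestAndSet instruction: atomic_flag_test_and_set() atomically sets the flag and returns its old value. Mutual exclusion and progress hold, but a particular thread may spin forever while others repeatedly win the lock, which is why bounded waiting is not guaranteed. Compile with -lpthread.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

/* The lock word; TestAndSet atomically sets it and returns the old value. */
atomic_flag lock = ATOMIC_FLAG_INIT;
int counter = 0;

void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;                      /* spin until the previous value was 'clear' */
}

void release(void)
{
    atomic_flag_clear(&lock);
}

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        acquire();
        counter++;             /* critical section */
        release();
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* 200000: mutual exclusion holds */
    return 0;
}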
Semaphores
A semaphore is an integer variable that, apart from initialization, is accessed only through two
atomic operations: wait() (also written P), which decrements the value and blocks the calling
process if the value would become negative, and signal() (also written V), which increments the
value and wakes up a waiting process if there is one. Semaphores are used to solve the
classical synchronization problems described below.
2. Dining-Philosophers Problem:
The Dining Philosophers Problem considers K philosophers seated around a circular table, with
one chopstick placed between each pair of philosophers. A philosopher may eat only if he can
pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its two
neighbouring philosophers, but not by both. The problem involves allocating these limited
resources to a group of processes in a deadlock-free and starvation-free manner.
Producer-Consumer solution
In computing, the producer–consumer problem (also known as the bounded-buffer problem) is a
classic example of a multi-process synchronization problem.
The problem describes two processes, the producer and the consumer, which share a common
fixed-size buffer and use it as a queue.
The producer’s job is to generate data, put it into the buffer, and start again. At the same time,
the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time.
Problem: Given the common fixed-size buffer, the task is to make sure that the producer can’t
add data into the buffer when it is full and the consumer can’t remove data from an empty
buffer.
Solution: The producer is to either go to sleep or discard data if the buffer is full. The next time
the consumer removes an item from the buffer, it notifies the producer, who starts to fill the
buffer again. In the same manner, the consumer can go to sleep if it finds the buffer to be
empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer.
#include <stdio.h>
#include <stdlib.h>

// Initialize a mutex to 1
int mutex = 1;
// Number of full slots as 0
int full = 0;
// Number of empty slots as the size of the buffer
int empty = 10, x = 0;

// Function to produce an item and add it to the buffer
void producer()
{
    --mutex;        // acquire the (simulated) mutex
    ++full;
    --empty;
    // Item produced
    x++;
    printf("\nProducer produces item %d", x);
    ++mutex;        // release the mutex
}

// Function to consume an item and remove it from the buffer
void consumer()
{
    --mutex;
    --full;
    ++empty;
    printf("\nConsumer consumes item %d", x);
    x--;
    ++mutex;
}

// Driver Code
int main()
{
    int n, i;
    printf("\n1. Press 1 for Producer"
           "\n2. Press 2 for Consumer"
           "\n3. Press 3 for Exit");
    for (i = 1; i > 0; i++) {
        printf("\nEnter your choice: ");
        scanf("%d", &n);
        switch (n) {
        case 1:
            // If mutex is 1 and empty is non-zero, then it is
            // possible to produce
            if ((mutex == 1) && (empty != 0))
                producer();
            else
                printf("Buffer is full!");
            break;
        case 2:
            // If mutex is 1 and full is non-zero, then it is
            // possible to consume
            if ((mutex == 1) && (full != 0))
                consumer();
            else
                printf("Buffer is empty!");
            break;
        // Exit Condition
        case 3:
            exit(0);
            break;
        }
    }
}
Classical problems of Synchronization with Semaphore Solution
The Dining Philosopher Problem – K philosophers are seated around a circular table with one
chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the
two chopsticks adjacent to him. Each chopstick may be picked up by either of its two
neighbouring philosophers, but not by both.
process P[i]
while true do
{
    THINK;
    PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
    EAT;
    PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
}
There are three states of the philosopher: THINKING, HUNGRY, and EATING. Here there are
two semaphores: Mutex and a semaphore array for the philosophers. Mutex is used such that
no two philosophers may access the pickup or putdown at the same time. The array is used to
control the behavior of each philosopher. But, semaphores can result in deadlock due to
programming errors.
Code –
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define N 5
#define THINKING 2
#define HUNGRY 1
#define EATING 0
#define LEFT (phnum + 4) % N
#define RIGHT (phnum + 1) % N

int state[N];
int phil[N] = { 0, 1, 2, 3, 4 };
sem_t mutex;
sem_t S[N];

// let philosopher phnum eat if neither neighbour is eating
void test(int phnum)
{
    if (state[phnum] == HUNGRY && state[LEFT] != EATING
        && state[RIGHT] != EATING) {
        state[phnum] = EATING;
        sleep(2);
        printf("Philosopher %d is Eating\n", phnum + 1);
        sem_post(&S[phnum]);   // wake the philosopher blocked in take_fork()
    }
}

// take up chopsticks
void take_fork(int phnum)
{
    sem_wait(&mutex);
    state[phnum] = HUNGRY;
    printf("Philosopher %d is Hungry\n", phnum + 1);
    test(phnum);               // eat right away if possible
    sem_post(&mutex);
    sem_wait(&S[phnum]);       // otherwise block until a neighbour signals
    sleep(1);
}

// put down chopsticks
void put_fork(int phnum)
{
    sem_wait(&mutex);
    state[phnum] = THINKING;
    printf("Philosopher %d is Thinking\n", phnum + 1);
    test(LEFT);                // a neighbour may now be able to eat
    test(RIGHT);
    sem_post(&mutex);
}

void* philosopher(void* num)
{
    while (1) {
        int* i = num;
        sleep(1);
        take_fork(*i);
        sleep(0);
        put_fork(*i);
    }
}

int main()
{
    int i;
    pthread_t thread_id[N];
    sem_init(&mutex, 0, 1);
    for (i = 0; i < N; i++)
        sem_init(&S[i], 0, 0);
    for (i = 0; i < N; i++)
        pthread_create(&thread_id[i], NULL, philosopher, &phil[i]);
    for (i = 0; i < N; i++)
        pthread_join(thread_id[i], NULL);
    return 0;
}
Note – these programs compile only with C compilers that provide the semaphore and pthread
libraries (link with -lpthread).
Readers–Writers Problem – in the following program, semaphore x protects the shared
readercount, and semaphore y gives a writer exclusive access, so that any number of readers
may read at the same time but a writer works alone.
#include<semaphore.h>
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<pthread.h>

sem_t x, y;     // x protects readercount, y gives writers exclusive access
pthread_t writerthreads[100], readerthreads[100];
int readercount = 0;

void *reader(void *arg)
{
    sem_wait(&x);
    readercount++;
    if (readercount == 1)
        sem_wait(&y);          // first reader locks writers out
    sem_post(&x);
    printf("%d reader is inside\n", readercount);
    sleep(1);                  // reading
    sem_wait(&x);
    readercount--;
    if (readercount == 0)
        sem_post(&y);          // last reader lets writers in
    sem_post(&x);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&y);              // exclusive access for the writer
    printf("Writer is writing\n");
    sem_post(&y);
    return NULL;
}

int main()
{
    int n2, i;
    printf("Enter the number of readers:");
    scanf("%d", &n2);
    printf("\n");
    sem_init(&x, 0, 1);
    sem_init(&y, 0, 1);
    for (i = 0; i < n2; i++) {
        pthread_create(&readerthreads[i], NULL, reader, NULL);
        pthread_create(&writerthreads[i], NULL, writer, NULL);
    }
    for (i = 0; i < n2; i++) {
        pthread_join(writerthreads[i], NULL);
        pthread_join(readerthreads[i], NULL);
    }
    return 0;
}
Sleeping Barber Problem
Problem: The analogy is based upon a hypothetical barber shop with one barber. The shop has
one barber, one barber chair, and n chairs where waiting customers, if there are any, can sit.
∙ If there is no customer, then the barber sleeps in his own chair.
∙ When a customer arrives, he has to wake up the barber.
∙ If there are many customers and the barber is cutting a customer’s hair, then the remaining
customers either wait if there are empty chairs in the waiting room or they leave if no chairs
are empty.
Solution: The solution uses three semaphores. The first, customers, counts the number of
customers present in the waiting room (the customer in the barber chair is not counted, because
he is not waiting). The second, barber (0 or 1), tells whether the barber is idle or working. The
third, mutex, provides the mutual exclusion required when the shared state is updated. The
solution also keeps a count of the waiting customers; if this count equals the number of chairs in
the waiting room, the next arriving customer leaves the barbershop.
When the barber shows up in the morning, he executes the procedure barber, causing him to
block on the semaphore customers, because it is initially 0; the barber then goes to sleep until
the first customer arrives.
When a customer arrives, he executes the customer procedure. He first acquires mutex to enter
the critical region; if another customer enters right after him, the second one can do nothing until
the first has released mutex. The customer then compares the number of waiting customers
with the number of chairs: if fewer customers are waiting than there are chairs, he sits down;
otherwise he releases mutex and leaves.
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
// Function prototypes...
void *customer(void *num);
void *barber(void *);

int main(int argc, char *argv[])
{
    printf("\nSleepBarber.c\n\n");
    printf("A solution to the sleeping barber problem using semaphores.\n");
    // ... semaphore setup and thread creation follow here ...
}
Semaphores are very useful in process synchronization and multithreading. But how to use one
in real life, for example say in C Language?
Well, we have the POSIX semaphore library on Linux systems. Let's learn how to use it. The
basic logic of a semaphore is simple, as presented here, but it cannot be coded directly in plain
C, because the operations must be atomic: writing them by hand would allow a context switch in
the middle of an operation and result in a mess.
The POSIX system in Linux presents its own built-in semaphore library. To use it, we have to :
1. Include semaphore.h
2. Compile the code by linking with -lpthread -lrt
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t mutex;

void* thread(void* arg)
{
    //wait
    sem_wait(&mutex);
    printf("\nEntered..\n");
    //critical section
    sleep(4);
    //signal
    printf("\nJust Exiting...\n");
    sem_post(&mutex);
    return NULL;
}

int main()
{
    sem_init(&mutex, 0, 1);
    pthread_t t1,t2;
    pthread_create(&t1,NULL,thread,NULL);
    sleep(2);
    pthread_create(&t2,NULL,thread,NULL);
    pthread_join(t1,NULL);
    pthread_join(t2,NULL);
    sem_destroy(&mutex);
    return 0;
}
The output should be of the form:
Entered..
Just Exiting...
Entered..
Just Exiting...
but not:
Entered..
Entered..
Just Exiting...
Just Exiting...
The monitor is one of the ways to achieve Process synchronization. The monitor is supported by
programming languages to achieve mutual exclusion between processes. For example Java
Synchronized methods. Java provides wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined together in a special kind
of module or a package.
2. The processes running outside the monitor can’t access the internal variable of the monitor
but can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
Syntax: a monitor is declared as a module (a special kind of package) that groups together the
shared variables, the condition variables and the procedures that operate on them; only the
procedure names are visible from outside.
Condition Variables:
Two different operations are performed on the condition variables of the monitor:
wait
signal
Let's say we have 2 condition variables:
condition x, y; // declaring the variables
Wait operation
x.wait(): a process performing a wait operation on a condition variable is suspended and placed
in the blocked queue of that condition variable.
Signal operation
x.signal(): when a process performs a signal operation on a condition variable, one of the
blocked processes is given a chance to resume.
If (x block queue empty)
// Ignore signal
else
// Resume a process from block queue.
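Monitors are a language feature, but their behaviour can be approximated in C. The sketch below (not from the notes) emulates a tiny monitor around a one-slot buffer: a pthread mutex enforces "only one process active inside the monitor at a time", and a pthread condition variable plays the role of x, with pthread_cond_wait() and pthread_cond_signal() standing in for x.wait() and x.signal().

#include <pthread.h>
#include <stdbool.h>

/* A tiny "monitor" guarding a single shared slot. The mutex enforces
   mutual exclusion inside the monitor; the condition variable behaves
   like the monitor's condition x. */
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x            = PTHREAD_COND_INITIALIZER;
static bool slot_full = false;
static int  slot;

void monitor_put(int value)
{
    pthread_mutex_lock(&monitor_lock);          /* enter the monitor */
    while (slot_full)
        pthread_cond_wait(&x, &monitor_lock);   /* x.wait(): block, releasing the lock */
    slot = value;
    slot_full = true;
    pthread_cond_signal(&x);                    /* x.signal(): wake one waiter */
    pthread_mutex_unlock(&monitor_lock);        /* leave the monitor */
}

int monitor_get(void)
{
    pthread_mutex_lock(&monitor_lock);
    while (!slot_full)
        pthread_cond_wait(&x, &monitor_lock);
    int value = slot;
    slot_full = false;
    pthread_cond_signal(&x);
    pthread_mutex_unlock(&monitor_lock);
    return value;
}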
Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less error-prone than
using techniques such as semaphores.
Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language: the compiler must
generate code for them. This gives the compiler the additional burden of having to know what
operating-system facilities are available to control access to critical sections in concurrent
processes. Some languages that do support monitors are Java, C#, Visual Basic, Ada and
Concurrent Euclid.
https://rextester.com/l/c_online_compiler_gcc
https://www.onlinegdb.com
Inter-Process Communication (IPC)
∙ Message Passing
o e.g. sockets, pipes, messages, queues
∙ Memory based IPC
o shared memory, memory mapped files
∙ Higher level semantics
o files, RPC
∙ Synchronization primitives
Message Passing
∙ Send/Receive messages
∙ OS creates and maintains a channel
o buffer, FIFO queue
∙ OS provides interfaces to processes
o a port
o processes send/write messages to this port
o processes receive/read messages from this port
Advantages
∙ simplicity: the OS creates and manages the channel and handles the synchronization
Disadvantages
∙ Overheads: every message is copied from the sender's address space into the channel and
then into the receiver's address space
1. Pipes
2. Message queues
∙ Carry "messages" among processes
∙ OS management includes priorities, scheduling of message delivery
∙ APIs: Sys-V and POSIX
Memory-based IPC – Disadvantages
∙ explicit synchronization
∙ communication protocol, shared buffer management
o programmer's responsibility
Which is better?
Overheads:
1. Message Passing: must perform multiple copies
2. Shared Memory: must establish all the mappings between the processes' address spaces
and the shared memory pages
Thus, it depends.
Copy vs Map
The goal of both is to transfer data from one address space into the target address space:
message passing copies the data, while memory-based IPC maps a shared region into both
address spaces.
IPC Synchronization
Processes communicating through shared memory must synchronize their accesses, just as
threads accessing shared state in a single address space do, using primitives such as mutexes
and semaphores that are placed in the shared region or provided by the OS.
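A minimal sketch of such IPC synchronization (not from the notes, Linux-specific): a pthread mutex marked PTHREAD_PROCESS_SHARED is placed in a memory region shared by a parent and a child created with fork(), so both processes can update a counter safely. Compile with -lpthread; without the shared lock the final count would typically be smaller than 200000.

#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Put a mutex and a counter in a region shared by parent and child. */
    struct shared { pthread_mutex_t lock; int counter; };
    struct shared *s = mmap(NULL, sizeof(struct shared), PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->lock, &attr);
    s->counter = 0;

    if (fork() == 0) {                 /* child */
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&s->lock);
            s->counter++;
            pthread_mutex_unlock(&s->lock);
        }
        return 0;
    }
    for (int i = 0; i < 100000; i++) { /* parent */
        pthread_mutex_lock(&s->lock);
        s->counter++;
        pthread_mutex_unlock(&s->lock);
    }
    wait(NULL);
    printf("counter = %d\n", s->counter);   /* 200000 with the shared lock */
    return 0;
}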
// A parent sends a string to a child through one pipe; the child appends
// another string and sends the result back through a second pipe.
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    // We use two pipes
    // First pipe to send input string from parent
    // Second pipe to send concatenated string from child
    int fd1[2], fd2[2];
    char input_str[100];
    pid_t p;
    if (pipe(fd1)==-1)
    {
        fprintf(stderr, "Pipe Failed" );
        return 1;
    }
    if (pipe(fd2)==-1)
    {
        fprintf(stderr, "Pipe Failed" );
        return 1;
    }
    scanf("%s", input_str);
    p = fork();
    if (p < 0)
    {
        fprintf(stderr, "fork Failed" );
        return 1;
    }
    // Parent process
    else if (p > 0)
    {
        char concat_str[100];
        close(fd1[0]);   // close the reading end of the first pipe
        write(fd1[1], input_str, strlen(input_str) + 1);   // send the string
        close(fd1[1]);
        close(fd2[1]);   // close the writing end of the second pipe
        wait(NULL);      // wait for the child to finish
        read(fd2[0], concat_str, 100);   // read the concatenated string
        printf("Concatenated string %s\n", concat_str);
        close(fd2[0]);
    }
    // child process
    else
    {
        close(fd1[1]); // Close writing end of first pipe
        char concat_str[100];
        read(fd1[0], concat_str, 100);      // read the parent's string
        strcat(concat_str, ".appended");    // append an illustrative suffix
        close(fd1[0]);
        close(fd2[0]);   // close the reading end of the second pipe
        write(fd2[1], concat_str, strlen(concat_str) + 1);   // send it back
        close(fd2[1]);
    }
    return 0;
}
// Fragment of a separate example: a loop that reads strings from the user
// until an end string (end_str) is entered.
while (1) {
    printf("Enter string: ");
    fgets(readbuf, sizeof(readbuf), stdin);
    stringlen = strlen(readbuf);
    readbuf[stringlen - 1] = '\0';            // strip the trailing newline
    end_process = strcmp(readbuf, end_str);   // 0 once the end string is typed
// Fragment of a System V message-queue example: the variables used with
// msgget()/msgsnd()/msgrcv().
int main(void) {
    struct my_msgbuf buf;
    int msqid;
    int toend;
    key_t key;
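The fragment above only declares the variables; here is a minimal, self-contained sketch (assumed, not the original program) that sends and receives one message through a System V queue using msgget(), msgsnd(), msgrcv() and msgctl().

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct my_msgbuf {
    long mtype;           // message type, must be > 0
    char mtext[100];      // message payload
};

int main(void)
{
    // derive a key from the current directory
    key_t key = ftok(".", 'B');

    // msgget creates a message queue and returns its identifier
    int msqid = msgget(key, 0666 | IPC_CREAT);

    // send one message of type 1
    struct my_msgbuf buf = { 1, "Hello from the message queue" };
    msgsnd(msqid, &buf, sizeof(buf.mtext), 0);

    // receive it back (any message of type 1)
    struct my_msgbuf rcv;
    msgrcv(msqid, &rcv, sizeof(rcv.mtext), 1, 0);
    printf("Data received: %s\n", rcv.mtext);

    // remove the queue
    msgctl(msqid, IPC_RMID, NULL);
    return 0;
}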
Shared Memory (System V shmget/shmat) – writing process:
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int main()
{
    // ftok to generate unique key
    key_t key = ftok("shmfile",65);
    int shmid = shmget(key,1024,0666|IPC_CREAT); // shmget returns an identifier in shmid
    char *str = (char*) shmat(shmid,(void*)0,0); // shmat to attach to shared memory
    strcpy(str, "Hello from the writer");
    printf("Data written in memory: %s\n",str);
    shmdt(str); // detach from shared memory
    return 0;
}
Reading process:
#include <iostream>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
using namespace std;
int main()
{
    // ftok to generate unique key
    key_t key = ftok("shmfile",65);
    int shmid = shmget(key,1024,0666|IPC_CREAT); // shmget returns an identifier in shmid
    char *str = (char*) shmat(shmid,(void*)0,0); // shmat to attach to shared memory
    printf("Data read from memory: %s\n",str);
    shmdt(str); // detach from shared memory
    shmctl(shmid,IPC_RMID,NULL); // destroy the shared memory segment
    return 0;
}