Module:4 Concurrency
Dr. P. Anandan
Mail: [email protected]
Mobile: 8056445321
Module:4 Concurrency
• Blocking send. The sending process sends the message and blocks
until the message is received by the receiving process or mailbox.
• Non-blocking send. The sending process sends the message and
resumes operation immediately.
• Blocking receive. The receiver blocks until a message is available.
• Non-blocking receive. The receiver retrieves either a valid
message or a null. Asynchronous message passing of this kind allows
more parallelism: a process can express interest in receiving
messages from multiple ports simultaneously.
Cont..
Buffering
• Buffering queues can be implemented in three ways:
• Zero capacity. The queue has a maximum length of zero;
thus, the link cannot have any messages waiting in it. In this
case, the sender must block until the recipient receives the
message.
• Bounded capacity. The queue has finite length n; thus, at
most n messages can reside in it.
• Unbounded capacity. The queue's length is potentially
infinite; thus, any number of messages can wait in it. The
sender never blocks.
• An Example (SM): POSIX Shared Memory
• An Example (MP): Windows XP
Synchronization
• Process synchronization (or simply synchronization) is the way in
which processes that share the same memory space are managed in an
operating system.
• Race Condition
– When more than one process is running the same code or modifying
the same shared memory, there is a risk that the result or value of
the shared data may be incorrect; each process "races" to have its
result recorded. This condition is called a race condition.
– Ex: Bank balance. 'A' has account no. 123456789 with balance Rs.
5000 and performs an ATM withdrawal of Rs. 2000; at the same time, B
performs a GPay transfer of Rs. 4000 to A's account.
– According to A, Balance = 5000 - 2000 = 3000; according to B,
Balance = 5000 + 4000 = 9000. Both updates race against each other
to be written into the Balance variable, and the correct final
balance of Rs. 7000 can be lost.
– How do we avoid this race condition?
By treating the particular section of code as a critical section
that can be accessed by only a single process at a time. Such a
section is also called an atomic section.
Critical Section Problem
• A part of code that can only be accessed by a single
process at any moment is known as a critical section.
• So, only one process will be allowed to change at any
given moment. The other processes have to wait until the
data is free to be used.
• If a read and a write happen at the same time, there is a high
chance that the data read by process B will be incorrect, since A
keeps changing the data.
Solution to Critical Section Problem
• A solution to the critical section problem must satisfy the
following three conditions:
• Mutual exclusion: If a process is running in the critical
section, no other process should be allowed to run in that
section at that time.
• Progress: If no process is in the critical section and other
processes are waiting outside the critical section to execute, then
one of those processes must be permitted to enter the critical
section.
• Bounded Waiting (No starvation): Starvation means a
process keeps waiting forever to access the critical
section but never gets a chance. A process should not
wait forever to enter inside the critical section.
Synchronization Primitives/ Solutions To The
Critical Section Problem
• Peterson's solution
• Synchronization Hardware
• Bakery algorithm
Peterson's solution
• While one process is executing in its critical section, the
other process executes the remainder of its code.
• In Peterson's solution, we have two shared variables:
• A Boolean Flag[]: a Boolean array Flag, initialized to FALSE.
The Flag array indicates which process wants to enter the
critical section.
• int Turn: an integer variable Turn that indicates the
process number which is ready to enter the critical section.
Reference:
https://www.codingninjas.com/codestudio/libra
ry/peterson-s-algorithm-for-mutual-exclusion
• Consider two processes, Pi and Pj, that want to enter the Critical
Section (CS) at the same time.
• Pi sets its flag variable to 'True' (Pi is ready to enter the CS)
and the turn variable to 'j' (if Pj wants to enter the CS, let it go
first).
• Similarly, in Pj, the flag variable of Pj is set to True and the
turn variable to 'i'.
• Since the Turn variable is shared between both processes, the last
write wins; in this example, the value inside Turn ends up as 'i'.
• Now inside Pi's code:
• Note: if a while loop ends with a semicolon and its condition is
true, control keeps looping on that line until the condition fails,
i.e. the next line is not executed.
while(flag[j] && turn == j); #Condition becomes false (T && F)
So Pi will enter inside CS and execute.
At the same time inside Pj Code:
while(flag[i] && turn == i); #Condition becomes True (T && T)
So it keeps looping on that line until the condition fails, and Pj
will not enter the CS.
Thus only one process enters the CS.
Synchronization Hardware
• Test and Set Lock (TSL):
– Peterson solution was a software based solution for
synchronization problem (SP).
– TSL is a hardware solution to the SP.
– TSL has a shared lock variable which can take only
two values 0 (unlock) or 1 (lock).
– Before entering the Critical Section, the process
checks the lock variable; if it is 1, the process waits
until the lock becomes free (i.e. lock = 0).
– If the CS is free, the process takes the lock and
executes the CS.
– Ex: Room locked inside.
• TestAndSet is a Boolean function that returns either 0 or 1.
• Initialize lock = 0 [0 / open / false all mean the same].
• The lock variable is shared; that is why we pass its address.
P1: while (TestAndSet(&lock))   # while (TestAndSet(0))
  • Inside TestAndSet: *target = address of lock
  • rv = value of lock          # rv = 0
  • *target: the value of lock is changed to 1
  • return rv                   # returns 0
while (0);  # condition is false, so the loop breaks and P1 enters the
critical section
P2: At the same instant of time, if P2 wishes to enter the CS:
The lock variable is shared, so its value is now 1 (updated by P1).
  – while (TestAndSet(&lock))   # while (TestAndSet(1))
  • Inside TestAndSet: *target = address of lock
  • rv = value of lock          # rv = 1
  • *target: the value of lock stays 1
  • return rv                   # returns 1
while (1);  # condition is true, so P2 keeps spinning ("does nothing")
and cannot enter the CS.
Once P1 completes the CS, P1 sets lock = false # 0
So now if P2 tries, it can access the CS.
Bakery Algorithm
• This algorithm works well when the
number of processes is more than two.
• It works like the token system in a
canteen/bakery: everyone is given a
unique number, and when that number
is displayed they can go and collect
their order.
• P1 to P5 try to access the CS.
• In order to enter the CS, a process
must invoke the lock function,
lock(i), where i = 0, 1, 2, …
• Num is a global variable.
• Ex: the processes are P1, P2, P3, P4, P5.

Initially, the value of num assigned to each process:

          P1      P2      P3      P4      P5
  Index   num[0]  num[1]  num[2]  num[3]  num[4]
  num     0       0       0       0       0

Let's say P3 executes first among all:
num[2] = max(num[0] to num[4]) + 1 = 0 + 1 = 1

          P1      P2      P3      P4      P5
  Index   num[0]  num[1]  num[2]  num[3]  num[4]
  num     0       0       1       0       0

Similarly, if P4 executes next:
num[3] = max(num[0] to num[4]) + 1 = 1 + 1 = 2

          P1      P2      P3      P4      P5
  Index   num[0]  num[1]  num[2]  num[3]  num[4]
  num     0       0       1       2       0

Similarly, P5, P2 and P1 start their execution next:

          P1      P2      P3      P4      P5
  Index   num[0]  num[1]  num[2]  num[3]  num[4]
  num     5       4       1       2       3
// The producer runs only when the buffer has an empty slot
if ((mutex == 1) && (empty != 0)) {
    producer();
}
// Otherwise, print Buffer is full
else {
    printf("Buffer is full!");
}
Producer
void producer()
{
// Decrease mutex value by 1
--mutex;
// Increase the number of full slots by 1
++full;
// Decrease the number of empty slots by 1
--empty;
// Item produced
x++;
printf("\n Producer produces item %d", x);
// Increase mutex value by 1
++mutex;
}
Consumer
if ((mutex == 1) && (full != 0)) {
    consumer();
}
// Otherwise, print Buffer is empty
else {
    printf("Buffer is empty!");
}

void consumer()
{
    full ?
    empty ?
    x ?
}
Consumer
if ((mutex == 1) && (full != 0)) {
    consumer();
}
// Otherwise, print Buffer is empty
else {
    printf("Buffer is empty!");
}

void consumer()
{
    // Decrease mutex value by 1
    --mutex;
    // Decrease the number of full slots by 1
    --full;
    // Increase the number of empty slots by 1
    ++empty;
    printf("\nConsumer consumes item %d", x);
    x--;
    // Increase mutex value by 1
    ++mutex;
}
Dining Philosopher Problem
The dining philosophers problem was
formulated by Edsger Dijkstra in 1965. The
problem illustrates synchronization issues in
systems made up of concurrent tasks. The
objective of the problem is to have progress
and avoid deadlock in systems. Also, the
entities needing resources should get them
in reasonable time and not face starvation
of resources.
Solution to the Dining Philosophers Problem
i. Ordering of resources