
BCSE303L - Operating Systems

Module:4 Concurrency

Dr. P. Anandan
Mail: [email protected]
Mobile: 8056445321
Module:4 Concurrency

Inter-process communication, Synchronization - Implementing
synchronization primitives (Peterson's solution, Bakery
algorithm, synchronization hardware) - Semaphores – Classical
synchronization problems, Monitors: Solution to Dining
Philosophers problem – IPC in Unix, Multiprocessors and Locking
- Scalable Locks - Lock-free coordination.
Inter-process Communication (IPC)
• Inter-process communication is the mechanism provided by the
operating system that allows processes to communicate with
each other, i.e., to transfer data from one process to another.
• Processes within a system may be independent or cooperating.
• A cooperating process can affect or be affected by other processes,
including by sharing data.
• An independent process cannot affect or be affected by the other
processes executing in the system.

• Ex: Water management system of a plant:
– Process 1: Data acquisition (e.g., temperature, moisture, wind)
– Process 2: Processes the data from Process 1
– Process 3: Control system, which instructs the valve to open/close
Cont..
Cooperation is important for several reasons:
– Information sharing - Several processes may need to access
the same data.
– Modularity - dividing the system functions into separate
processes or threads.
– Computation speedup - A task can often be run faster if it is
broken into subtasks and distributed among different
processes.
– Convenience - Even an individual user may work on many
tasks at the same time.
Ex: Since water management is automatic, the farmer can
attend to the remaining jobs on his farm.
Cont..
Two models of IPC
– Shared memory
– Message passing

• In the shared-memory model, a region of memory that is
shared by the cooperating processes is established in RAM.
Processes can then exchange information by reading and
writing data in the shared region.
• In the message-passing model, communication takes place by
means of messages exchanged between the cooperating
processes, using a message buffer maintained inside the
kernel and system calls such as send() and receive().
Cont..
Figure: (a) Message passing. (b) Shared memory.
Cont..
• Message passing is useful for exchanging smaller
amounts of data, because no conflicts need be avoided.
• Message passing is also easier to implement than shared
memory for inter-computer communication.
• Shared memory allows maximum speed and convenience
of communication.
• Shared memory is faster than message passing, as
message passing systems are typically implemented using
system calls and thus require the more time-consuming task
of kernel intervention.
• In contrast, in shared memory systems, system calls are
required only to establish shared-memory regions.
Cont..
• If processes P and Q want to communicate, they
must send messages to and receive messages from
each other; a communication link must exist
between them.
• This link can be implemented in different ways:
➢ Direct or indirect communication
➢ Synchronous or asynchronous communication
➢ Automatic or explicit buffering
Cont..
• Under direct communication, each process that wants to
communicate must explicitly name the recipient or sender of
the communication.
• In this scheme, the send() and receive() primitives are defined
as:
Q -> send(P, message) - Send a message to process P.
P -> receive(Q, message) - Receive a message from process Q.

• In indirect communication, the messages are sent to and
received from mailboxes, or ports:
send(A, message) - Send a message to mailbox A.
receive(A, message) - Receive a message from mailbox A.
Cont..

• Synchronous or asynchronous communication is also known as
blocking or non-blocking. A synchronous operation blocks a
process until the operation completes. An asynchronous operation
is non-blocking and only initiates the operation.

• Blocking send: The sending process sends the message and blocks
until the message is received by the receiving process or mailbox.
• Non-blocking send: The sending process sends the message and
resumes operation.
• Blocking receive: The receiver blocks until a message is available.
• Non-blocking receive: Asynchronous message passing allows more
parallelism. A process can express its interest in receiving messages
from multiple ports simultaneously. The receiver retrieves either a
valid message or a null.
Cont..
Buffering
• Buffering queues can be implemented in three ways:
• Zero capacity. The queue has a maximum length of zero;
thus, the link cannot have any messages waiting in it. In this
case, the sender must block until the recipient receives the
message.
• Bounded capacity. The queue has finite length n; thus, at
most n messages can reside in it.
• Unbounded capacity. The queue's length is potentially
infinite; thus, any number of messages can wait in it. The
sender never blocks.
• An Example (SM): POSIX Shared Memory
• An Example (MP): Windows XP
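The slide names POSIX shared memory as the shared-memory (SM) example. Below is a minimal writer-side sketch of that API (shm_open, ftruncate, mmap); the segment name /module4_shm and the message are illustrative assumptions, a cooperating reader would map the same name, and on Linux the program is linked with -lrt.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/module4_shm";   /* hypothetical segment name */
    const size_t size = 4096;

    /* Create (or open) the shared-memory object and size it. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) == -1) { perror("ftruncate"); return 1; }

    /* Map the object into this process's address space. */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    /* Write into the shared region; a cooperating reader process would
       shm_open() and mmap() the same name and read this string. */
    strcpy(ptr, "Hello from the producer");

    munmap(ptr, size);
    close(fd);
    /* The reader (not shown) would call shm_unlink(name) when finished. */
    return 0;
}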
Synchronization
• Process synchronization (or simply synchronization) is the way by which
processes that share the same memory space are managed in an
operating system.
• Race Condition
– When more than one process is either running the same code or
modifying the same memory, there is a risk that the result or value of
the shared data may be incorrect. All the processes "race" to have
their own result recorded. This condition is called a race condition.
– Ex: Bank balance – 'A' has account no. 123456789 with a balance of Rs.
5000 and performs an ATM withdrawal of Rs. 2000; at the same time B
performs a GPay transfer of Rs. 4000 to A's account.
– According to A, Balance = 5000 - 2000 = 3000, and according to B, Balance = 5000 + 4000
= 9000. Both updates race against each other to write the Balance
variable.
– How do we avoid this race condition?
By treating the relevant piece of code as a critical section that
can be accessed by only a single process at a time. Such a
section is also called an atomic section.
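As a concrete illustration (not the bank example itself, just the same pattern, and an addition to the slides): two threads increment a shared counter with no synchronization, and the unsynchronized read-modify-write usually loses updates. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

long counter = 0;               /* shared data */

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;              /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the race condition usually yields less. */
    printf("counter = %ld\n", counter);
    return 0;
}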
Critical Section Problem
• A part of the code that can be accessed by only a single
process at any moment is known as a critical section.
• So, only one process will be allowed to change the shared data at any
given moment. The other processes have to wait until the
data is free to be used.
• If a read and a write happen at the same time, there is a high
chance that the data read by process B will be
incorrect, since A keeps changing the data.
Solution to Critical Section Problem
• A solution to the critical section problem must satisfy the
following three conditions:
• Mutual exclusion: If a process is running in the critical
section, no other process should be allowed to run in that
section at that time.
• Progress: If no process is currently in the critical
section and other processes are waiting outside the critical
section to execute, then one of the waiting processes must be
permitted to enter the critical section.
• Bounded Waiting (no starvation): Starvation means a
process keeps waiting forever to access the critical
section but never gets a chance. A process should not
have to wait forever to enter the critical section.
Synchronization Primitives/ Solutions To The
Critical Section Problem

• Peterson's solution
• Synchronization Hardware
• Bakery algorithm
Peterson's solution
• Peterson's solution handles two processes: while one process is
executing in its critical section, the other executes the rest of its code.
• In Peterson's solution, we have two shared variables:
• A Boolean array Flag[], initialized to FALSE. This Flag array
indicates which process wants to enter the critical section.
• int Turn: an integer variable that indicates the
process number which is ready to enter the
critical section.
Reference:
https://www.codingninjas.com/codestudio/library/peterson-s-algorithm-for-mutual-exclusion
• Consider two processes, Pi and Pj, that want to enter the Critical
Section (CS) at the same time.
• Pi sets its flag variable to 'True', i.e., Pi is ready to enter the CS,
and sets the turn variable to 'j', i.e., if Pj wants to enter the CS, let it go.
• Similarly, in Pj, the flag variable of Pj is set to True and turn is set to
'i'.
• Since the turn variable is shared between both processes, the value
inside turn ends up as 'i' (whichever assignment runs last wins; here, Pj's).
• Now inside Pi's code:
• Note: if a while loop ends with a semicolon (empty body) and the condition
is true, the loop spins on itself until the condition fails, i.e., the next
line is not executed until then.
while(flag[j] && turn == j);   # condition becomes false (T && F)
So Pi enters the CS and executes.
At the same time, inside Pj's code:
while(flag[i] && turn == i);   # condition becomes True (T && T)
So Pj keeps spinning on this line until the condition fails; Pj does not enter
the CS.
Thus only one process enters the CS at a time.
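A compact sketch of Peterson's algorithm for two threads. The variable names (flag, turn) follow the slides; the C11 atomics and the thread bodies are assumptions added so the example is runnable and the flag/turn accesses are not reordered by the compiler. Compile with -pthread.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];        /* flag[i]: process i wants to enter the CS */
atomic_int  turn;           /* whose turn it is to wait */
int shared_counter = 0;     /* data protected by the critical section */

void lock(int i)            /* entry section for process i (0 or 1) */
{
    int j = 1 - i;
    atomic_store(&flag[i], true);   /* I want to enter */
    atomic_store(&turn, j);         /* but let the other process go first */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                           /* busy-wait */
}

void unlock(int i)          /* exit section */
{
    atomic_store(&flag[i], false);
}

void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        shared_counter++;           /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_counter = %d\n", shared_counter);  /* expected 200000 */
    return 0;
}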
Synchronization Hardware
• Test and Set Lock (TSL):
– Peterson's solution is a software-based solution to the
synchronization problem (SP).
– TSL is a hardware solution to the SP.
– TSL uses a shared lock variable which can take only
two values: 0 (unlocked) or 1 (locked).
– Before entering the critical section, the process
checks the lock variable; if it is 1, the process waits until the
lock becomes free (i.e., lock = 0).
– If the CS is free, the process takes the lock and
executes the CS.
– Ex: A room locked from inside.
• TestAndSet is a Boolean function that returns either 0 or 1.
• Initialize lock = 0 [0 / open / false – all mean the same thing].
• The lock variable is shared, which is why we pass the address of the variable.
P1: while(TestAndSet(&lock))   # while(TestAndSet(0))
• Inside TestAndSet: *target = address of lock
• rv = value of lock           # rv = 0
• *target: change the value of lock to 1
• return rv                    # returns 0
while(0);   # condition is false, so the while loop breaks and P1 enters the
critical section

P2: At the same instant of time, suppose P2 wishes to enter the CS.
The lock variable is shared, so the lock value has already been updated to 1 by P1.
– while(TestAndSet(&lock))     # while(TestAndSet(1))
• Inside TestAndSet: *target = address of lock
• rv = value of lock           # rv = 1
• *target: change the value of lock to 1 (it stays 1)
• return rv                    # returns 1
while(1);   # condition is True, so the while loop keeps spinning ("do nothing")
and P2 cannot enter the CS.
Once P1 completes the CS, P1 sets lock = false   # 0
Now if P2 tries again, it can access the CS.
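A minimal sketch of a test-and-set spinlock. Real hardware exposes the atomic instruction directly; C11 models it with atomic_flag_test_and_set, which stands in here for the TestAndSet() of the slides. The acquire/release names and the usage comment are illustrative.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* 0 = unlocked, 1 = locked */

/* Entry section: spin until the old value returned by test-and-set is 0. */
void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;   /* lock was already 1: busy-wait ("do nothing") */
}

/* Exit section: release the lock so a waiting process can enter. */
void release(void)
{
    atomic_flag_clear(&lock);          /* lock = 0 (false / open) */
}

/* Usage (illustrative):
 *     acquire();
 *     ... critical section ...
 *     release();
 */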
Bakery Algorithm
• This algorithm works well when the
number of processes is more than two.
• It works like a token
system in a canteen/bakery:
everyone is given a unique
number, and when that number is
displayed they can go and
collect their order.
• P1 to P5 try to access the CS.
• In order to enter the CS, a process
must invoke the lock function,
lock(i), where i = 0, 1, 2, ...
• num is a global (shared) array.
• Ex: The processes are P1 P2 P3 P4 P5.

Initially, the value of num assigned to each process:
  Process   P1       P2       P3       P4       P5
  Index     num[0]   num[1]   num[2]   num[3]   num[4]
  Value     0        0        0        0        0

Let's say P3 executes lock() first among all:
  num[2] = max(num[0] to num[4]) + 1 = 0 + 1 = 1
  Process   P1       P2       P3       P4       P5
  Value     0        0        1        0        0

Similarly, if P4 executes next:
  num[3] = max(num[0] to num[4]) + 1 = 1 + 1 = 2
  Process   P1       P2       P3       P4       P5
  Value     0        0        1        2        0

Similarly, P5, P2 and P1 start their execution next:
  Process   P1       P2       P3       P4       P5
  Value     5        4        1        2        3

Once every process has obtained its number, the for loop in the lock
function scans for the process holding the minimum number. For P3:
  while(T and F);   # condition fails, so P3 (minimum number = 1) goes
  inside the CS, and once it has completed it changes num[2] = 0.
  Process   P1       P2       P3       P4       P5
  Value     5        4        0        2        3
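A sketch of the lock()/unlock() pair described above (Lamport's bakery algorithm) for N = 5 processes. The num[] array follows the slides; the choosing[] array is part of the standard algorithm (it keeps a process from being overtaken while it is still picking its number) and is an addition not shown on the slides. The volatile qualifiers are a simplification; a production version would use atomics or fences.

#include <stdbool.h>

#define N 5                     /* number of processes P1..P5 */

volatile bool choosing[N];      /* process i is picking its number */
volatile int  num[N];           /* 0 means "not interested" */

void lock(int i)
{
    /* Take a number one larger than any number currently held. */
    choosing[i] = true;
    int max = 0;
    for (int j = 0; j < N; j++)
        if (num[j] > max) max = num[j];
    num[i] = max + 1;
    choosing[i] = false;

    /* Wait until every process with a smaller (number, id) pair is done. */
    for (int j = 0; j < N; j++) {
        while (choosing[j])
            ;                                   /* j is still choosing */
        while (num[j] != 0 &&
               (num[j] < num[i] || (num[j] == num[i] && j < i)))
            ;                                   /* j goes first */
    }
}

void unlock(int i)
{
    num[i] = 0;                 /* release: no longer interested */
}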
Semaphores
Introduction
• The semaphore was proposed by Dijkstra in 1965. It is a very
significant technique for managing concurrent processes using
a simple integer value, known as a semaphore.
• Semaphores are of two types:
Binary Semaphore – (ATM Room - Single ATM machine)
This is also known as mutex lock. It can have only two values
– 0 and 1
Counting Semaphore – (ATM Room - Three ATM machines)
Its value depends on the resources present in the system.
• The two operations that can be used to access
and change the value of the semaphore
variable are wait() and signal().
• wait() – Helps to control the entry of a
process into the CS: when S is a positive integer,
wait() decrements it (towards 0) and
allows that process into the CS; if S is already 0, the process must wait.
• signal() – Used to control the exit of a process
from the CS. It increments the value of S (e.g., from 0 back to
1) after the process completes its execution.
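As a sketch (not taken from the slides), the classical busy-waiting definitions of wait() and signal() on an integer semaphore, in C-like form; in a real implementation the test and decrement inside wait() must execute atomically, and an OS-provided semaphore (e.g., POSIX sem_wait/sem_post) would normally be used instead of spinning.

void wait(int *S)
{
    while (*S <= 0)
        ;          /* busy-wait: no resource free / CS occupied */
    (*S)--;        /* take one resource / enter the CS */
}

void signal(int *S)
{
    (*S)++;        /* release the resource / leave the CS */
}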
Binary Semaphore
Counting Semaphore
• Allows multiple concurrent processes to execute inside the
CS; S is initialized to n (the number of resource instances
available in the system).
• Let S = 3,
• If P1 -> Enter CS -> S = S-1 = 2
• If P2 -> Enter CS -> S = S-1 = 1
• If P1 -> Exits CS -> S = S+1 = 2
• If P3 -> Enter CS -> S = S-1 = 1
Classic Problems of
Synchronization
• Producer-consumer problem:
– The problem describes two processes, the producer
and the consumer, that share a common fixed-size
buffer.
– The producer’s job is to generate data, put it into
the buffer, and start again.
– At the same time, the consumer is consuming the
data (i.e., removing it from the buffer), one piece
at a time.
• Problem: Given the common fixed-size buffer, the task
is to make sure that the producer can’t add data into
the buffer when it is full and the consumer can’t
remove data from an empty buffer.
• Solution:
• The producer is to either go to sleep or discard data if
the buffer is full.
• The next time the consumer removes an item from the
buffer, it notifies the producer, who starts to fill the
buffer again.
• In the same manner, the consumer can go to sleep if it
finds the buffer to be empty.
• The next time the producer puts data into the buffer, it
wakes up the sleeping consumer.
// Initialize a mutex to 1 (Similar to Binary Semaphore)
int mutex = 1; // Lock Open
// Number of full slots as 0
int full = 0;
// Number of empty slots as size of buffer
int empty = 3;
int x = 0; //no. of items
if ((mutex == 1) && (empty != 0))
{
producer();
}

else {
printf("Buffer is full!");
}
Producer
void producer()
{
// Decrease mutex value by 1
--mutex;
// Increase the number of full slots by 1
++full;
// Decrease the number of empty slots by 1
--empty;
// Item produced
x++;
printf("\n Producer produces item %d“, x);
// Increase mutex value by 1
++mutex;
}
Consumer
// Calling code
if ((mutex == 1) && (full != 0))
{
consumer();
}
// Otherwise, print Buffer is empty
else {
printf("Buffer is empty!");
}

// consumer() skeleton: what should happen to full, empty and x?
void consumer()
{
full ?
empty ?
x ?
}
Consumer
// Calling code
if ((mutex == 1) && (full != 0))
{
consumer();
}
// Otherwise, print Buffer is empty
else {
printf("Buffer is empty!");
}

void consumer()
{
// Decrease mutex value by 1
--mutex;
--full;
// Increase the number of empty slots by 1
++empty;
printf("\nConsumer consumes item %d", x);
x--;
// Increase mutex value by 1
++mutex;
}
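For comparison with the simplified integer-flag version above, here is a sketch (an assumption, not from the slides) of the same bounded buffer using POSIX counting semaphores and a pthread mutex; the buffer size of 3 matches the "empty = 3" above, while the item values and loop counts are illustrative. Compile with -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define SIZE 3                      /* buffer size, as in the slides */

int buffer[SIZE], in = 0, out = 0;
sem_t empty_slots, full_slots;      /* counting semaphores */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg)
{
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);             /* wait for an empty slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;                  /* critical section */
        in = (in + 1) % SIZE;
        printf("Producer produces item %d\n", item);
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);              /* one more full slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);              /* wait for a full slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];             /* critical section */
        out = (out + 1) % SIZE;
        printf("Consumer consumes item %d\n", item);
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);             /* one more empty slot */
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty_slots, 0, SIZE);        /* all slots empty initially */
    sem_init(&full_slots, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}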
Dining Philosopher Problem
The dining philosophers problem was
formulated by Edsger Dijkstra in 1965. The
problem illustrates synchronization issues in
systems made up of concurrent tasks. The
objective of the problem is to have progress
and avoid deadlock in systems. Also, the
entities needing resources should get them
in reasonable time and not face starvation
of resources.
Solution Dining Philosopher Problem

i. Ordering of resources

The operation of picking up a fork is an exclusive
operation, as only one philosopher can pick up a fork
at a time and forks cannot be shared. Only after a
philosopher puts back (releases) a fork can another
philosopher pick it up. To take care of these
requirements, forks are implemented as Pthreads
mutexes. A philosopher picks up a fork by locking
the corresponding mutex and releases it by unlocking it.
ii. Taking help of central arbitrator (waiter)

The waiter is implemented as a mutex. Only


one philosopher can eat at any time, even
though there are five forks. So, this has less
parallelism than the first solution.
Dining Philosopher
Problem

• The Dining Philosophers Problem states that K philosophers are
seated around a circular table with one chopstick between each
pair of philosophers.
• There is one chopstick between each pair of philosophers.
• A philosopher may eat if he can pick up the two chopsticks
adjacent to him.
• A chopstick may be picked up by either of its adjacent
philosophers, but not by both at once.
• A philosopher has 2 states – Think and Eat.
• From the OS point of view, chopsticks are the resources and philosophers represent
processes.
• Let the chopsticks be numbered 0, 1, 2, 3, 4   # total 5 chopsticks
• P1 needs 0 and 1, P2 needs 1 and 2, P3 needs 2 and 3, ..., P5 needs 4 and 0
• Initialize all chopstick[i] = 1 (each chopstick acts like a semaphore)
– Solution of the Dining Philosophers Problem using semaphores
repeat
For P5:  # if the chopstick semaphore is free (1), wait() does not block and
# control goes to the next line, i.e., for P5, if chopsticks 4 and 0 are free,
# it need not wait and can start eating.
wait(chopstick[i]);              // i = 4
wait(chopstick[(i+1) mod 5]);    // (4+1) mod 5 = 0
...
Eat
...
signal(chopstick[i]);
signal(chopstick[(i+1) mod 5]);
...
think
...
until false;
• From the above pseudocode simultaneous
access to the chopstick was avoided but what
if all the philosopher gets hungry at same time
and picks up their left chopstick?
• Deadlock occurs because each one will have
only one chopstick and will be waiting for
the other one.
• Solution:
– At a time allow only n-1 philosopher to access the
chopstick
– Allow the philosopher to pickup his chopstick only
if both are available.
Monitors
• Semaphores could lead to deadlock, as in the
dining philosophers problem.
• Monitors are a high-level abstraction that
provides a convenient and effective
mechanism for process synchronization.
• A monitor enforces mutual exclusion [means: to
prevent or restrict the entrance of].
• Monitors have shared variables and
functions/procedures.
• The declared variables can be accessed only
by the procedures declared inside the monitor.
• Only one procedure can be active inside the
monitor at a time.

monitor monitorName
{
variables_declaration;
condition_variables;

procedure p1 { ... };
procedure p2 { ... };
...
procedure pn { ... };

{
initializing_code;
}
}
• Condition variables are synchronization primitives that
enable threads to wait until a particular condition occurs.
Condition variables are user-mode objects that cannot be
shared across processes. Condition variables enable
threads to atomically release a lock and enter the
sleeping state.
• Condition variable Ex: Condition x;
• Used to invoke x.wait( ) and x.signal( )
• A new data structure: an enum is a special "class" that represents a
group of constants (unchangeable variables).
• Dining Philosophers Problem:
– enum {thinking, hungry, eating} state[5];
– The state of each philosopher will be one of these enum values. Ex: P5's
state = eating;
– P5 can change its state to eating only if its neighbours (P4 and P1) are
not eating. In general:
state[(i + 4) % 5] != eating and state[(i + 1) % 5] != eating
– condition self[5] – used when a philosopher is hungry but unable to get the chopsticks.
– If a philosopher is hungry and needs to eat, he must call the
procedure "pickup" to pick up the chopsticks. We have one more
function, "test", which checks whether the neighbouring
philosophers are eating or not, and "putdown" is used to release the
chopsticks.
• If a philosopher wants to eat, he first changes his state to hungry; then
the test function is called to check whether the neighbours are eating or not.
• If the neighbours are not eating, the current philosopher
changes his state to eating and starts to eat. Once he has
completed eating, he calls the signal function so
that if a neighbour wishes to eat, it can.
• If either neighbour is eating, the if condition inside the test
function fails. The function returns, and self[i] calls the wait function
and waits until some other philosopher signals it.
• Similarly, after eating, putdown is called, which changes the state to
thinking and calls the test function for the neighbours, so if they are
hungry they can eat.
• Deadlock does not occur, because the test function makes sure that
even though a philosopher is hungry, he checks his neighbours; only if they are
not eating, i.e., the chopsticks are free, can he pick up the chopsticks
and start eating.
• Similarly, only one process is allowed inside the monitor, so while one
process executes, the others wait.
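To make the pickup/test/putdown description above concrete, here is a sketch (assumed, not taken from the slides) that emulates the monitor in C with a pthread mutex acting as the monitor's implicit lock and per-philosopher condition variables; the names state, self, pickup, test and putdown follow the description above, while the pthread scaffolding is an assumption.

#include <pthread.h>

#define N 5
enum { THINKING, HUNGRY, EATING } state[N];

pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER; /* monitor entry lock */
pthread_cond_t  self[N] = {
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER
};

/* test(i): if i is hungry and neither neighbour is eating, let i eat. */
static void test(int i)
{
    if (state[(i + 4) % N] != EATING &&
        state[i] == HUNGRY &&
        state[(i + 1) % N] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void pickup(int i)
{
    pthread_mutex_lock(&monitor_lock);      /* enter the monitor */
    state[i] = HUNGRY;
    test(i);                                /* try to get both chopsticks */
    while (state[i] != EATING)
        pthread_cond_wait(&self[i], &monitor_lock);  /* wait for a signal */
    pthread_mutex_unlock(&monitor_lock);    /* leave the monitor */
}

void putdown(int i)
{
    pthread_mutex_lock(&monitor_lock);
    state[i] = THINKING;
    test((i + 4) % N);                      /* left neighbour may now eat */
    test((i + 1) % N);                      /* right neighbour may now eat */
    pthread_mutex_unlock(&monitor_lock);
}

/* Philosopher i calls:  pickup(i); ...eat...; putdown(i); ...think... */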
IPC
• Inter Process Communication (IPC) refers to a
mechanism, where the operating systems allow
various processes to communicate with each other.
This involves synchronizing their actions and
managing shared data.

Communication can be of two types:


i. Between related processes initiating from only one
process, such as parent and child processes.
ii. Between unrelated processes, or two or more
different processes.
IPC in Unix
• The pipe is the IPC mechanism in UNIX
• It provides a reliable unidirectional byte stream
between two processes.
• Created by the system call pipe().
• Each end of a pipe has an associated file descriptor.
• A pipe has a fixed size - it buffers the output of the writer
and suspends the writer if the pipe gets full.
• Usually kept in memory by the normal block buffer
cache.
• The socket mechanism can be used by unrelated
processes.
Pipes
• The following data structure would be created
when the pipe system call is executed:
➢ int fd[2];
➢ pipe(fd);
Unix Pipes
There are two kinds of pipes
unnamed pipes
• used for communication between a parent
process (creating the pipe) and its child, with
one process writing and the other process
reading
named pipes
• Any process can communicate with another
using named pipes.
Unnamed pipes
• The typical sequence of events for a communication is as follows:
• The parent process creates an unnamed pipe using “pipe()”.
• The parent process forks.
• The processes communicate by using “write()” and “read()” calls.
• Each process closes its active pipe descriptor when it’s finished with it.
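A minimal sketch of the sequence above (pipe, fork, write, read); the message text and buffer size are illustrative.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }  /* fd[0]=read, fd[1]=write */

    pid_t pid = fork();
    if (pid == 0) {                       /* child: reader */
        close(fd[1]);                     /* close the unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child read: %s\n", buf);
        close(fd[0]);
    } else {                              /* parent: writer */
        close(fd[0]);                     /* close the unused read end */
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);                     /* signals EOF to the reader */
        wait(NULL);
    }
    return 0;
}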
Named Pipes
• Named pipes- often referred to as FIFOs( first in,
first out)
• They have a name that exists in the file system.
• They have a larger buffer capacity, typically about
40K.
• Unidirectional.
• Named pipes exist as special files in the file
system and may be created in one of two ways:
➢ by using the UNIX mknod utility
➢ by using the “mknod()” system call
Close and remove a pipe
• When a process has finished using a named
pipe, it should close it using “close()”.
• when a named pipe is no longer needed, it
should be removed from the file system using
“unlink()”.
• Writer processes should open a named pipe for
writing only, and reader processes should open
a pipe for reading only.
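A sketch of the writer side of a named pipe, using the mkfifo() library call (a modern equivalent of the mknod() call mentioned above); the path /tmp/module4_fifo and the message are illustrative, and a separate reader process would open the same path for reading only.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/module4_fifo";      /* hypothetical FIFO name */

    /* Create the named pipe in the file system (it may already exist). */
    if (mkfifo(path, 0666) == -1)
        perror("mkfifo");

    /* Writer side: open for writing only; blocks until a reader opens it. */
    int fd = open(path, O_WRONLY);
    if (fd == -1) { perror("open"); return 1; }

    const char *msg = "hello through the FIFO\n";
    write(fd, msg, strlen(msg));

    close(fd);            /* close when finished using the pipe */
    unlink(path);         /* remove the FIFO when no longer needed */
    return 0;
}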
Multiprocessing
• Multiprocessing systems are those that run multiple CPUs in a single
system.
• Multiprocessing is a less expensive option because a single system can
share many expensive hardware components such as power supplies,
primary and secondary storage, and the main system bus.

Figure: A simplified multiprocessor system architecture


Types of Multiprocessing
Asymmetric multiprocessing
➢ The OS runs on only one designated CPU.
➢ The other CPUs run only applications.
➢ Not commonly used because of performance bottlenecks
due to running the OS only on one processor.

Symmetric multiprocessing ( SMP )


➢ OS can be running on any CPU.
➢ A running program obviously will be modifying its state
(data).
➢ Multiple instances of the OS running on different CPUs
must be prevented from changing the same data structure at
the same time.
Locking
Why locks?
• In multiprocessing, different tasks run concurrently on different
CPUs but concurrency can cause inconsistent results if multiple
tasks try to use the same variables at the same time.
• In order to make it consistent, serializability is required.
• To ensure serializability, locks are used.
• A lock is an object with two operations: acquire(L) and release(L)
• The lock has state: it is either locked or not locked
• A call to acquire(L) waits until L is not locked, then changes its state
to locked and returns.
• A call to release(L) marks L as not locked
Drawbacks:
• Locks can degrade performance.
Lock free coordination
Why do we want to avoid locks?
• Locks limit scalability
• Degraded performance
• Complexity
• Possibility of deadlock

• A lock-free data structure can be used to improve performance.


• A lock-free data structure increases the amount of time spent
in parallel execution rather than serial execution, improving
performance on a multi-core processor.
Lock free coordination
• A lock free algorithm protects a shared data
structure through a non-blocking algorithm.
• A non-blocking algorithm ensures that threads
competing for a shared resource do not have their
execution indefinitely postponed by mutual
exclusion.
• A non-blocking algorithm is lock-free if there is
guaranteed system-wide progress and wait-free if
there is also guaranteed per-thread progress.
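A sketch (an addition, not from the slides) of lock-free coordination using compare-and-swap, via C11's atomic_compare_exchange_weak: each thread retries its update instead of blocking on a lock, so no thread can postpone another indefinitely by holding a lock. Compile with -pthread.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_long counter = 0;        /* shared counter, updated without a lock */

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        long old = atomic_load(&counter);
        /* Retry loop: if another thread changed counter between the load
           and the CAS, old is refreshed and we try again. No lock is held,
           so no thread is ever blocked waiting for another. */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
            ;
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 */
    return 0;
}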
End of Module 4
