Unit 3: IPC and Synchronization


INTER PROCESS COMMUNICATION

AND SYNCHRONIZATION
• INTRODUCTION
• RACE CONDITION, CRITICAL REGIONS, AVOIDING CRITICAL REGIONS:
MUTUAL EXCLUSION AND SERIALIZABILITY, MUTUAL EXCLUSION CONDITIONS
• PROPOSALS FOR ACHIEVING MUTUAL EXCLUSION
• DISABLING INTERRUPTS, LOCK VARIABLE, STRICT ALTERNATION, PETERSON'S
SOLUTION, THE TSL INSTRUCTION
• SLEEP AND WAKEUP, TYPES OF MUTUAL EXCLUSION (SEMAPHORES, MONITORS,
MUTEXES, MESSAGE PASSING, BOUNDED BUFFER)
• SERIALIZABILITY: LOCKING PROTOCOLS AND TIMESTAMP PROTOCOLS,
CLASSICAL IPC PROBLEMS (DINING PHILOSOPHERS PROBLEM, THE READERS AND
WRITERS PROBLEM, THE SLEEPING BARBER PROBLEM)
Introduction to inter-process
communication
- Processes in a system can be independent or cooperating.
- An independent process cannot be affected by the execution of another
process; a cooperating process can be affected by the execution of
another process. To cooperate, processes need inter-process
communication mechanisms.
- Inter-process communication (IPC) is communication between
two or more processes.
- Inter-process communication helps with information sharing,
computation speed-up, modularity and convenience.
- However, data corruption, deadlocks and added complexity can be
issues.
Please refer to the discussion of cooperating processes from the
previous class for more details.
Models of inter-process
communication
- Message passing and shared memory:
1. Message passing is performed through kernel memory space,
whereas shared-memory communication uses a region of available memory.
2. Message passing is simpler to implement, since it uses a pre-defined
region in memory. Shared memory allows large messages, limited only by
physical memory.
Inter process communication
Race Condition
- Suppose you are going to leave college early for some reason without
telling your teacher.
- When you press the lift button, your teacher presses the button on the
next floor at the same time.
- The lift, receiving signals from your floor and the teacher's floor
simultaneously, faces a dilemma. It decides to open its doors on both
floors at the same time. You believe you can keep your early departure
secret from your teacher, yet you find yourselves facing each other with
puzzled expressions.
- In this situation the lift becomes a shared resource for both you and
your teacher.
- In real-world programming, a race condition occurs when two or more
processes try to access a shared resource simultaneously without proper
synchronization.
Race condition
• A race condition is an undesirable situation
that occurs when a device or system attempts
to perform two or more operations at the
same time.
• A race condition is a situation where
processes access the same data concurrently
and the outcome of execution depends on the
particular order in which the accesses take
place (a small sketch is shown below).
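A minimal illustrative sketch, assuming a POSIX system with pthreads (not taken from the slides): two threads increment a shared counter without any synchronization, so the final value usually falls short of the expected total and varies from run to run.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                   /* shared data, no protection */

void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but usually less because of the race. */
    printf("counter = %ld\n", counter);
    return 0;
}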
Critical regions/sections
• Let's use a public restroom analogy:
- Imagine a public place with only one restroom available for many
people; the restroom is the shared resource.
- The critical section is the actual use of the restroom. It is the
physical space (in our case, a part of the code) where only one person
should be allowed at a time to avoid conflicts or awkward situations.
- There should be a synchronization mechanism to ensure that only one
person enters the critical section (uses the restroom) at a time.
- A sign on the restroom door indicating whether it is occupied
determines when the next person may enter. This corresponds to a
mutex (mutual exclusion).
- If there were no synchronization mechanism such as a sign or a lock,
multiple people might try to enter the restroom simultaneously, leading
to an uncomfortable situation. This corresponds to a race condition.
Critical section
• A critical section is a part of the code where
shared resources are accessed, and it is crucial to
ensure that only one process (or thread) can
access it at a time.
• This prevents data corruption or unexpected
behavior.
• The goal of managing critical sections is to
prevent race conditions, where multiple
processes or threads attempt to access shared
resources simultaneously, leading to
unpredictable results.
Avoiding critical region: Mutual
exclusion and serializability
- Mutual exclusion refers to ensuring that only
one process or thread can access a shared resource at a
time in a concurrent programming environment.
- It is a critical aspect of managing concurrent access to
shared data to prevent conflicts and maintain data
consistency.
- It is the way of making sure that if one process is using a
shared variable or file, other processes are prevented
from doing the same thing.
- Common techniques for achieving mutual exclusion include
locks, semaphores, and other synchronization
mechanisms.
Avoiding critical region: Mutual
exclusion and serializability
• Serializability: the property of a
schedule of concurrent transactions that
makes it equivalent to some serial execution
of those transactions.
• The outcome of the concurrent execution
should be consistent with the result that
would be obtained if the transactions were
executed one after the other in some
sequential order.
Example of Serializability
• Consider two schedules S1 and S2, where S1 interleaves the two
transactions and S2 is a serial schedule.

  S1               S2
  T1: Read(x)      T1: Read(x)
  T2: Read(y)      T1: x = x + 10
  T1: x = x + 10   T2: Read(y)
  T2: y = y * 2    T2: y = y * 2
Mutual exclusion conditions
- When using shared resources, it is important to ensure
mutual exclusion between processes: no two processes
may be simultaneously inside their critical sections.
- No assumptions should be made about the relative speeds
of the processes.
- A process running outside its critical section must not
block another process from entering its critical section.
- A process must be able to enter its critical section within
a finite amount of time; no process should be kept waiting
in an infinite loop.
Mechanisms for achieving mutual
exclusion:
- Disabling interrupts
- Shared lock variable
- Strict alternation
- TSL (Test and Set Lock) instruction
- Peterson’s solution
Disabling interrupts
- Interrupts are signals sent to the processor to notify it of events
that require attention.
- Disabling interrupts is a common technique used to achieve mutual
exclusion, especially in a shared-resource environment.
- Here is how disabling interrupts can be used to achieve mutual exclusion:
a. Disable interrupts: when a thread or process wants to access a
shared resource, it disables interrupts.
b. Critical section: the code that accesses the shared resource is often
referred to as the critical section. Since interrupts are disabled, no
other thread or process can preempt the current one and enter the
critical section concurrently.
c. Re-enable interrupts: after the critical section is executed,
interrupts are re-enabled. This allows the system to resume normal
operation, and other threads or processes can be scheduled to run.
Disabling interrupts
• Example (pseudo-code):
disable_interrupts();    // disable interrupts
// critical section: access the shared resource
enable_interrupts();     // re-enable interrupts

But this mechanism has drawbacks:
- Risk of deadlocks
- Impact on system responsiveness
- Increased interrupt latency, etc.
Disabling interrupts
• Problems:
- It is unattractive to give user processes the power
to turn off interrupts.
- What if one of them disabled interrupts and
never turned them on (enabled interrupts)
again? That could be the end of the system.
Lock Variable
• A shared variable lock holds the value 0 or 1.
• Before entering its critical section, a process
checks the value of the shared variable lock.
• If the value of lock is 0, the process sets it to 1,
enters the critical section, and sets it back to 0
immediately after leaving the critical section.
• If the value of lock is 1, the process waits until
it is set back to 0 by the other process that is in
its critical section.
Lock variable
• Algorithm:
do {
    acquire lock
    critical section
    release lock
} while(true);
Assume initially lock = 0:
1. while(lock == 1);   // ENTRY CODE
2. lock = 1;
3. critical section
4. lock = 0;           // EXIT CODE
Lock variable

Case 1:
Say two processes P1 and P2 are executing.
- At first the lock value is 0, i.e. the critical section is vacant.
- P1 arrives, executes the entry code (lines 1 and 2), and enters the
critical section (line 3).
- After some time P2 arrives and starts executing the entry code, but
finds lock = 1 and spins in the loop.
- When P1 completes its critical section, reaches line 4 and sets
lock = 0, only then can P2 execute its next line of code.
• Executes in user mode
• Multi-process solution
• Does not guarantee mutual exclusion
Lock variable
Case 2:
Say two processes P1 and P2 are executing.
- At first the lock value is 0, i.e. the critical section is vacant.
- P1 arrives and starts executing the entry code; as soon as it
completes line 1, it is preempted and leaves the CPU.
- P2 arrives and starts executing the code. Since lock = 0 it passes
line 1, sets lock = 1 at line 2, and enters its critical section.
- While P2 is at line 3, P1 resumes. It has already executed line 1
(when lock was still 0), so it moves to line 2 and then also enters
line 3, the critical section.
- That is why a lock variable does not guarantee mutual exclusion. This
problem is solved by the TSL instruction described next; a sketch of
the flawed lock variable is shown below.
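A minimal sketch of the naive lock variable, assuming POSIX threads (an illustration, not from the slides). Because the test at line 1 and the assignment at line 2 are separate steps, both threads can occasionally be inside the critical section at once.

#include <pthread.h>
#include <stdio.h>

volatile int lock = 0;      /* shared lock variable (0 = free, 1 = taken) */
int in_cs = 0;              /* how many threads are inside the critical section */

void *process(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        while (lock == 1);  /* line 1: busy wait (a preemption here breaks it) */
        lock = 1;           /* line 2: acquire */
        in_cs++;            /* line 3: critical section */
        if (in_cs > 1)
            printf("mutual exclusion violated!\n");
        in_cs--;
        lock = 0;           /* line 4: release */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process, NULL);
    pthread_create(&t2, NULL, process, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}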
The TSL instruction
• Test and Set Lock instruction.
• It reads the contents of the memory word lock
into a register and then stores a nonzero value at
the memory address lock.
• The operations of reading the word and storing
into it are guaranteed to be indivisible: no other
process can access the memory word until the
instruction is finished.
• The CPU executing the TSL instruction locks the
memory bus to prohibit other CPUs from
accessing memory until it is done.
TSL instruction
• Here we combine lines 1 and 2 of the entry code,
because when preemption occurs between lines 1
and 2 mutual exclusion is violated, as in the previous
case. Testing and setting the variable in a single
instruction makes the operation atomic.
Entry/exit code of the lock-variable approach:
1. while(lock==1);
2. lock=1;
3. critical section
4. lock=0;
The TSL instruction
Let P1 and P2 be two processes. Initially lock = FALSE.
When process P1 arrives, it calls test_and_set with the memory address of lock (currently FALSE),
and the function body executes.
Inside the function, the pointer target refers to lock, so its value is stored in r, i.e. r = FALSE.
*target is then set to TRUE, which makes the lock variable TRUE, and r (FALSE) is returned.
The while condition is therefore false, and P1 enters the critical section.

while (test_and_set(&lock));    // entry code
// critical section
lock = FALSE;                   // exit code

boolean test_and_set(boolean *target)   // pointer *target holds the memory reference of the lock variable
{
    boolean r = *target;    // the old value of the lock is stored in r
    *target = TRUE;         // set the lock
    return r;               // return the old value
}
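In real code the atomic test-and-set is supplied by the hardware or the language runtime. A minimal sketch (an illustration, assuming C11 atomics and POSIX threads) using atomic_flag, whose test-and-set operation is guaranteed to be atomic:

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* shared lock, initially clear */
long counter = 0;

void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        while (atomic_flag_test_and_set(&lock));  /* entry: atomic TSL, spin while already set */
        counter++;                                /* critical section */
        atomic_flag_clear(&lock);                 /* exit: release the lock */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 */
    return 0;
}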
Strict Alternation
• It is a synchronization mechanism that is
implemented in user mode, also known as the
turn-variable method.
• It is used to synchronize two processes and is
used to solve the problem of the lock variable.
• In the lock-variable approach a process may enter
the critical section only when the lock is 0, but
multiple processes may see lock = 0 at the same
time and set it to 1 together.
• For this reason the lock variable cannot
guarantee mutual exclusion.
Strict Alternation
• Let's go through an example.
• Consider two processes P0 and P1 sharing a turn variable.
• turn = 0: when the turn value is set to 0, process P0 may
enter the critical section.
• Initially, when P0 arrives and the value of the turn
variable is 0:
while(turn != 0);   // Entry section
// critical section
turn = 1;           // Exit section
Strict Alternation
• turn = 1: when the turn value is set to 1,
process P1 may enter the critical section.
• Initially, when P1 arrives and the value of the
turn variable is 1:
while(turn != 1);   // Entry section
// critical section
turn = 0;           // Exit section
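A minimal two-thread sketch of strict alternation (an illustration, assuming C11 atomics so the shared turn variable is safe to read and write concurrently):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_int turn = 0;    /* whose turn it is: 0 for P0, 1 for P1 */
int shared = 0;

void *p0(void *arg)
{
    for (int i = 0; i < 5; i++) {
        while (atomic_load(&turn) != 0);   /* entry section: wait for my turn */
        shared++;                          /* critical section */
        printf("P0 in critical section, shared = %d\n", shared);
        atomic_store(&turn, 1);            /* exit section: hand the turn to P1 */
    }
    return NULL;
}

void *p1(void *arg)
{
    for (int i = 0; i < 5; i++) {
        while (atomic_load(&turn) != 1);   /* entry section: wait for my turn */
        shared++;                          /* critical section */
        printf("P1 in critical section, shared = %d\n", shared);
        atomic_store(&turn, 0);            /* exit section: hand the turn to P0 */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}

Because each thread must wait for its own turn even when the other is not interested, this scheme fails the progress requirement listed on the next slide.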
Features of strict alternation
1. Mutual exclusion: ensures mutual exclusion properly.
2. Progress: does not guarantee progress, because a process can
enter its critical section only after the other process has completed
its own critical section and handed over the turn.
3. Portability: the turn variable is implemented in user
mode and does not require any kind of special instruction
from the operating system. Therefore it provides
portability.
4. Bounded waiting: each process gets its chance in turn; once a
previous process has executed, the next process gets the
chance, so the turn variable ensures bounded waiting.
Peterson’s solution
• A classic software-based solution to the critical-section problem.
• It provides a good algorithmic description of solving the critical-
section problem and illustrates some of the complexities involved in
designing software that addresses the requirements of mutual
exclusion, progress and bounded waiting.
• It is restricted to two processes that alternate execution between
their critical sections and remainder sections.
• It is a "humble" algorithm: if a process wants to enter its
critical section, it first gives the other process a chance to enter
its critical section.
• For example, if you and your friend try to board a bus at the same
time, you politely give your friend the chance to enter the bus before
you.
Peterson's solution
• Consider two processes Pi and Pj.
• Peterson's solution requires two data items to be shared
between the two processes: an int turn and a boolean flag
array.
• The turn variable indicates whose turn it is to enter the
critical section: if turn == i it is Pi's turn to enter, and if
turn == j it is Pj's turn to enter its critical section.
• flag is a boolean array used to indicate whether a process is
ready to enter its critical section: flag[i] == true means Pi is
ready to enter its critical section.
Peterson's solution
Structure of process Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);  // if Pj also wants to enter the CS, it has set flag[j] = true
    // critical section
    flag[i] = false;
    // remainder section
} while (TRUE);

Structure of process Pj:
do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);  // if Pi also wants to enter the CS, it has set flag[i] = true
    // critical section
    flag[j] = false;
    // remainder section
} while (TRUE);
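A self-contained sketch of Peterson's algorithm for two threads (an illustration, assuming C11 atomics and POSIX threads; on real hardware the shared variables need atomic, sequentially consistent accesses for the argument to hold):

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2] = {false, false};  /* flag[i]: process i wants to enter its CS */
atomic_int turn = 0;                   /* whose turn it is to wait */
long counter = 0;

void *proc(void *arg)
{
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 1000000; k++) {
        atomic_store(&flag[i], true);      /* I want to enter */
        atomic_store(&turn, j);            /* but let the other go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j);  /* busy wait */
        counter++;                         /* critical section */
        atomic_store(&flag[i], false);     /* leave: no longer interested */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, proc, (void *)0L);
    pthread_create(&t1, NULL, proc, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld\n", counter);  /* 2000000 when mutual exclusion holds */
    return 0;
}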
SLEEP AND WAKEUP
- Peterson's solution and the solution using TSL have the limitation
of requiring busy waiting.
- When a process wants to enter its critical section, it
checks to see whether entry is allowed.
- If it is not allowed, the process goes into a loop and waits (i.e.
busy waits) until it is allowed to enter.
- This approach wastes CPU time.
- The inter-process primitives sleep and wakeup are a pair of system
calls that take one parameter, a memory address
used to match up sleeps with wakeups.
- The concepts of sleep and wakeup refer to the states and
transitions that a process can undergo during its execution.
Sleep and wakeup
• Sleep: when a process or thread is in a sleep
state, it is temporarily inactive and
not using CPU resources. A process sleeps while
waiting for I/O, synchronization, etc.
• Wakeup: the event when a sleeping process or
thread is signaled to resume execution. The
process transitions from the sleep state to the
ready state. I/O completion, event notification,
or timer expiry can be the reason for a wakeup.
Producer and consumer problem /
bounded buffer problem
- It is a multi-process synchronization problem.
- It is also known as the bounded buffer problem.
- The problem describes two processes, producer
and consumer, who share a common, fixed-size
buffer.
- Producer: produces some information and puts it
into the buffer.
- Consumer: consumes this information, i.e. removes it
from the buffer.
Producer consumer Problem
What is the producer consumer problem?
1. Buffer is empty:
- Producer wants to produce: yes
- Consumer wants to consume: no
2. Buffer is full:
- Producer wants to produce: no
- Consumer wants to consume: yes
3. Buffer is partially filled:
- Producer wants to produce: yes
- Consumer wants to consume: yes
Solution for the producer:
- The producer either goes to sleep or discards data if the buffer is full.
- Once the consumer removes an item from the buffer, it notifies (wakes up) the producer so it can put
data into the buffer.
Solution for the consumer:
- The consumer goes to sleep if the buffer is empty.
- Once the producer puts data into the buffer, it notifies (wakes up) the consumer so it can remove or
use data from the buffer.
Producer consumer problem
Producer consumer problem using sleep and wakeup:

#define N 4        // maximum number of slots in the buffer
int count = 0;     // number of items currently in the buffer

void producer(void) {
    int item;
    while (TRUE) {
        item = produce_item();  // producer produces an item
        if (count == N)         // the buffer is full, so the producer sleeps
            sleep();
        insert_item(item);      // the item is inserted into the buffer
        count = count + 1;
        if (count == 1)         // the buffer was empty, so wake up the consumer
            wakeup(consumer);
    }
}

void consumer(void) {
    int item;
    while (TRUE) {
        if (count == 0)         // the buffer is empty, so the consumer sleeps
            sleep();
        item = remove_item();   // the item is removed from the buffer
        count = count - 1;
        if (count == N - 1)     // there is now a free slot, so wake up the producer
            wakeup(producer);
        consume_item(item);     // the item is used by the consumer
    }
}
Sleep and wakeup
PROBLEM WITH SLEEP AND WAKEUP: it contains a race condition that can
lead to a deadlock.
- The consumer has just read the variable count, noticed it is zero, and
is about to go to sleep.
- Just before calling sleep, the consumer is suspended and the producer
is resumed.
- The producer creates an item, puts it into the buffer, and increments count.
- Because the buffer was empty prior to this addition, the producer
tries to wake up the consumer.
- Unfortunately, the consumer is not yet sleeping, and the wakeup call is lost.
- When the consumer resumes, it goes to sleep and will never be awakened
again, because the producer only wakes the consumer when count
equals 1.
- The producer will loop until the buffer is full, after which it will also go to
sleep.
- Finally both processes sleep forever. This causes a deadlock.
TYPES OF MUTUAL EXCLUSION
• SEMAPHORE
• MONITORS
• MUTEXES
• MESSAGE PASSING
• BOUNDED BUFFER
SEMAPHORE
• A semaphore is a variable that provides an abstraction for controlling
access to a shared resource by multiple processes in a parallel
programming environment.
• It is a synchronization tool used to deal with the critical-section
problem.
• A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations: wait(), also
denoted P, and signal(), also denoted V.
• There are two types of semaphores:
1. Binary semaphores: can take only two values (0/1). Binary semaphores
have two operations associated with them (up/down, lock/unlock). They
are used to acquire locks and help achieve mutual exclusion.
2. Counting semaphores: can take values greater than one. They allow
multiple processes to access multiple instances of a resource.
Semaphore
• Definition of wait():
P(Semaphore S) {
    while (S <= 0)
        ;       // no operation (busy wait)
    S--;
}
// then enter the critical section / access the shared resource

• Definition of signal():
V(Semaphore S) {
    S++;
}
Semaphore
• Let's go through an example of how a counting semaphore works.
• Say 3 processes P0, P1 and P2 want to access a resource with 2
instances R(r1, r2) at the same time (the semaphore S = 2 here).
Assume initially both instances are vacant and no process is using them.
• Process P0 calls wait(), finds S <= 0 false, so it decrements the
value of S by 1 and starts using resource r1.
• Process P1 also calls wait(), finds S <= 0 false, so it decrements the
value of S by 1 and starts using resource r2.
• Process P2 also calls wait(), finds S <= 0 true, so it spins in the
while loop.
• After some time process P0 finishes with resource r1, calls signal(),
and increments the value of S by one, releasing the resource. Process
P2 then gets the resource.
An example of how a binary semaphore works is sketched below.
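A minimal sketch of a binary semaphore used for mutual exclusion (an illustration, assuming a POSIX system: sem_init with an initial value of 1, sem_wait as P and sem_post as V):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;                /* binary semaphore: 1 = critical section free, 0 = taken */
long counter = 0;

void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        sem_wait(&s);   /* P(s): blocks while the semaphore is 0 */
        counter++;      /* critical section */
        sem_post(&s);   /* V(s): release */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);     /* an initial value of 1 makes it a binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 2000000 */
    sem_destroy(&s);
    return 0;
}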
Monitor
• It is a synchronization construct used for controlling
access to a shared resource. Monitors are a high-level
abstraction that simplifies the process of designing
concurrent programs and managing synchronization.
• Monitors are implemented as a programming-language construct
and provide mutual exclusion, condition variables and
data encapsulation in a single construct.
• Monitors have some limitations. They can be less
efficient than lower-level synchronization primitives
such as semaphores and locks, as they may involve
additional overhead due to their higher level of
abstraction.
Monitor

• A procedure defined within a monitor can access only those
variables declared locally within the monitor and its formal
parameters.
• Local variables of a monitor can be accessed only by the local
procedures.
• The monitor construct ensures that only one process at a time can
be active within the monitor.
• Condition construct: condition x, y;
- The only operations that can be invoked on a condition variable are
wait() and signal().
- The operation x.wait() means that the process invoking it is
suspended until another process invokes x.signal().
- The x.signal() operation resumes exactly one suspended process.
Monitors
Syntax of a monitor:
monitor monitor_name {
    // shared variable declarations
    procedure p1(...) {    // operations that can be performed on the shared variables
        ...
    }
    procedure p2(...) {
        ...
    }
    procedure pn(...) {
        ...
    }
    initialization code(...) {
        ...
    }
}
Schematic view of monitor:
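C has no built-in monitors, but the same idea can be sketched (an illustration under that assumption) with a pthread mutex playing the role of the monitor's implicit lock and a condition variable playing the role of x.wait()/x.signal(); here for a one-slot buffer:

#include <pthread.h>

/* A "monitor" for a one-slot buffer. */
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  slot_changed = PTHREAD_COND_INITIALIZER;
static int slot;
static int full = 0;            /* 1 when the slot holds an item */

void put(int item)              /* monitor procedure */
{
    pthread_mutex_lock(&monitor_lock);      /* enter the monitor */
    while (full)
        pthread_cond_wait(&slot_changed, &monitor_lock);  /* like x.wait() */
    slot = item;
    full = 1;
    pthread_cond_signal(&slot_changed);     /* like x.signal() */
    pthread_mutex_unlock(&monitor_lock);    /* leave the monitor */
}

int get(void)                   /* monitor procedure */
{
    pthread_mutex_lock(&monitor_lock);
    while (!full)
        pthread_cond_wait(&slot_changed, &monitor_lock);
    int item = slot;
    full = 0;
    pthread_cond_signal(&slot_changed);
    pthread_mutex_unlock(&monitor_lock);
    return item;
}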
MESSAGE PASSING
• This method uses two primitives:
a. send: used to send a message.
send(destination, &message)  // destination is the process to which
the sender wants to send the message, and message is what the
sender wants to send.
b. receive: used to receive a message.
receive(source, &message)    // source is the process that sent the
message, and message is what the sender has sent.
Producer consumer problem using message passing:

#define N 4        // maximum number of slots in the buffer

void producer(void) {
    int item; message m;
    while (TRUE) {
        item = produce_item();    // producer produces an item
        receive(consumer, &m);    // wait for an empty message to arrive
        build_message(&m, item);  // construct a message to send
        send(consumer, &m);       // send the item to the consumer
    }
}

void consumer(void) {
    int item, i; message m;
    for (i = 0; i < N; i++)
        send(producer, &m);       // send N empty messages
    while (TRUE) {
        receive(producer, &m);    // get a message containing an item
        item = extract_item(&m);  // extract the item from the message
        send(producer, &m);       // send back an empty reply
        consume_item(item);       // do something with the item
    }
}
Mutex
• Mutex stands for mutual exclusion object. It is mainly used to provide
mutual exclusion for a specific portion of code, so that only one process
can execute that section of code at a particular time.
• It is a binary variable used to perform locking while shared
resources/critical sections are used by multiple processes.
• It is a kernel resource that provides synchronization; it is also known
as a synchronization primitive.
• It is a program object that grants permission to access and use shared
resources to multiple processes, one at a time.
• A mutex has a name and a unique ID by which any process can get access to it.
• Consider the standard producer-consumer problem. A producer thread
collects data and writes it to the buffer. A consumer thread processes
the collected data from the buffer. The objective is that both threads
should not work on the buffer at the same time. A mutex provides mutual
exclusion: either the producer or the consumer can hold the key (the
mutex) and proceed with its work.
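A minimal pthreads sketch (an illustration, assuming POSIX threads) where a mutex protects a shared item count touched by a producer and a consumer thread:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  /* the mutex ("the key") */
int items_in_buffer = 0;                        /* shared state */

void *producer(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);     /* take the key */
        items_in_buffer++;          /* critical section: touch the buffer */
        pthread_mutex_unlock(&m);   /* return the key */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);
        if (items_in_buffer > 0)    /* critical section */
            items_in_buffer--;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("items left: %d\n", items_in_buffer);
    return 0;
}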
Classical IPC problems:
1. Dining philosophers problem: when there is a limited resource that
needs to be shared among processes, problems can arise. The dining
philosophers problem was created to demonstrate this.
- In this problem 5 philosophers sit at a round table doing two things:
eating and thinking.
- While eating they are not thinking, and while thinking they are not eating.
- Each philosopher has a plate (5 plates in total), and there is a chopstick
placed between each pair of adjacent philosophers (5 chopsticks in total).
- Each philosopher needs 2 chopsticks to eat, and each philosopher can only
use the chopsticks on his immediate left and immediate right. A philosopher
may pick up only one chopstick at a time.
Dining philosophers problem
• One simple solution is to represent each
chopstick with a semaphore.
• A philosopher tries to grab a chopstick by
executing a wait() operation on that semaphore.
• He releases his chopsticks by executing the
signal() operation on the appropriate
semaphores.
• So the shared data are semaphore chopstick[5],
where all the elements of chopstick are initialized
to 1. Here we are considering binary semaphores: when a semaphore is 1
the chopstick is free; when it is 0 the chopstick is in use.
Dining philosophers problem
• The structure of philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);   // the 5th philosopher needs chopstick[0]
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    // think
} while (TRUE);
Dining philosophers problem
• The above solution guarantees that no two neighbours are
eating simultaneously, but it can still create a deadlock.
• Suppose that all five philosophers become hungry at the
same time and each grabs his left chopstick. All the
elements of chopstick will now be equal to 0. When each
philosopher tries to grab his right chopstick, he will be
delayed forever.
• Some possible remedies (the first is sketched below):
- Allow at most four philosophers to be sitting
simultaneously at the table.
- Allow a philosopher to pick up his chopsticks only if
both chopsticks are available, etc.
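A sketch of the first remedy (an illustration, assuming POSIX semaphores and threads): a counting semaphore room, initialized to 4, admits at most four philosophers to the table, so at least one of them can always obtain both chopsticks.

#include <semaphore.h>
#include <pthread.h>

#define N 5

sem_t chopstick[N];   /* one binary semaphore per chopstick, initialized to 1 */
sem_t room;           /* counting semaphore, initialized to 4 */

void *philosopher(void *arg)
{
    int i = (int)(long)arg;
    for (int k = 0; k < 3; k++) {           /* each philosopher eats 3 times */
        /* think */
        sem_wait(&room);                    /* enter the table (at most 4 at once) */
        sem_wait(&chopstick[i]);            /* pick up left chopstick */
        sem_wait(&chopstick[(i + 1) % N]);  /* pick up right chopstick */
        /* eat */
        sem_post(&chopstick[i]);            /* put down left chopstick */
        sem_post(&chopstick[(i + 1) % N]);  /* put down right chopstick */
        sem_post(&room);                    /* leave the table */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    sem_init(&room, 0, N - 1);              /* 4 seats at the table */
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}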
Readers writers problem
• A database is to be shared among several concurrent
processes.
• Some of these processes may want only to read the
database, whereas others may want to update (i.e. read and write)
the database.
• We distinguish between these two types of processes by
referring to the former as readers and to the latter as writers.
• When two readers access the shared data there is no
problem. But when a writer and some other process (either a
reader or a writer) access the shared database simultaneously,
a problem may arise.
• This synchronization problem is referred to as the readers-writers
problem.
The readers writers problem
• Solution to the readers-writers problem using semaphores:
We will make use of two semaphores and an integer variable.
1. mutex, a semaphore (initialized to 1) used to ensure mutual
exclusion when readcnt is updated, i.e. when any reader enters or
exits the critical section.
2. wrt, a semaphore (initialized to 1) common to both reader and
writer processes.
3. readcnt, an integer variable (initialized to 0) that keeps track
of how many processes are currently reading the object.
The readers writers problem
Writer process:
do {
    wait(wrt);      // the writer requests the critical section
    // perform the write
    signal(wrt);    // the writer leaves the critical section
} while (true);

Reader process:
do {
    wait(mutex);        // acquire mutex so that no two processes update readcnt at the same time
    readcnt++;          // the number of readers has now increased by 1
    if (readcnt == 1)   // there is at least one reader reading
        wait(wrt);      // this ensures no writer can enter while even one reader is present
    signal(mutex);      // other readers can enter while this reader is inside the critical section
    // perform the read
    wait(mutex);        // acquire mutex in order to modify readcnt
    readcnt--;          // a reader wants to leave
    if (readcnt == 0)   // no reader is left in the critical section
        signal(wrt);    // a writer can now enter
    signal(mutex);      // the reader leaves
} while (true);
THE SLEEPING BARBER PROBLEM
• There is one barber and N chairs for waiting customers.
• If there are no customers, then the barber sits in his
chair and sleeps.
• When a new customer arrives and the barber is
sleeping, the customer wakes up the barber.
• When a new customer arrives and the barber is busy,
the customer sits on one of the chairs if any is available;
otherwise (when all the chairs are full) he leaves.
• Checking the waiting room, entering the shop, taking a
waiting-room chair, etc. are actions taken by the barber and
the customers, and each takes an unknown amount of time.
Sleeping barber problem using
semaphores
• One semaphore: customer
customer ------>>> barber
// No customers: the barber falls asleep.
// A customer says: I have arrived; I am waiting for your service.
// The barber wakes up if he was sleeping.
• One semaphore: barber
barber ------>>> customer
// The barber says: I am ready to serve the next customer.
// The customer acquires the barber for service.
// The customer waits if the barber is busy.
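A compact sketch of the sleeping barber with POSIX semaphores (an illustration following the classic formulation; the customers, barbers and mutex semaphores and the waiting counter are the usual shared data):

#include <semaphore.h>
#include <pthread.h>

#define CHAIRS 5

sem_t customers;    /* number of waiting customers; the barber sleeps on 0 */
sem_t barbers;      /* number of idle barbers (0 or 1) */
sem_t mutex;        /* protects 'waiting' */
int waiting = 0;    /* customers sitting in the waiting-room chairs */

void *barber(void *arg)
{
    for (;;) {
        sem_wait(&customers);   /* sleep until a customer arrives */
        sem_wait(&mutex);
        waiting--;              /* one waiting customer is now being served */
        sem_post(&barbers);     /* the barber is ready to cut hair */
        sem_post(&mutex);
        /* cut_hair(); */
    }
    return NULL;
}

void *customer(void *arg)
{
    sem_wait(&mutex);
    if (waiting < CHAIRS) {     /* is a chair free? */
        waiting++;
        sem_post(&customers);   /* wake the barber if he is asleep */
        sem_post(&mutex);
        sem_wait(&barbers);     /* wait until the barber is free */
        /* get_haircut(); */
    } else {
        sem_post(&mutex);       /* shop full: the customer leaves */
    }
    return NULL;
}

int main(void)
{
    pthread_t b, c[8];
    sem_init(&customers, 0, 0);
    sem_init(&barbers, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&b, NULL, barber, NULL);
    for (int i = 0; i < 8; i++)
        pthread_create(&c[i], NULL, customer, NULL);
    for (int i = 0; i < 8; i++)
        pthread_join(c[i], NULL);
    return 0;   /* the barber thread loops forever; a real program would cancel it */
}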
Serializability: locking protocols and
timestamp protocols
- Transactions are series of read and write operations:
a collection of instructions or operations that
performs a single logical function.
- Atomic operations: no other operation can occur
between the start and end of an atomic operation. This
is important in concurrent applications where more than
one thread of execution is active.
Concurrent transactions:
- Must be equivalent to some serial execution (serializability).
- Could perform every transaction in a critical section, which is
inefficient and too restrictive.
- Concurrency-control algorithms provide serializability.
• An execution sequence of transactions is called a schedule; a
schedule in which transactions execute atomically one after
another is called a serial schedule.
Serializability
• It is a property of a system describing how different processes operate on
shared data. A system is serializable if its result is the same as if the
operations were executed in some sequential order, i.e. with no
overlap in execution.
• A serializable schedule is a sequence of read/write operations that does
not violate the serializability property. This property ensures that each
transaction appears to execute atomically and is isolated from the
effects of other transactions.
• Serializability is mainly of two types:
1. Conflict serializability: conflicting operations on the
same data items are executed in an order that preserves operational
consistency. This ensures that no two conflicting operations are executed
concurrently.
2. View serializability: each transaction produces results
that are equivalent to some well-defined sequential execution of all the
transactions in the system.
Locking protocols
• Ensure serializability by associating a lock with each data item and
requiring every transaction to follow a locking protocol for access control.
• Two types of locks:
1. Shared: if Ti holds a shared-mode lock (S) on item Q, Ti can read Q but not write Q.
2. Exclusive: if Ti holds an exclusive-mode lock (X) on Q, Ti can both read and write Q.
Every transaction must acquire the appropriate lock on item Q before accessing it. If the
lock is already held, the new request may have to wait, similar to the reader-writer
algorithm.
To ensure serializability, the two-phase locking (2PL) protocol is used.
- Each transaction issues lock and unlock requests in two phases: a growing
phase and a shrinking phase. In the growing phase locks can be obtained but
none released; in the shrinking phase locks can only be released, not obtained.
- A transaction starts in the growing phase, acquiring locks as needed. Once the
transaction releases a lock, it enters the shrinking phase, and no more lock
requests can be issued.
- This ensures conflict serializability, but it cannot prevent deadlock. An example
request sequence is shown below.
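As an illustration (not from the slides), a transaction T1 touching items A and B under 2PL might issue its requests like this:

T1: lock-X(A)            // growing phase: acquire locks
T1: read(A); write(A)
T1: lock-X(B)            // still growing
T1: read(B); write(B)
T1: unlock(A)            // shrinking phase begins: from here on, only releases
T1: unlock(B)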
Timestamp based protocols
• Select an order among transactions in advance: timestamp
ordering. A timestamp is a unique value assigned to each transaction.
• A transaction Ti is associated with a timestamp TS(Ti) before
Ti starts.
- TS(Ti) < TS(Tj) if Ti entered the system before Tj.
- Timestamps can be generated from the system clock or from a logical
counter incremented at each new transaction.
• The timestamps determine the serializability order:
- If TS(Ti) < TS(Tj), the system must ensure that the produced schedule is
equivalent to a serial schedule in which Ti appears before Tj.
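For example, suppose T1 enters the system first and receives TS(T1) = 5, and T2 enters later with TS(T2) = 10. Because TS(T1) < TS(T2), any schedule the system accepts must be equivalent to the serial schedule in which T1 runs before T2. If T2 has already written a data item that the older T1 then tries to read or write, the timestamp-ordering protocol rejects T1's operation and rolls T1 back, restarting it with a new timestamp.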
