OS Unit-2 (Disha Notes)
Process Synchronization
Process synchronization in OS is the:
→ task of coordinating the execution of processes in such a way that
→ no two processes can access the same shared data and resources at the same time.
→ It is a critical part of operating system design, as it ensures that processes can safely share resources
without interfering with each other.
Need of Synchronization
Terminologies
Critical Section
→ Critical Section is the part of a program which tries to access shared resources. That resource may be
any resource in a computer, like a memory location, a data structure, the CPU, or any I/O device.
→ The critical section cannot be executed by more than one process at the same time; the operating system
faces difficulty in allowing and disallowing processes to enter the critical section.
→ Example:-
➢ The following illustration shows how inconsistent results may be produced if multiple processes execute
concurrently without any synchronization.
• Consider-
○ Two processes P1 and P2 are executing concurrently.
○ Both the processes share a common variable named “count” having initial value = 5.
○ Process P1 tries to increment the value of count.
○ Process P2 tries to decrement the value of count.
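○ If the increment by P1 and the decrement by P2 interleave without synchronization, an update can be lost and count may end up as 4 or 6 instead of the expected 5.
Below is a small sketch (not from the notes) of this example in C using POSIX threads; the iteration counts are only illustrative.

// Two threads update the shared variable count without any synchronization,
// so increments and decrements can interleave and updates can be lost.
#include <pthread.h>
#include <stdio.h>

int count = 5;                     // shared variable from the example

void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++)
        count++;                   // read-modify-write, not atomic
    return NULL;
}

void *decrement(void *arg)
{
    for (int i = 0; i < 100000; i++)
        count--;                   // may interleave with the increments
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, increment, NULL);
    pthread_create(&p2, NULL, decrement, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("count = %d\n", count); // expected 5, but often some other value
    return 0;
}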
Race Condition
→ A race condition occurs when more than one process tries to access and modify the same shared data or
resource at the same time. Because many processes try to modify the shared data or resource concurrently,
there is a high chance that a process ends up with a wrong result or inconsistent data.
→ Every process therefore "races" to complete its operation on the shared data first, and this situation is
called a race condition.
→ The final value depends on the sequence of execution and on how the shared variable is used.
Critical Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some processes that wish to
enter their critical section, then the selection of the process that will enter the critical section next
cannot be postponed indefinitely
3. Bounded Waiting (No Starvation) - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical section
and before that request is granted
○ Assume that each process executes at a nonzero speed
○ No assumption concerning relative speed of the n processes
Synchronization for Two Processes using Turn Variable
Shared variable: turn (initially 0 or 1)

Process P0:
while(1)
{
    while (turn != 0);
    CRITICAL SECTION
    turn = 1;
    REMAINDER SECTION
}

Process P1:
while(1)
{
    while (turn != 1);
    CRITICAL SECTION
    turn = 0;
    REMAINDER SECTION
}
Check whether this solution is valid or not. In the entry section, the while loop on turn allows only one
process at a time to enter the CS, so mutual exclusion holds.
In the exit section, a process leaving the CS hands the turn to the other process, so the other process can
then enter the CS.
Now check progress: suppose P0 finishes its CS and sets turn = 1, but P1 is in its remainder section and does
not want to enter the CS. Then P0 cannot enter the CS again, even though it is free, until P1 has taken its
turn. The solution does not consider whether a process actually wants to enter the CS or not.
So this solution does not satisfy the progress requirement.
Synchronization for Two Processes using Flag Variable (Two Variables)
Initially flag[0] = flag[1] = F (false)

Process P0:
while(1)
{
    flag[0] = T;
    while (flag[1]);
    CRITICAL SECTION
    flag[0] = F;
    REMAINDER SECTION
}

Process P1:
while(1)
{
    flag[1] = T;
    while (flag[0]);
    CRITICAL SECTION
    flag[1] = F;
    REMAINDER SECTION
}
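The following is a hedged C sketch (not from the notes) of the two-flag protocol above, only to show how the entry and exit sections map to code. As written, if both processes set their flags before either checks the other's flag, both spin forever, so this attempt does not guarantee progress; real code would also need atomic operations or fences to prevent reordering.

// Direct translation of the flag-based entry/exit protocol for P0 and P1.
#include <stdbool.h>

volatile bool flag[2] = {false, false};  // initially flag[0] = flag[1] = F

void p0(void)
{
    while (1) {
        flag[0] = true;          // P0 announces it wants to enter
        while (flag[1]);         // wait while P1 also wants to enter
        /* CRITICAL SECTION */
        flag[0] = false;         // P0 leaves the critical section
        /* REMAINDER SECTION */
    }
}

void p1(void)
{
    while (1) {
        flag[1] = true;          // P1 announces it wants to enter
        while (flag[0]);         // wait while P0 also wants to enter
        /* CRITICAL SECTION */
        flag[1] = false;         // P1 leaves the critical section
        /* REMAINDER SECTION */
    }
}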
SEMAPHORES:
do
{
    wait(S);
    CS
    signal(S);
    RS
} while(T);

Now, let us see how this implements mutual exclusion. Let there be two processes P1 and P2 and a semaphore S initialized to 1. If P1 enters its critical section, the value of semaphore S becomes 0. Now if P2 wants to enter its critical section, it will wait until S > 0, which can only happen when P1 finishes its critical section and calls the signal (V) operation on semaphore S. This way mutual exclusion is achieved, and progress is also achieved.
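As a concrete illustration, here is a hedged sketch (not from the notes) of the same wait(S)/signal(S) pattern in C, using an unnamed POSIX semaphore initialized to 1 to protect a shared counter; the thread function, variable names, and iteration counts are only illustrative.

// Two threads increment a shared counter; the binary semaphore S plays the
// role of wait(S)/signal(S) around the critical section.
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;            // binary semaphore, initialized to 1
int shared = 0;     // shared resource

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&S);   // entry section: wait(S)
        shared++;       // critical section
        sem_post(&S);   // exit section: signal(S)
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&S, 0, 1);              // S = 1 initially
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared); // always 200000 with the semaphore in place
    sem_destroy(&S);
    return 0;
}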
Producer-Consumer Problem
Semaphore S = 1 // mutual exclusion on the buffer
Semaphore E = n // counts empty slots
Semaphore F = 0 // counts full (filled) slots
n is the size of the buffer; there is a buffer of n slots.
Mutual Exclusion:
• Example: Producer waits for S before appending. If S is 0 (locked), another process is accessing the
buffer. When done, S is signaled (set to 1).
• Ensured By: Semaphore S ensures only one process accesses the buffer at a time.
Bounded Waiting:
• Example: Producer calls wait(E) to check if there’s space in the buffer. If E is 0, the producer waits.
Similarly, the consumer calls wait(F) to check if there are items to consume.
• Ensured By: Semaphore E (for producers) and F (for consumers) prevent indefinite waiting.
Progress:
• Example: If a producer or consumer can proceed (buffer not full for producer, not empty for consumer),
they will proceed without unnecessary delays.
• Ensured By: Semaphores E, F, and S ensure processes move forward when conditions are met.
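The notes describe the roles of S, E, and F without showing the producer and consumer loops; below is a hedged sketch (not from the notes) of how those loops are commonly written with the three semaphores. The circular-buffer variables (buffer, in, out) are illustrative, and S, E, and F are assumed to be initialized (e.g. via sem_init) to 1, N, and 0 respectively at startup.

#include <semaphore.h>

#define N 10                 // n, the size of the buffer
int buffer[N];
int in = 0, out = 0;         // next slot to fill / next slot to empty

sem_t S;                     // mutual exclusion, initialized to 1
sem_t E;                     // empty slots, initialized to N
sem_t F;                     // full slots, initialized to 0

void producer_step(int item)
{
    sem_wait(&E);            // wait(E): block if no slot is empty
    sem_wait(&S);            // wait(S): lock the buffer
    buffer[in] = item;       // append into the next free slot
    in = (in + 1) % N;
    sem_post(&S);            // signal(S): unlock the buffer
    sem_post(&F);            // signal(F): one more full slot
}

int consumer_step(void)
{
    sem_wait(&F);            // wait(F): block if no item is available
    sem_wait(&S);            // wait(S): lock the buffer
    int item = buffer[out];  // take the oldest item
    out = (out + 1) % N;
    sem_post(&S);            // signal(S): unlock the buffer
    sem_post(&E);            // signal(E): one more empty slot
    return item;
}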
Dekker's Solution
Mutual Exclusion:
• Ensured By: If P0 and P1 want to enter the
critical section, turn and flag ensure only
one enters.
Bounded Waiting:
• Ensured By: Each process waits a finite time
due to the turn variable.
Progress:
• Ensured By: Processes not in the critical
section don't hinder others.
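The notes describe Dekker's solution only in words; the sketch below (not from the notes) shows the usual form of the algorithm for process P0, using the flag array and the turn variable mentioned above. P1 is symmetric with the indices 0 and 1 swapped, and real code would need atomics or memory fences to prevent reordering.

volatile int flag[2] = {0, 0};  // flag[i] = 1 means Pi wants to enter its CS
volatile int turn = 0;          // whose turn it is when both want to enter

void p0(void)
{
    while (1) {
        flag[0] = 1;                 // P0 announces its intention
        while (flag[1]) {            // P1 also wants the CS
            if (turn != 0) {         // it is P1's turn
                flag[0] = 0;         //   back off...
                while (turn != 0);   //   ...and wait for the turn
                flag[0] = 1;         //   then try again
            }
        }
        /* CRITICAL SECTION */
        turn = 1;                    // hand the turn to P1
        flag[0] = 0;                 // P0 no longer wants the CS
        /* REMAINDER SECTION */
    }
}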
Test and Set
while (TestAndSet(&lock));   // atomically: return the old value of lock and set lock = 1; spin while the old value was 1
// CRITICAL SECTION
lock = 0;                    // exit section: release the lock
The test of lock and the setting of lock must happen as one atomic hardware instruction (TestAndSet); if they are performed as two separate steps, two processes may both see lock == 0 and enter the critical section together.
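On real hardware the atomic test-and-set is provided by the processor; the sketch below (not from the notes) uses C11's atomic_flag, whose test-and-set operation is guaranteed to be atomic, to build the same spin lock. Names and iteration counts are illustrative.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;  // clear means unlocked
int shared = 0;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&lock)); // spin until we atomically set the flag
        shared++;                                // critical section
        atomic_flag_clear(&lock);                // exit section: lock = 0
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  // 200000: no update is lost
    return 0;
}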
The Sleeping Barber Problem
→ Another classical IPC problem takes place in a barber shop. The barber shop has one barber, one
barber chair, and n chairs for waiting customers, if any, to sit on.
→ If there are no customers present, the barber sits down in the barber chair and falls asleep.
→ When a customer arrives, he has to wake up the sleeping barber. If additional customers arrive while
the barber is cutting a customer's hair, they either sit down (if there are empty chairs) or leave the shop
(if all chairs are full).
→ The problem is to program the barber and the customers without getting into race conditions
→ The solution uses three semaphores, customers, which counts waiting customers (excluding the
customer in the barber chair, who is not waiting), barbers, the number of barbers (0 or 1) who are idle,
waiting for customers, and mutex, which is used for mutual exclusion. We also need a variable, waiting,
which also counts the waiting customers.
→ The reason for having waiting is that there is no way to read the current value of a semaphore. In this
solution, a customer entering the shop has to count the number of waiting customers. If it is less than
the number of chairs, he stays; otherwise, he leaves
Initially chairs = n;
semaphore customer = 0;   // No. of customers in the waiting room
semaphore barber = 0;     // barber is idle
semaphore mutex = 1;      // for mutual exclusion
int waiting = 0;          // No. of waiting customers
Customer Process
{
    while(1)
    {
        wait(mutex);
        if (waiting < chairs)
        {
            waiting = waiting + 1;
            signal(customer);
            signal(mutex);
            wait(barber);
            get_haircut();
        }
        else
        {
            signal(mutex);
        }
    }
}

Barber Process
{
    while(1)
    {
        wait(customer);
        wait(mutex);
        waiting = waiting - 1;
        signal(barber);
        signal(mutex);
        cut_hair();
    }
}
Readers-Writers Problem
→ The readers-writers problem is a classical problem of process synchronization. It relates to a data set,
such as a file, that is shared between more than one process at a time.
→ Among these processes, some are Readers, which can only read the data set and do not perform any
updates, and some are Writers, which can both read and write the data set.
→ The readers-writers problem is about managing synchronization among the various reader and writer
processes so that no problems arise with the data set, i.e. no inconsistency is generated.
→ Here the task is to design the code in such a manner that if one reader is reading, then no writer is
allowed to update at the same point of time;
→ similarly, if one writer is writing, no reader is allowed to read the file at that point of time, and if one
writer is updating a file, other writers should not be allowed to update the file at the same point of
time.
→ However, multiple readers can access the object at the same time.
→ In the reader code shown below, mutex and write are semaphores that have an initial value of 1,
→ whereas the readcount variable has an initial value of 0.
→ The readcount variable denotes the number of readers accessing the file concurrently.
→ The moment readcount becomes 1, the wait operation is performed on the write semaphore, which decreases
its value by one. This means that a writer is no longer allowed to access the file.
→ On completion of the read operation, readcount is decremented by one. When readcount becomes 0, the
signal operation is performed on write, which permits a writer to access the file.
Code for Writer Process
while(1)
{
    wait(write);
    WRITE INTO THE FILE
    signal(write);
}
If a writer wishes to access the file, the wait operation is performed on the write semaphore, which decrements write to 0, so no other writer can access the file. On completion of the writing job, the writer who was accessing the file performs the signal operation on write.

Code for Reader Process
int readcount = 0;        // shared among all readers
wait(mutex);
readcount++;              // on each entry of a reader, increment readcount
if (readcount == 1)
{
    wait(write);          // the first reader locks out writers
}
signal(mutex);
READ THE FILE
wait(mutex);
readcount--;              // on every exit of a reader, decrement readcount
if (readcount == 0)
{
    signal(write);        // the last reader permits writers again
}
signal(mutex);
Explanation:
- Initial semaphore write = 1.
- Two processes P0 and P1 want to write.
- P0 enters first, executes `Wait(write)`, setting write = 0, and continues writing.
- P1's `Wait(write)` makes it wait, since write = 0.
- When P0 finishes, it signals `write`, setting write = 1.
- P1 can now write.
- This shows only one process can write at a time.
Explanation:
- Initial semaphore mutex = 1, readcount = 0.
- P0 wants to read, P1 wants to write.
- P0 executes `Wait(mutex)`, decrementing mutex to 0, increments readcount to 1.
- If readcount == 1, `wait(write)` sets write = 0, preventing writers.
- `signal(mutex)` increments mutex to 1, allowing other readers.
- A writer cannot enter because `wait(write)` keeps it waiting while readers are active.
- When P0 finishes reading, it signals `write` and `mutex`.
- Writers can now proceed.
Explanation:
- Initial semaphore write = 1.
- P0 writes, P1 wants to read.
- P0 executes `Wait(write)`, setting write = 0.
- P1's `Wait(mutex)` decrements mutex to 0, increments readcount to 1.
- Since readcount == 1, P1 executes `wait(write)`, which makes it wait because write = 0.
- When P0 finishes writing, it signals `write`.
- P1 can now read.
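For reference, here is a hedged, compilable sketch (not from the notes) of the same reader and writer pseudocode using POSIX threads and semaphores; the shared file is represented by a simple integer, and the thread counts are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex_sem;        // protects readcount (the notes' "mutex")
sem_t write_sem;        // exclusive access for writers (the notes' "write")
int readcount = 0;
int shared_data = 0;    // stands in for the shared file

void *reader(void *arg)
{
    sem_wait(&mutex_sem);
    readcount++;
    if (readcount == 1)
        sem_wait(&write_sem);                 // first reader locks out writers
    sem_post(&mutex_sem);

    printf("reader sees %d\n", shared_data);  // READ THE FILE

    sem_wait(&mutex_sem);
    readcount--;
    if (readcount == 0)
        sem_post(&write_sem);                 // last reader lets writers in again
    sem_post(&mutex_sem);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&write_sem);
    shared_data++;                            // WRITE INTO THE FILE
    sem_post(&write_sem);
    return NULL;
}

int main(void)
{
    pthread_t r[3], w;
    sem_init(&mutex_sem, 0, 1);
    sem_init(&write_sem, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    sem_destroy(&mutex_sem);
    sem_destroy(&write_sem);
    return 0;
}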
The Dining Philosophers Problem
→ The dining philosopher's problem is the classical problem of synchronization which says that Five
philosophers are sitting around a circular table and their job is to think and eat alternatively.
→ A bowl of noodles is placed at the center of the table along with five chopsticks for each of the
philosophers.
→ To eat, a philosopher needs both their left and right chopsticks. A philosopher can eat only if both the
immediate left and right chopsticks of the philosopher are available.
→ If both the immediate left and right chopsticks of the philosopher are not available, then the
philosopher puts down the chopstick they are holding (either left or right) and starts thinking again.
→ The dining philosopher demonstrates a large class of concurrency control problems hence it's a classic
synchronization problem
Here chopstick[0..4] are semaphores, each initialized to 1.
void Philosopher(int i)
{
    while(1)
    {
        wait( chopstick[i] );            // pick up the left chopstick
        wait( chopstick[(i+1) % 5] );    // pick up the right chopstick
        EATING THE NOODLE
        signal( chopstick[i] );          // put down the left chopstick
        signal( chopstick[(i+1) % 5] );  // put down the right chopstick
        THINKING
    }
}
Let's understand how the above code gives a solution to the dining philosopher problem.
With i = 0 (initial value): philosopher 0 first waits on chopstick[0] (left) and then on chopstick[1] (right), so philosopher 0 can eat only when both of these chopsticks are free; as a result, neither neighbour (philosopher 1 or philosopher 4) can eat at the same time.
The above solution therefore ensures that no two adjacent philosophers can eat at the same time. The problem with this solution is that it may result in a deadlock: if every philosopher picks up their left chopstick simultaneously, no one can ever obtain a right chopstick, and no philosopher can eat.
Inter Process Communication (IPC)
→ Inter process communication is the mechanism provided by the operating system that allows processes
to communicate with each other. This communication could involve a process letting another process
know that some event has occurred or transferring of data from one process to another.
The models of interprocess communication are as follows −
• Shared memory model
• Message passing model
→ Memory communication is faster in the shared memory model as compared to the message
passing model on the same machine.
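As a small illustration of the message passing model, here is a hedged sketch (not from the notes) in C using a POSIX pipe: the parent process sends a short message that the child process receives. The message text is only illustrative.

#include <unistd.h>
#include <sys/wait.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int fd[2];                               // fd[0]: read end, fd[1]: write end
    char buf[64];

    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {                       // child process: the receiver
        close(fd[1]);                        // child does not write
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                            // parent does not read
    const char *msg = "event occurred";      // letting another process know an event happened
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                              // wait for the child to finish
    return 0;
}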