OS Unit-2(Disha Notes)

The document discusses process synchronization in operating systems, emphasizing the importance of coordinating concurrent processes to prevent data inconsistency. It outlines critical sections, race conditions, and the requirements for solving the critical section problem, including mutual exclusion, progress, and bounded waiting. Additionally, it covers various synchronization mechanisms such as semaphores, the producer-consumer problem, and the readers-writers problem, providing examples and solutions for each scenario.


Unit-2: Concurrent Processes

Sunday, June 9, 2024 2:16 PM

Process Synchronization
Process synchronization in an OS is the:
→ task of coordinating the execution of processes in such a way that
→ no two processes can simultaneously access the same shared data and resources.
→ It is a critical part of operating system design, as it ensures that processes can safely share resources
without interfering with each other.

Need of Synchronization

→ Process synchronization is needed


○ When multiple processes execute concurrently and share some system resources.
○ To avoid inconsistent results.

Process in-depth(Based on functioning)

Let’s understand some different sections of a program.


○ Entry Section:- This section decides whether a process may enter the
critical section.
○ Critical Section:- This section ensures that only one process at a time
accesses and modifies the shared data or resources.
○ Exit Section:- This section allows a process waiting in the entry
section to proceed, and ensures that finished processes are removed
from the critical section.
○ Remainder Section:- The remainder section contains the other parts of
the code, which are in neither the critical nor the exit section.

Terminologies

Critical Section
→ Critical Section is the part of a program which tries to access shared resources. That resource may be
any resource in a computer like a memory location, Data structure, CPU or any IO device.
→ The critical section cannot be executed by more than one process at the same time; the operating system
faces difficulty in allowing and disallowing processes to enter the critical section.
→ Example:-
➢ The following illustration shows how inconsistent results may be produced if multiple processes execute
concurrently without any synchronization.
• Consider-
○ Two processes P1 and P2 are executing concurrently.
○ Both the processes share a common variable named “count” having initial value = 5.
○ Process P1 tries to increment the value of count.
○ Process P2 tries to decrement the value of count.

Race Condition
→ A race condition occurs when more than one process tries to access and modify the same shared data or
resource at the same time. Because many processes modify the shared data concurrently, there is a high
chance that a process ends up with a wrong result or wrong data.
→ Each process "races" to complete its operation on the shared data first, which is why this is called a race
condition.
→ The final value depends on the sequence of execution and on how the shared variable is used.

Critical Section Problem

Requirements for solution to critical-section problem / Rules of Critical Sections

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be
executing in its critical section.
2. Progress - If no process is executing in its critical section and some processes wish to enter their
critical sections, then the selection of the process that will enter the critical section next cannot be
postponed indefinitely.
3. Bounded Waiting (No Starvation) - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical section
and before that request is granted.
○ Assume that each process executes at a nonzero speed.
○ No assumption is made concerning the relative speed of the n processes.

Solutions for Critical Section Problem

Synchronization for two processes using turn variable(single variable)

Initially, turn = 0;

Process P0:
while (1)
{
    while (turn != 0);   // entry section: spin until it is P0's turn
    CRITICAL SECTION
    turn = 1;            // exit section: pass the turn to P1
    REMAINDER SECTION
}

Process P1:
while (1)
{
    while (turn != 1);   // entry section: spin until it is P1's turn
    CRITICAL SECTION
    turn = 0;            // exit section: pass the turn to P0
    REMAINDER SECTION
}

Is this solution valid? The entry section (the while loop on turn) allows only one process to enter the CS,
and the exit section sets turn so that the other process can enter after a process leaves the CS.

Here turn takes only the values 0 and 1.

P0 and P1 are independent. Initially turn = 0.
For P0 the while condition is false, so P0 enters the CS.
Suppose P0 is preempted inside the CS and control goes to P1.
For P1, turn is still 0, so its while condition is true and P1 keeps spinning as long as turn = 0.
When control returns to P0, P0 finishes the CS and its exit section sets turn = 1.
Now in P1, turn = 1, the condition is false, and P1 enters the CS.
So when turn = 0, P0 may enter the CS; when turn = 1, P1 may enter the CS.
At any time, only one process can be in the CS, so mutual exclusion holds.

Now check progress: the processes are forced to alternate strictly. If it is P1's turn but P1 does not
actually want to enter the CS, P0 is blocked even though the CS is free. The solution does not take into
account whether a process actually wants to enter the CS or not.
So this solution does not satisfy progress.

Synchronization for Two Processes using flag Variable (two Variable)

Initially flag[0] = flag[1] = F;

Process P0:
while (1)
{
    flag[0] = T;         // P0 declares it wants to enter
    while (flag[1]);     // wait while P1 is interested
    CRITICAL SECTION
    flag[0] = F;
    REMAINDER SECTION
}

Process P1:
while (1)
{
    flag[1] = T;         // P1 declares it wants to enter
    while (flag[0]);     // wait while P0 is interested
    CRITICAL SECTION
    flag[1] = F;
    REMAINDER SECTION
}

Check for Mutual Exclusion


Suppose P0 wants to enter the Critical Section, so it sets `flag[0] = T`. P0 then checks whether P1 wants to
enter the Critical Section by checking `flag[1]`. Here, P1 does not want to enter, so `flag[1] = F`.
P0's loop `while (flag[1]);` therefore evaluates as `while (F);`; the condition is false,
and P0 enters the Critical Section.
So according to this solution, only one process enters the Critical Section at a time.
Therefore, mutual exclusion is achieved.

Check for Bounded Wait


Suppose P0 wants to enter the Critical Section then sets `flag[0] = T` and checks if P1 is in the Critical
Section by evaluating `while (flag[1]);`. If P1 is frequently entering and exiting the Critical Section, `flag[1]` will
often be `T`, causing P0 to wait.
There is no mechanism to ensure that P0 will get a chance to enter the Critical Section within a bounded
time if P1 keeps setting `flag[1] = T`.
So according to this solution, a process may wait indefinitely if the other process frequently occupies the
Critical Section.
Therefore, bounded wait is not achieved.

Check for Progress


Suppose both P0 and P1 are in their Remainder Sections and both decide to enter the Critical Section at the
same time. P0 sets `flag[0] = T` and P1 sets `flag[1] = T` before either reaches its `while` loop. Now P0
spins on `while (flag[1]);` and P1 spins on `while (flag[0]);` — each waits for the other, and neither ever
enters the Critical Section even though it is free.
So according to this solution, the decision of which process enters next can be postponed indefinitely (a
deadlock).
Therefore, progress is not achieved.

SEMAPHORES:

→ A semaphore is an integer variable that, apart from initialization, is accessed only through two standard
atomic operations:
   i. wait()    ii. signal()

Here int S = 1;

wait(S)
{
    while (S <= 0);
    S = S - 1;
}

signal(S)
{
    S = S + 1;
}

Binary semaphore –
→ integer value can range only between 0 and 1
→ same as a mutex lock

Counting semaphore –
→ integer value can range over an unrestricted domain
→ generally used when multiple instances of a resource are available

Applications of Semaphores
→ In the CS problem solution
→ To decide the order of execution among processes
→ Resource management

Critical section problem solution for n processes using a binary semaphore:

do
{
    wait(S);
    CRITICAL SECTION
    signal(S);
    REMAINDER SECTION
} while (T);

Now, let us see how this implements mutual exclusion. Let there be two processes P1 and P2 and a semaphore
S initialized to 1. If P1 enters its critical section, the value of semaphore S becomes 0. If P2 now wants to
enter its critical section, it must wait until S > 0, which can only happen when P1 finishes its critical
section and calls the V (signal) operation on semaphore S.
This way mutual exclusion is achieved, and progress is also achieved.

Producer-Consumer Problem

→ The Producer-Consumer problem is a classical multi-process synchronization problem; that is, we are
trying to achieve synchronization between more than one process.
→ There is one Producer in the producer-consumer problem; the Producer produces some items,
→ whereas there is one Consumer that consumes the items produced by the Producer.
→ The producer and consumer share the same memory buffer, which is of fixed size.
→ The task of the Producer is to produce an item, put it into the memory buffer, and start producing
again. The task of the Consumer is to consume items from the memory buffer.

Initially:
    S = 1;        // binary semaphore (mutex) protecting the buffer
    E = n;        // counting semaphore: number of empty slots
    F = 0;        // counting semaphore: number of full slots
    n is the size of the buffer (a buffer of n slots).

void Producer()
{
    while (T)
    {
        Produce();
        wait(E);        // wait for an empty slot
        wait(S);        // lock the buffer
        append();
        signal(S);      // unlock the buffer
        signal(F);      // one more full slot
    }
}

void Consumer()
{
    while (T)
    {
        wait(F);        // wait for a full slot
        wait(S);        // lock the buffer
        take();
        signal(S);      // unlock the buffer
        signal(E);      // one more empty slot
    }
}

Mutual Exclusion:
• Example: Producer waits for S before appending. If S is 0 (locked), another process is accessing the
buffer. When done, S is signaled (set to 1).
• Ensured By: Semaphore S ensures only one process accesses the buffer at a time.
Bounded Waiting:
• Example: Producer calls wait(E) to check if there’s space in the buffer. If E is 0, the producer waits.
Similarly, the consumer calls wait(F) to check if there are items to consume.
• Ensured By: Semaphore E (for producers) and F (for consumers) prevent indefinite waiting.
Progress:
• Example: If a producer or consumer can proceed (buffer not full for producer, not empty for consumer),
they will proceed without unnecessary delays.
• Ensured By: Semaphores E, F, and S ensure processes move forward when conditions are met.

Dekker's Solution

Mutual Exclusion:
• Ensured By: If P0 and P1 want to enter the
critical section, turn and flag ensure only
one enters.
Bounded Waiting:
• Ensured By: Each process waits a finite time
due to the turn variable.
Progress:
• Ensured By: Processes not in the critical
section don't hinder others.

Test and Set

Initially lock = FALSE;

do
{
    while (test_and_set(&lock));   // atomically sets lock = TRUE and returns its old
                                   // value; spin while it was already TRUE

    // Critical Section

    lock = FALSE;                  // release the lock

    // Remainder Section
} while (TRUE);

Note that the check of lock and the setting of lock must happen as a single atomic hardware instruction
(test-and-set). If they are done as two separate steps (`while (lock == 1);` followed by `lock = 1;`), both
processes can pass the check before either sets the lock, and mutual exclusion fails.

Peterson Solution for 2 process

Sleeping Barber Problem

→ Another classical IPC problem takes place in a barber shop. The barber shop has one barber, one
barber chair, and n chairs for waiting customers, if any, to sit on.
→ If there are no customers present, the barber sits down in the barber chair and falls asleep.
→ When a customer arrives, he has to wake up the sleeping barber. If additional customers arrive while
the barber is cutting a customer's hair, they either sit down (if there are empty chairs) or leave the shop
(if all chairs are full).
→ The problem is to program the barber and the customers without getting into race conditions
→ The solution uses three semaphores, customers, which counts waiting customers (excluding the
customer in the barber chair, who is not waiting), barbers, the number of barbers (0 or 1) who are idle,
waiting for customers, and mutex, which is used for mutual exclusion. We also need a variable, waiting,
which also counts the waiting customers.
→ The reason for having waiting is that there is no way to read the current value of a semaphore. In this
solution, a customer entering the shop has to count the number of waiting customers. If it is less than
the number of chairs, he stays; otherwise, he leaves

Initially:
    int chairs = n;            // number of waiting chairs
    semaphore customer = 0;    // number of customers in the waiting room
    semaphore barber = 0;      // number of barbers (0 or 1) who are idle
    semaphore mutex = 1;       // for mutual exclusion on 'waiting'
    int waiting = 0;           // number of waiting customers

Customer Process:
while (1)
{
    wait(mutex);
    if (waiting < chairs)
    {
        waiting = waiting + 1;
        signal(customer);     // announce a waiting customer (wakes the barber)
        signal(mutex);
        wait(barber);         // wait until the barber is free
        get_haircut();
    }
    else
    {
        signal(mutex);        // no free chair: leave the shop
    }
}

Barber Process:
while (1)
{
    wait(customer);           // sleep until a customer arrives
    wait(mutex);
    waiting = waiting - 1;
    signal(barber);
    signal(mutex);
    cut_hair();
}

Readers-Writers Problem

→ The readers-writers problem is a classical problem of process synchronization, it relates to a data set
such as a file that is shared between more than one process at a time.
→ Among these various processes, some are Readers - which can only read the data set; they do not
perform any updates, some are Writers - can both read and write in the data sets.
→ The readers-writers problem is used for managing synchronization among various reader and writer
process so that there are no problems with the data sets, i.e. no inconsistency is generated.
→ Here the task is to design the code in such a manner that if one reader is reading then no writer is
allowed to update at the same point of time,
→ similarly, if one writer is writing no reader is allowed to read the file at that point of time and if one
writer is updating a file other writers should not be allowed to update the file at the same point of
time.
→ However, multiple readers can access the object at the same time

→ In the code of the reader below, mutex and write are semaphores that have an initial value of 1,
→ whereas the readcount variable has an initial value of 0.
→ The readcount variable denotes the number of readers accessing the file concurrently.
→ The moment readcount becomes 1, a wait operation is performed on the write semaphore, which decreases
its value by one. This means that a writer is no longer allowed to access the file.
→ On completion of the read operation, readcount is decremented by one. When readcount becomes 0, a
signal operation is performed on the write semaphore, permitting a writer to access the file.

Code for Writer Process:

while (1)
{
    wait(write);
    WRITE INTO THE FILE
    signal(write);
}

If a writer wishes to access the file, a wait operation is performed on the write semaphore, which
decrements write to 0, so no other writer can access the file. On completion of the writing job, the writer
performs a signal operation on write.

Code for Reader Process:

int readcount = 0;

wait(mutex);
readcount++;                 // on each entry of a reader, increment readcount
if (readcount == 1)
{
    wait(write);             // first reader locks out writers
}
signal(mutex);

READ THE FILE

wait(mutex);
readcount--;                 // on every exit of a reader, decrement readcount
if (readcount == 0)
{
    signal(write);           // last reader lets writers back in
}
signal(mutex);

CASE 1: WRITING - WRITING → NOT ALLOWED


When two or more processes want to write, it is not allowed.

Explanation:
- Initial semaphore write = 1.
- Two processes P0 and P1 want to write.
- P0 enters first, executes `Wait(write)`, setting write = 0, and continues writing.
- P1's `Wait(write)` puts it in an infinite loop since write = 0.
- When P0 finishes, it signals `write`, setting write = 1.
- P1 can now write.
- This shows only one process can write at a time.

CASE 2: READING - WRITING → NOT ALLOWED


When one or more processes are reading, writing is not allowed.

Explanation:
- Initial semaphore mutex = 1, readcount = 0.
- P0 wants to read, P1 wants to write.
- P0 executes `wait(mutex)`, decrementing mutex to 0, and increments readcount to 1.
- Since readcount == 1, `wait(write)` sets write = 0, locking out writers.
- `signal(mutex)` increments mutex back to 1, allowing other readers to enter.
- P1 cannot enter because its `wait(write)` keeps it waiting while write = 0.
- When P0 finishes reading, readcount drops back to 0, so P0 signals `write` (and `mutex`).
- Writers can now proceed.

CASE 3: WRITING - READING → NOT ALLOWED


When one process is writing, reading is not allowed.

Explanation:
- Initial semaphore write = 1.
- P0 writes, P1 wants to read.
- P0 executes `Wait(write)`, setting write = 0.
- P1's `Wait(mutex)` decrements mutex to 0, increments readcount to 1.
- If readcount == 1, `wait(write)` traps P1 in an infinite loop.
- When P0 finishes writing, it signals `write`.
- P1 can now read.

CASE 4: READING - READING → ALLOWED


Multiple processes can read simultaneously.

The Dining Philosophers Problem

→ The dining philosophers problem is a classical synchronization problem: five philosophers sit around a
circular table, and their job is to think and eat alternately.
→ A bowl of noodles is placed at the center of the table, along with five chopsticks, one between each pair
of philosophers.
→ To eat, a philosopher needs both the right and the left chopstick. A philosopher can eat only if both the
immediate left and right chopsticks are available.
→ If both immediate left and right chopsticks are not available, the philosopher puts down the chopstick
already in hand (left or right) and starts thinking again.
→ The dining philosophers problem represents a large class of concurrency-control problems; hence it is a
classic synchronization problem.

Five Philosophers sitting around the table

Dining Philosophers Problem


Let's understand the Dining Philosophers Problem with the below code, we have used fig 1 as a reference to
make you understand the problem exactly. The five Philosophers are represented as P0, P1, P2, P3, and P4
and five chopsticks by C0, C1, C2, C3, and C4.

The solution of the Dining Philosophers Problem


We use a semaphore to represent each chopstick, and this acts as the solution of the Dining Philosophers
Problem. Wait and signal operations are used: to pick up a chopstick, a wait operation is executed, and to
release a chopstick, a signal operation is executed. The chopsticks are represented as an array of
semaphores:

semaphore C[5];

Initially, each element C0, C1, C2, C3, and C4 is initialized to 1, as the chopsticks are on the table and
not picked up by any philosopher.

void Philosopher(int i)
{
    while (1)
    {
        wait( C[i] );              // pick up the left chopstick
        wait( C[(i + 1) % 5] );    // pick up the right chopstick
        EATING THE NOODLES
        signal( C[i] );            // put down the left chopstick
        signal( C[(i + 1) % 5] );  // put down the right chopstick
        THINKING
    }
}

→ First, `wait(C[i])` and `wait(C[(i+1) % 5])` are called, indicating that philosopher i picks up the left
and right chopsticks. Then the eating function is executed.

→ After eating, `signal(C[i])` and `signal(C[(i+1) % 5])` are called, indicating that philosopher i has put
down both chopsticks. The philosopher then starts thinking again.

Let's see how the above code gives a solution to the dining philosophers problem.

- Philosopher P0 (i = 0) wants to eat:
  - Executes `wait(C[0])`, picking up C0 and setting semaphore C0 to 0.
  - Executes `wait(C[(0+1) % 5])`, picking up C1 (since (0 + 1) % 5 = 1) and setting semaphore C1 to 0.

- Philosopher P1 wants to eat:
  - Executes `wait(C[1])` and tries to pick up C1, but blocks since semaphore C1 is already 0.
  - P1 busy-waits and cannot pick up C1.

- Philosopher P2 wants to eat:
  - Executes `wait(C[2])`, picking up C2 and setting semaphore C2 to 0.
  - Executes `wait(C[(2+1) % 5])`, picking up C3 (since (2 + 1) % 5 = 3) and setting semaphore C3 to 0.

Problem with Dining Philosopher

We have shown with the above solution that no two neighbouring philosophers can eat at the same time. The
problem with this solution is that it may result in deadlock: if every philosopher picks up their left
chopstick simultaneously, each then waits forever for a right chopstick held by a neighbour, and no
philosopher can eat.

Some of the solutions include the following:


1. Limit the philosophers at the table to four. Philosopher P3 uses chopsticks C3 and C4, eats, then releases
them, incrementing semaphores C3 and C4.
2. Philosopher P2, with chopsticks C2 and C3, eats and releases them, allowing others to eat.
3. Odd-position philosophers take the right chopstick first; even-position philosophers take the left first.
4. Philosophers choose chopsticks only if both are available simultaneously.
5. P0, P1, P2, and P3 take the left chopstick first, then the right; P4 does the opposite. This forces P4 into a loop,
keeping chopstick C4 free for P3, preventing deadlock.

Inter process Communication

→ Inter process communication is the mechanism provided by the operating system that allows processes
to communicate with each other. This communication could involve a process letting another process
know that some event has occurred or transferring of data from one process to another.

A process can be of two types:


Independent process.
Co-operating process.
An independent process is not affected by the execution of other processes while a co-operating process can
be affected by other executing processes.

The models of interprocess communication are as follows −

1. Shared Memory Model


→ Shared memory is the memory that can be simultaneously accessed by multiple processes. This is
done so that the processes can communicate with each other. All POSIX systems, as well as
Windows operating systems use shared memory.

Advantage of Shared Memory Model

→ Memory communication is faster on the shared memory model as compared to the message
passing model on the same machine.

Disadvantages of Shared Memory Model

Some of the disadvantages of shared memory model are as follows −


→ All the processes that use the shared memory model need to make sure that they are not writing to
the same memory location.
→ Shared memory model may create problems such as synchronization and memory protection that
need to be addressed.

2. Message Passing Model


→ Multiple processes can read and write data to the message queue without being connected to each
other. Messages are stored on the queue until their recipient retrieves them. Message queues are
quite useful for interprocess communication and are used by most operating systems.

Advantage of Messaging Passing Model


→ The message passing model is much easier to implement than the shared memory model.
Disadvantage of Messaging Passing Model
→ The message passing model has slower communication than the shared memory model because the
connection setup takes time.

