OS Session1 U2

Process Synchronization

• Background
• The Critical-Section Problem
• Peterson’s Solution
• Synchronization Hardware
• Semaphores
• Classic Problems of Synchronization
• Monitors
• Synchronization Examples
• Atomic Transactions

Dr P S Patheja
Cooperating Processes

Processes running in the OS are either:

Independent Processes
 Cannot affect or be affected by the other processes.
 Do not share any data with any other process.

Cooperating Processes
 Can affect or be affected by the other processes.
 Share data with other processes.
Process Cooperation Purposes
• Information sharing (Share files)
• Computation speedup (create subtasks to run simultaneously)
• Modularity (system function divide into separate processes or threads)
• Convenience (Run several tasks of the same user in the same environment at the same time)
Bernstein’s Conditions:
One way to detect parallelism in a sequential
algorithm is to look for operations that can be
carried out independently of each other.
Consider the following statements in a program:
a = x + y;  // S1
b = z + 1;  // S2
c = a + b;  // S3
w = c + 1;  // S4
Statements S1 and S2 can be executed in parallel,
BUT statement S3 can NOT be executed before
statements S1 and S2; i.e., within this program there are
“precedence constraints”.
Bernstein’s Conditions: Contd…..
Consider the same statements:
a = x + y;  // S1
b = z + 1;  // S2
c = a + b;  // S3
w = c + 1;  // S4

Suppose we want to execute some of these statements
concurrently.
The statement c = a + b cannot be executed before both
a and b have been assigned values.
Similarly, (S4) w = c + 1 cannot be executed before the
new value of c has been computed in S3.
The statements a = x + y and b = z + 1 could be
executed concurrently, since neither depends upon the
other.
Bernstein’s Conditions Contd……

Let R(Si) = {a1, a2, a3, ..., an}, the Read Set for Si, be the set
of all variables whose values are referenced (read) in statement Si
during its execution.
Let W(Si) = {b1, b2, ..., bn}, the Write Set for Si, be the set of
all variables whose values are updated (written) by the
execution of statement Si.

R(S3) = R(c = a + b) = { a, b }  // variables read by S3
W(S3) = W(c = a + b) = { c }     // variables written by S3
R(S4) = R(w = c + 1) = { c }     // variables read by S4
W(S4) = W(w = c + 1) = { w }     // variables written by S4

a = x + y;  // S1
b = z + 1;  // S2
c = a + b;  // S3
w = c + 1;  // S4

For any 2 statements S1 and S2 to run concurrently, the following
3 conditions (Bernstein’s Conditions) must be satisfied:
1) R(S1) ∩ W(S2) = { } (i.e., Φ or NULL)
2) W(S1) ∩ R(S2) = { }
3) W(S1) ∩ W(S2) = { }

Checking S1 and S2:
R(S1) ∩ W(S2) = {x, y} ∩ {b} = { }
W(S1) ∩ R(S2) = {a} ∩ {z} = { }
W(S1) ∩ W(S2) = {a} ∩ {b} = { }
so S1 and S2 can run concurrently.

But S2 can NOT be executed concurrently with S3, because
W(S2) ∩ R(S3) = {b} ∩ {a, b} = {b}  // != NULL
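The set intersections above can be mechanized. A minimal sketch (Python chosen here purely for illustration; the function name and the tuple encoding of read/write sets are my own) that checks Bernstein’s Conditions for the statements S1–S4:

```python
def can_run_concurrently(s1, s2):
    """Bernstein's Conditions: all three intersections must be empty."""
    r1, w1 = s1
    r2, w2 = s2
    return not (r1 & w2) and not (w1 & r2) and not (w1 & w2)

# (Read set, Write set) for each statement, transcribed from the slides.
S1 = ({"x", "y"}, {"a"})   # a = x + y
S2 = ({"z"}, {"b"})        # b = z + 1
S3 = ({"a", "b"}, {"c"})   # c = a + b
S4 = ({"c"}, {"w"})        # w = c + 1

print(can_run_concurrently(S1, S2))  # True  - independent
print(can_run_concurrently(S2, S3))  # False - W(S2) ∩ R(S3) = {b}
print(can_run_concurrently(S3, S4))  # False - W(S3) ∩ R(S4) = {c}
```

This matches the hand analysis: S1 and S2 may run in parallel, while S3 and S4 are ordered behind their inputs.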
Introduction to Cooperating Processes
• Processes within a system may be independent or
cooperating.
• Independent process cannot affect or be affected by
the execution of another process.
• Cooperating process can affect or be affected by other
processes, including sharing data.
• Reasons for cooperating processes:
– Information sharing (e.g., a shared file)
– Computation speed-up (multiple CPUs)
– Modularity (divide into separate processes/functions)
– Convenience (multiple tasks to work upon)
Inter process communication (IPC) mechanisms
Concurrent execution of cooperating processes requires mechanisms
• that allow processes to communicate with one another
• and to synchronize their actions.

[Diagram: mechanisms to communicate and mechanisms to synchronize,
both built around a shared resource]

Cooperating processes require an interprocess communication (IPC)
mechanism that allows them to exchange data and information,
i.e., to communicate and to synchronize.
• The OS provides these IPC mechanisms.
IPC models
There are two fundamental models of interprocess communication:
(1) shared memory and
(2) message passing.

• In the shared-memory model,
a region of memory that is shared by the cooperating processes is
established. Processes can then exchange information by reading and
writing data to the shared region.

• In the message-passing model,
communication takes place by means of messages exchanged
between the cooperating processes.
IPC/Communication models

[Figure] Communication models: (a) message passing, (b) shared memory.
Producer–Consumer paradigm

The bounded buffer assumes a fixed buffer size, and so requires
more synchronization.
The Bounded-Buffer Problem: make sure that the producer won’t try to add data
to the buffer if it is full, and that the consumer won’t try to remove data from an
empty buffer.

Producer–Consumer example:
 The Producer produces information, and waits if the buffer is full.
 The Consumer consumes information, and waits if the buffer is empty.

The Buffer is provided:
 Either by the OS through the use of a Message Passing system
 Or by an explicitly coded application with shared memory
Cooperation among Processes by Sharing
• Processes use and update shared data such as
shared variables, memory, files, and databases.
• Writing must be mutually exclusive to prevent a
race condition leading to inconsistent data
views.
• Critical sections are used to provide this data
integrity.

• A process requiring the critical section must not
be delayed indefinitely; no deadlock or
starvation.
Producer/Consumer (P/C) Problem (1)
• Paradigm for cooperating processes –
Producer process produces information
that is consumed by a Consumer process.
–Example 1: a print program produces
characters that are consumed by a
printer.
–Example 2: an assembler produces
object modules that are consumed by a
loader.
• There may be multiple Producers and Consumers.
Buffering
• We need a buffer to hold items that are
produced and later consumed:
• Implemented in one of three ways
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length of buffer
Sender never waits
Producer/Consumer (P/C) Problem (2)
• We need a buffer to hold items that are
produced and later consumed:
– unbounded-buffer places no practical limit
on the size of the buffer.
– bounded-buffer assumes that there is a
fixed buffer size.

Consumer must wait if the buffer is empty, and
Producer must wait if the buffer is full.
Producer/Consumer (P/C) Dynamics
1. A producer process produces information that is
consumed by a consumer process.
2. At any time, a producer activity may create some
data.
3. At any time, a consumer activity may want to accept
some data.
4. The data should be saved in a buffer until they are
needed.

5. If the buffer is finite, we want a producer to block if
its new data would overflow the buffer.
6. We also want a consumer to block if there are no data
available when it wants them.
Idea for Producer/Consumer Solution
Assumptions:
Buffer size = ‘n’ (slots 0 to n-1)
var n;  (size of buffer)
type item = …;  (type of item to be produced into the buffer)
var buffer : array (0..n-1) of item;  (declare buffer as an array)
    in, out : 0..n-1;  (both initialized to 0)
-----------
in & out are logical pointers:
in = next free slot in the buffer (where the producer inserts)
out = first full slot in the buffer (where the consumer removes)
Idea for Producer/Consumer Solution (contd.)
in & out are logical pointers, in, out : 0..n-1 (initialized to 0)
Buffer is EMPTY when in == out
Buffer is FULL when (in + 1) mod n == out
no-op = do-nothing instruction (the process busy-waits)

Alternatively, a counter initialized to 0 can be used:
counter == 0 means Empty, and counter == n means Full.
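The in/out pointer scheme can be exercised in a few lines. A single-threaded sketch (Python used only for illustration; the names in_ptr/out_ptr stand in for in/out, since in is a Python keyword) showing why empty is in == out, full is (in + 1) mod n == out, and why only n-1 of the n slots are usable:

```python
n = 5
buffer = [None] * n
in_ptr = out_ptr = 0  # both logical pointers start at slot 0

def is_empty():
    return in_ptr == out_ptr

def is_full():
    return (in_ptr + 1) % n == out_ptr

def produce(item):
    global in_ptr
    assert not is_full(), "producer would have to wait here"
    buffer[in_ptr] = item
    in_ptr = (in_ptr + 1) % n

def consume():
    global out_ptr
    assert not is_empty(), "consumer would have to wait here"
    item = buffer[out_ptr]
    out_ptr = (out_ptr + 1) % n
    return item

produced = 0
while not is_full():       # fill the buffer until the full test trips
    produce(produced)
    produced += 1
print(produced)            # 4, i.e. n - 1: one slot is sacrificed
                           # to distinguish "full" from "empty"
```

Keeping one slot unused is exactly the BUFFER_SIZE-1 limitation noted on the next slide; the counter-based variant removes it at the cost of a shared counter.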
Bounded-Buffer – Shared-memory Solution
• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;

int out = 0;
• Suggested solution is correct, but can only use
BUFFER_SIZE-1 elements.
Bounded-Buffer – Producer Process

item nextP;

while (TRUE) {
    /* produce an item and put it in nextP */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing – no free slots – buffer is full */
    buffer[in] = nextP;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded-Buffer – Consumer Process

item nextC;

while (TRUE) {
    while (in == out)
        ; /* do nothing – nothing to consume – buffer empty */
    nextC = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextC */
}

This scheme allows at most BUFFER_SIZE-1 items in the buffer at the same time.
In the counter-based variant, counter is incremented by 1 when we add an item
to the buffer, and decremented by 1 when we remove an item from the buffer.
Bounded-Buffer – Producer Process (with counter)

item nextP;

while (TRUE) {
    /* produce an item and put it in nextP */
    while (counter == BUFFER_SIZE)
        ; /* do nothing – no free slots – buffer is full */
    buffer[in] = nextP;
    in = (in + 1) % BUFFER_SIZE;
    counter = counter + 1;
}
Bounded-Buffer – Consumer Process (with counter)

item nextC;

while (TRUE) {
    while (counter == 0)
        ; /* do nothing – nothing to consume */
    nextC = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter = counter - 1;
    /* consume the item in nextC */
}

Although the Producer and Consumer routines are correct
separately, they may NOT function correctly when
executed concurrently.
Bounded-Buffer – Shared Counter (4)
• If both the producer and consumer
attempt to update the buffer
concurrently, the assembly language
statements may get interleaved.

• The interleaving depends upon how the

producer and consumer processes are
scheduled.
Race Condition
• counter++ could be implemented as
register1 = counter S0
register1 = register1 + 1 S1
counter = register1 S2
• counter-- could be implemented as
register2 = counter S3
register2 = register2 – 1 S4
counter = register2 S5
Consider this execution interleaving with “counter = 5” initially:
Order will be arbitrary (Random) for execution of these instructions.
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S3: consumer execute register2 = counter {register2 = 5}
S4: consumer execute register2 = register2 - 1 {register2 = 4}

S2: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
INCORRECT STATE: counter ends up as 4 (or 6 under other
interleavings), whereas the correct value is 5.
Therefore, the solution is to make sure only 1 process manipulates
counter at any time,
i.e., the need for PROCESS SYNCHRONIZATION.
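The bad interleaving above can be replayed deterministically. This sketch (plain sequential Python, not real concurrency; the register1/register2 names mirror the slide) executes the six machine-level steps in the order S0, S1, S3, S4, S2, S5 starting from counter = 5:

```python
counter = 5

# Producer's counter++ expanded into its machine-level steps
register1 = counter          # S0: register1 = 5
register1 = register1 + 1    # S1: register1 = 6

# Consumer's counter-- interleaved in the middle
register2 = counter          # S3: register2 = 5 (stale read!)
register2 = register2 - 1    # S4: register2 = 4

counter = register1          # S2: counter = 6
counter = register2          # S5: counter = 4 (overwrites S2)

print(counter)  # 4, not the correct value 5
```

The consumer’s read at S3 happens before the producer’s write at S2 lands, so one of the two updates is lost. Swapping S2 and S5 would leave counter at 6 instead; either way the result is wrong.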
This is the Race Condition
• Race condition: The situation where
several processes access and manipulate
shared data concurrently. The final value
of the shared data depends upon which
process finishes last.
• To prevent race conditions, concurrent
processes must coordinate
or be synchronized.
The Critical-Section Problem
• n processes competing to use some shared data
like Shared Variables, Writing a file etc.
• No assumptions may be made about speeds or
the number of CPUs.
• Each process has a code segment, called Critical
Section (CS), in which the shared data is
accessed.
• Problem – ensure that when one process is
executing in its CS, no other process
is allowed to execute in its CS.
CS Problem Dynamics
• The section of code implementing this request is
called the Entry Section (ES) (Negotiation Protocol).
• The critical section (CS) might be followed by a
Leave/Exit Section (LS) (Release Protocol).
• The remaining code is the Remainder Section (RS).
• The critical section problem is to design a CS protocol

that the processes can use so that their action will
not depend on the order in which their execution is
interleaved (possibly on many processors).
General structure of process Pi (other is Pj)

do {
    entry section          // Negotiation Protocol
    critical section
    exit/leave section     // Release Protocol
    remainder section
} while (TRUE);

• Processes may share some common variables
to synchronize their actions.
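The entry/exit-section structure maps directly onto a mutual-exclusion lock: acquire is the entry section, release is the exit section. A sketch using Python’s threading.Lock (an OS-provided mutex, jumping ahead of the software algorithms discussed below; the worker function and iteration count are illustrative choices):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:          # entry section (acquire)
            counter += 1    # critical section
                            # exit section (release) when leaving 'with'
        # remainder section would go here

threads = [threading.Thread(target=worker, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 - no lost updates, unlike the unsynchronized race
```

Because only one thread can hold the lock, the counter += 1 read-modify-write is never interleaved, so the final count is exact.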
Solution to Critical-Section Problem
• There are 3 requirements that must stand for
a correct solution:
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
• We can check all three requirements for
each proposed solution; the absence of
any one of them is enough to make a
solution incorrect.
Solution to CS Problem – Mutual Exclusion
1. Mutual Exclusion – If process Pi is executing
in its critical section, then no other processes
can be executing in their critical sections.
• Implications:
 Critical sections better be focused and short.
 Better not get into an infinite loop in there.

 If a process somehow halts/waits in its critical
section, it must not interfere with other
processes.
Solution to CS Problem – Progress
2. Progress – If no process is executing in its
critical section and there exist some
processes that wish to enter their critical
section, then the selection of the process that
will enter the critical section next cannot be
postponed indefinitely:
• If only one process wants to enter, it should be

able to.
• If two or more want to enter, one of them should
succeed.
Solution to CS Problem – Bounded Waiting
3. Bounded Waiting – A bound must exist on the
number of times that other processes are
allowed to enter their critical sections after a
process has made a request to enter its critical
section and before that request is granted.
• Assume that each process executes at a nonzero
speed.

• No assumption concerning relative speed of the n
processes.
Types of solutions to the CS problem

– Software approaches (Dekker’s,
Peterson’s and the Bakery Algorithm)
– Hardware support (test-and-set atomic
operation)
– OS solution (semaphores)
– PL (Programming Language) solution
(monitors)
– Distributed OS solution (message
passing)
Mutual Exclusion Assumptions
• Any facility that provides mutual exclusion should
meet these requirements:
1. No assumption regarding the relative speeds of the
processes.
2. A process is in its CS for a finite time only, i.e., each
process executes at non-zero speed.
3. Only one process allowed in the CS.
4. Process requesting access to CS should not wait
indefinitely.
5. A process waiting to enter CS cannot be blocking a
process in CS or any other processes.
Software Solutions: Algorithm 1

Process 0:
    ...
    while (turn != 0)
        ; // busy waiting
    <Critical Section>
    turn = 1;
    ...
    RS

Process 1:
    ...
    while (turn != 1)
        ; // busy waiting
    <Critical Section>
    turn = 0;
    ...
    RS

Problems: strict alternation, busy waiting.
Algorithm 2

Process 0:
    ...
    flag[0] = TRUE;
    while (flag[1]) ; // busy waiting
    <CRITICAL SECTION>
    flag[0] = FALSE;
    RS

Process 1:
    ...
    flag[1] = TRUE;
    while (flag[0]) ; // busy waiting
    <CRITICAL SECTION>
    flag[1] = FALSE;
    RS

PROBLEM: potential for deadlock (if both processes set their
flags before either tests the other’s, both spin forever).
Algorithm 3 – Dekker’s or Peterson’s Algorithm
• Combines the shared variables of algorithms 1 and 2.
• Initially flag[0] = flag[1] = FALSE, and turn = 0 or 1 (immaterial).
• Process Pi (the other process is Pj):
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ; // no-op
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);

• Solves the critical-section problem for two processes.
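Mutual exclusion for Peterson’s algorithm can be checked by brute force. The sketch below (an illustrative mini model-checker, not part of the original algorithm; the spin test is modeled as one atomic step, a simplification) enumerates every reachable state of the two processes and verifies that both are never at the critical-section location at once. Each process loops through 4 locations: 0 set flag, 1 set turn, 2 spin/test, 3 in CS.

```python
def step(state, i):
    """Successor state when process i takes one atomic step."""
    pc, flag, turn = list(state[0]), list(state[1]), state[2]
    j = 1 - i
    if pc[i] == 0:
        flag[i] = True; pc[i] = 1
    elif pc[i] == 1:
        turn = j; pc[i] = 2
    elif pc[i] == 2:
        if not (flag[j] and turn == j):
            pc[i] = 3                  # enter critical section
    elif pc[i] == 3:
        flag[i] = False; pc[i] = 0     # leave CS, loop back to entry
    return (tuple(pc), tuple(flag), turn)

initial = ((0, 0), (False, False), 0)
seen, frontier = {initial}, [initial]
mutual_exclusion_holds = True
while frontier:                        # explore every interleaving
    s = frontier.pop()
    if s[0] == (3, 3):                 # both at the CS location?
        mutual_exclusion_holds = False
    for i in (0, 1):
        t = step(s, i)
        if t not in seen:
            seen.add(t); frontier.append(t)

print(mutual_exclusion_holds)  # True
```

The state space is tiny (two program counters, two flags, one turn bit), so exhaustive search is feasible; no reachable state has both processes inside the critical section.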
Bakery Algorithm (1)
• Critical Section for n processes:
– Before entering its critical section, a process receives
a number (like in a bakery). Holder of the smallest
number enters the critical section.
– The numbering scheme here always generates
numbers in nondecreasing order;
e.g., 1,2,3,3,3,3,4,5...
– If processes Pi and Pj receive the same
number, if i < j, then Pi is served first; else Pj
is served first (PID assumed unique).
Bakery Algorithm (2)
• Choosing a number:
– max(a0, …, an-1) is a number k such that k ≥ ai for
i = 0, …, n – 1
• Notation for lexicographical order (ticket #, PID #)
–(a,b) < (c,d) if a < c or (if a == c and b < d)
• Shared data:
boolean choosing[n];

int number[n];
Data structures are initialized to FALSE and 0,
respectively.
Bakery Algorithm for Pi
do {
    choosing[i] = TRUE;
    number[i] = max(number[0], …, number[n – 1]) + 1;
    choosing[i] = FALSE;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ; // no-op
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i)))
            ; // no-op
    }
    /* critical section */
    number[i] = 0;
    /* remainder section */
} while (TRUE);
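The service order implied by the lexicographic comparison (ticket number, PID) can be sketched in a few lines. This is an illustration of the ordering rule only, not of the concurrent algorithm itself; the ticket values and the service_order helper are hypothetical:

```python
def service_order(tickets):
    """tickets: dict pid -> ticket number (0 = not competing).
    Lowest ticket is served first; ties broken by smaller PID."""
    competing = [(num, pid) for pid, num in tickets.items() if num != 0]
    return [pid for num, pid in sorted(competing)]

# P2 drew the smallest ticket; P0 and P1 tied on 3, so P0 (smaller PID)
# goes before P1; P3 holds ticket 0 and is not competing.
tickets = {0: 3, 1: 3, 2: 1, 3: 0}
print(service_order(tickets))  # [2, 0, 1]
```

This is exactly the (a, b) < (c, d) rule from the previous slide: compare ticket numbers first, and fall back to PIDs when the tickets are equal.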
