
PROCESS

SYNCHRONIZATION
Introduction
Interprocess Communication
Critical Section
Process Hierarchy:
In a computer system, many processes run at the same time, and some
processes need to create other processes during their execution. When
a process creates another process, the parent and the child process
become associated with each other, and the child process can in turn
create further processes if required. This parent-child structure of
processes forms a hierarchy, called the Process Hierarchy.

Two-State Process Model

A Two-State Process Model categorizes a process into two states:
• Running
• Not Running
Precedence Graph
A precedence graph is a directed acyclic graph which is used to show
the execution order of several processes in an operating system. It
consists of nodes and edges: nodes represent the processes and edges
represent the flow of execution.

• It is a directed graph.
• It is an acyclic graph.
• Nodes of the graph correspond to individual statements of program
code.
• An edge between two nodes represents the execution order.
• A directed edge from node A to node B shows that statement A
executes first and then statement B executes.
Problem :
S2 and S3 can be executed after S1 completes.
S4 can be executed after S2 completes.
S5 and S6 can be executed after S4 completes.
S7 can be executed after S5, S6 and S3 complete
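The constraints above can be sketched with threads and semaphores, one semaphore per precedence edge (a hypothetical illustration; the statements S1..S7 are stand-ins that simply record their own names when they run):

```python
import threading

order = []                # records the completion order of the statements
order_lock = threading.Lock()

# One semaphore per precedence edge, all initially 0 (edge "12" = S1 -> S2).
edges = {e: threading.Semaphore(0) for e in
         ["12", "13", "24", "45", "46", "37", "57", "67"]}

def run(name, wait_on, release):
    for e in wait_on:              # wait until every predecessor finishes
        edges[e].acquire()
    with order_lock:
        order.append(name)         # "execute" the statement
    for e in release:              # allow the successors to start
        edges[e].release()

specs = [("S1", [], ["12", "13"]),
         ("S2", ["12"], ["24"]),
         ("S3", ["13"], ["37"]),
         ("S4", ["24"], ["45", "46"]),
         ("S5", ["45"], ["57"]),
         ("S6", ["46"], ["67"]),
         ("S7", ["37", "57", "67"], [])]

threads = [threading.Thread(target=run, args=s) for s in specs]
for t in threads: t.start()
for t in threads: t.join()
print(order)   # S1 is always first and S7 always last
```

Whatever interleaving the scheduler picks, every edge of the precedence graph is respected.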
Graph: (precedence graph omitted; its edges are S1→S2, S1→S3, S2→S4,
S4→S5, S4→S6, and S3, S5, S6→S7)

The slide also sketched an interleaving example: with x = 6 and y = 9,
process P1 computes A = x + y = 15 and assigns x = A, while process P2
computes B = x − y = −3 and assigns x = B. Depending on which
assignment runs last, the value printed for x is 15 or −3.
Types of Processes
1. Independent Processes:
Two processes are said to be independent if the execution of one
process does not affect the execution of the other.
2. Cooperative Processes:
Two processes are said to be cooperative if the execution of one
process affects the execution of the other. Such processes need to be
synchronized so that their order of execution can be guaranteed.
Process Synchronization
It is the task of coordinating the execution of processes so that no
two processes can access the same shared data and resources at the
same time.
It is especially needed in a multi-process system when multiple
processes are running together and more than one process tries to
gain access to the same shared resource or data at the same time.
This can lead to inconsistency of shared data: a change made by one
process is not necessarily reflected when other processes access the
same shared data. To avoid this kind of inconsistency, the processes
need to be synchronized with each other.
Introduction:
When two or more processes cooperate with each other, their order of
execution must be preserved; otherwise there can be conflicts in their
execution and inappropriate outputs can be produced.
A cooperative process is one which can affect the execution of other
processes or can be affected by the execution of other processes.
Such processes need to be synchronized so that their order of
execution can be guaranteed.
The procedure involved in preserving the appropriate order of
execution of cooperative processes is known as Process
Synchronization. There are various synchronization mechanisms that
are used to synchronize processes.
Race Condition
A race condition typically occurs when two or more threads read,
write, and possibly make decisions based on memory that they are
accessing concurrently, so the result depends on the order in which
the accesses happen.
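A minimal sketch of a lost update in Python: two threads each read a shared counter, pause, then write back. The barrier is an artificial device, added so the bad interleaving happens deterministically rather than occasionally:

```python
import threading

counter = 0
barrier = threading.Barrier(2)   # forces both reads to happen before either write

def unsafe_increment():
    global counter
    tmp = counter        # read the shared value
    barrier.wait()       # both threads have now read, neither has written
    counter = tmp + 1    # write back: the later write clobbers the earlier one

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 1, not 2: one increment was lost to the race
```

Without synchronization, two increments produce a final value of 1 instead of 2.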
Critical Section
The regions of a program that access shared resources and may cause
race conditions are called critical sections. To avoid race conditions
among the processes, we need to ensure that only one process at a
time can execute within the critical section.
Parts of Critical Section:
Entry Section: The part of the process which decides the entry of a
particular process.
Critical Section: This part allows one process to enter and modify
the shared variable.
Exit Section: The exit section allows the other processes that are
waiting in the entry section to enter the critical section. It also
ensures that a process that has finished its execution is removed
through this section.
Remainder Section: All other parts of the code, which are not in the
critical, entry, and exit sections, are known as the remainder
section.
Entry Section        (1) decides the entry of a process (e.g. P2)
Critical Section     e.g. x = 1; a = x++; x = a; sleep(1); print(x, a)
Exit Section         allows the process to exit; a waiting process may now enter
Remainder Section    non-critical section code
A critical section is a segment of code which can be accessed by a
single process at a specific point of time. The section consists of
shared data and resources that are required to be accessed by other
processes.

The entry to the critical section is handled by the wait() function,
and it is represented as P().
The exit from a critical section is controlled by the signal()
function, represented as V().
In the critical section, only a single process can execute. Other
processes, waiting to execute their critical sections, need to wait
until the current process completes its execution.
Requirements of Synchronization mechanisms/ Rules of
Critical Section:

Primary
Mutual Exclusion
Our solution must provide mutual exclusion. By mutual exclusion, we
mean that if one process is executing inside the critical section,
then no other process may enter the critical section.
Progress
Progress means that if one process does not need to execute in the
critical section, it should not stop other processes from getting
into the critical section.

Secondary
Bounded Waiting
There must be a bound on the waiting time for every process to get
into the critical section. A process must not wait endlessly to enter
the critical section.
No Assumption
Our mechanism must be architecture neutral. The solution should be
universal and should not depend on any particular hardware or
platform requirement.
(Layered system view: User → Application software / System software →
Operating System → Hardware: CPU, RAM, I/O)

Critical Section:
To resolve the problem of the CS we have 2 different methods:
1. Lock Variable
2. Test_and_Set

Lock Variable:
The simplest synchronization method for processes. Key points of the
lock-variable method:
a. Executes in user mode
b. Multi-process solution
c. ME (mutual exclusion) is not guaranteed

Entry code:
1. while (lock == 1);    // busy wait
2. lock = 1;             // acquire lock

3. Critical Section

Exit code:
4. lock = 0;             // release lock
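Why a plain lock variable fails to guarantee mutual exclusion can be shown without threads, by walking through the worst-case interleaving by hand. This is a hypothetical step-by-step trace, not real concurrent execution: both processes pass the while test before either one sets the lock.

```python
lock = 0       # 0 = free, 1 = taken
in_cs = []     # which processes are inside the critical section

# Interleaving: both processes execute the entry test (step 1)
# before either executes the assignment (step 2).

# P1 step 1: while (lock == 1) -> lock is 0, so P1 falls through
assert lock == 0
# P2 step 1: while (lock == 1) -> lock is STILL 0, so P2 also falls through
assert lock == 0

# P1 step 2: lock = 1, and P1 enters the critical section
lock = 1
in_cs.append("P1")
# P2 step 2: lock = 1 again, and P2 ALSO enters the critical section
lock = 1
in_cs.append("P2")

print(in_cs)   # ['P1', 'P2'] -> two processes inside the CS at once
```

The root cause is that the test (step 1) and the set (step 2) are two separate operations, and a context switch can fall between them.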
Test_and_Set (lock is initially false):

1. while (test_and_set(&lock));   // entry code
2. Critical Section
3. lock = false;                  // exit code

boolean test_and_set(boolean *target)
{
    boolean r = *target;   // save the old value
    *target = TRUE;        // set the location to TRUE
    return r;
}

(& = address of the variable, * = value at that address)
Test_and_Set Function:
The main problem of the lock variable method is that it gives no
guarantee of ME. Test and Set Lock (TSL) is a synchronization
mechanism that uses a test-and-set instruction to provide
synchronization among processes executing concurrently.

Test-and-Set Instruction

• It is an instruction that returns the old value of a memory
location and sets the memory location's value to 1 (TRUE) as a single
atomic operation.
• If one process is currently executing a test-and-set, no other
process is allowed to begin another test-and-set until the first
process's test-and-set is finished.
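Python has no hardware test-and-set, but its effect can be emulated with a small helper class whose atomicity is borrowed from threading.Lock (an assumption of this sketch, standing in for the hardware guarantee). The helper is then used to build a spinlock protecting a shared counter:

```python
import threading

class TestAndSet:
    """Emulates an atomic test_and_set(&target) instruction."""
    def __init__(self):
        self._guard = threading.Lock()   # stands in for hardware atomicity
        self.value = False

    def test_and_set(self):
        with self._guard:
            old, self.value = self.value, True   # read old value and set TRUE
            return old                           # as one indivisible step

lock = TestAndSet()
counter = 0
N = 5_000

def worker():
    global counter
    for _ in range(N):
        while lock.test_and_set():   # entry: spin until the old value is False
            pass
        counter += 1                 # critical section
        lock.value = False           # exit: lock = false

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 10000: no increments lost
```

Because the read-old-value and set-TRUE happen indivisibly, only one thread at a time sees False and enters, so mutual exclusion holds.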
Producer–Consumer Problem:
Here two processes work in cooperative mode: one process produces
items and stores them in a buffer, and the second process consumes
items from the buffer.
Problem:
• If the buffer is full, the producer has to wait for an empty slot
in the buffer.
• If the buffer is empty, the consumer has to wait for an item in the
buffer.
We can use Peterson's algorithm for this problem:

Strict alternation (turn variable):

P0:                        P1:
while (turn == 1);         while (turn == 0);
CS                         CS
turn = 1;                  turn = 0;
Remainder;                 Remainder;

Here we can enforce ME, but progress is not ensured: P0 and P1 run in
strict alternation, i.e. P0 hands the turn variable back to P1 after
its execution.
Flag variables:

P0:                        P1:
flag[0] = T;               flag[1] = T;
while (flag[1]);           while (flag[0]);
CS                         CS
flag[0] = F;               flag[1] = F;

Here, deadlock can occur if flag[0] = flag[1] = T; so to resolve this
problem we combine both formats of code, which gives Peterson's
algorithm.
P0:                          P1:
while (1)                    while (1)
{                            {
    flag[0] = T;                 flag[1] = T;
    turn = 1;                    turn = 0;
    while (turn == 1 &&          while (turn == 0 &&
           flag[1] == T);               flag[0] == T);
    CS                           CS
    flag[0] = F;                 flag[1] = F;
}                            }
Peterson’s Algorithm:
Peterson’s Algorithm is used to synchronize two processes. It uses
two variables, a bool array flag of size 2 (i.e. 0 & 1) and an int
variable turn, to accomplish this.
In the solution, “0” represents the Consumer and “1” represents the
Producer. Initially the flags are false. When a process wants to
execute its critical section, it sets its flag to true and sets turn
to the index of the other process. This means that the process wants
to execute but it will allow the other process to run first. The
process performs busy waiting until the other process has finished
its own critical section.
After this the current process enters its critical section and adds
or removes a random number from the shared buffer. After completing
the critical section, it sets its own flag to false, indicating that
it does not wish to execute anymore.
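The algorithm above can be run as a sketch with two Python threads incrementing a shared counter in their critical sections. It relies on CPython's interpreter serializing each individual read and write (an assumption of this illustration; on real hardware with weaker memory ordering, extra fences would be needed):

```python
import threading

flag = [False, False]   # flag[i] is True while process i wants the CS
turn = 0                # index of the process being given priority
counter = 0             # shared resource protected by the algorithm
N = 5_000

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True                        # declare intent to enter
        turn = other                          # let the other go first
        while flag[other] and turn == other:
            pass                              # busy wait (entry section)
        counter += 1                          # critical section
        flag[i] = False                       # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 10000: every increment survived
```

If both threads want in at once, whoever wrote turn last waits, so exactly one proceeds; this is where mutual exclusion, progress, and bounded waiting all come from.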
Semaphores are integer variables that are used to solve the critical
section problem by means of two atomic operations, wait and signal,
that are used for process synchronization.
The definitions of wait and signal are as follows:
Wait():
The wait operation decrements the value of its argument S if it is
positive. If S is zero or negative, the process waits until S becomes
positive before decrementing. This forms the entry section.
Signal():
The signal operation increments the value of its argument S. This
forms the exit section.
Counting Semaphores
These are integer-valued semaphores with an unrestricted value
domain. They are used to coordinate resource access, where the
semaphore count is the number of available resources. If resources
are added, the semaphore count is automatically incremented, and if
resources are removed, the count is decremented.
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation only succeeds when the
semaphore is 1, and the signal operation succeeds when the semaphore
is 0. Binary semaphores are sometimes easier to implement than
counting semaphores.
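The producer–consumer problem above maps directly onto two counting semaphores plus one binary semaphore. A sketch using Python's threading.Semaphore (the buffer size 3 and item count 10 are arbitrary choices for the illustration):

```python
import threading
from collections import deque

SIZE, N = 3, 10
buffer = deque()
empty = threading.Semaphore(SIZE)   # counting: number of empty slots
full = threading.Semaphore(0)       # counting: number of filled slots
mutex = threading.Semaphore(1)      # binary: guards the buffer itself
consumed = []

def producer():
    for item in range(N):
        empty.acquire()             # wait(empty): need an empty slot
        with mutex:
            buffer.append(item)
        full.release()              # signal(full): one more item available

def consumer():
    for _ in range(N):
        full.acquire()              # wait(full): need an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()             # signal(empty): one more free slot

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed)   # [0, 1, ..., 9]: all items arrive, in order
```

The producer blocks on empty when the buffer holds 3 items, and the consumer blocks on full when it is empty, which is exactly the waiting behaviour the problem statement requires.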
Binary semaphore (s is initially 1; blocked processes go to a
suspended list):

wait(s)                          signal(s)
{                                {
    if (s == 1)                      if (suspended list is empty)
        s = 0;   // enter CS             s = 1;
    else                             else
        block the process                select a process from the
        [sleep()];                       suspended queue [wakeup()];
}                                }

Counting semaphore:

wait(S)                          signal(S)
{                                {
    S = S - 1;                       S = S + 1;
    if (S < 0)                       if (S <= 0)
        block the process                select a suspended process
        [sleep()];                       and move it to the ready
    else                                 queue [wakeup()];
        return;  // enter CS         else
}                                        return;
                                 }
Example: if S = 17 initially and the operations wait() ×5, signal()
×3, wait() ×1 are performed, the final value is 17 − 5 + 3 − 1 = 14.
Inter-process communication (IPC):
It is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could
involve a process letting another process know that some event has
occurred, or the transferring of data from one process to another.
Techniques of Communication:
Global Memory Sharing: processes communicate through a shared region
of memory.
Simplex: data flows in one direction only.
Duplex/Full Duplex: data flows in both directions simultaneously.
Half Duplex: data flows in both directions, but in only one direction
at a time.
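As a concrete illustration of an OS-provided IPC channel, os.pipe() creates a simplex pipe: whatever is written to the write end can be read from the read end. This sketch runs the two ends in threads for brevity; between real processes the same file descriptors would be inherited across fork():

```python
import os
import threading

# os.pipe() returns a simplex channel: a read end and a write end.
r, w = os.pipe()

def sender():
    os.write(w, b"some event occurred")   # notify the other side
    os.close(w)

t = threading.Thread(target=sender)
t.start()
msg = os.read(r, 1024).decode()           # blocks until data arrives
t.join()
os.close(r)
print(msg)   # some event occurred
```

Because a pipe only carries data one way, two pipes (one per direction) are the usual way to build a full-duplex channel out of this primitive.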
Pointer example (suppose a is stored at address 101):

int a = 5;
int *p = &a;     // & = address of the variable
print a;         // 5
print p;         // 101
print *p;        // 5   (* = value at that address)
print &a;        // 101

Output: 5, 101, 5, 101
Suppose we replace the wait() and signal() operations of a monitor
with a single construct await().
Write a monitor using t
