
Unit – II: Process Synchronization: The Critical Section Problem, Semaphores, And Classical Problems of

Synchronization, Critical Regions, Monitors, Synchronization examples.


Deadlocks: Principles of Deadlocks, System Model, Deadlocks Characterization, Methods for Handling Deadlocks,
Deadlock Prevention, Avoidance, Detection & Recovery from Deadlocks.
Introduction to Process Synchronization:

A co-operating process is one that can affect or be affected by other processes executing in the system. Co-operating processes may either directly share a logical address space or be allowed to share data only through files or messages. Coordinating this concurrent access to shared data is known as process synchronization.

In an O.S, there are two types of processes. They are;

1. Independent process
2. Co-operating process

A process which doesn't depend on any other process is called an independent process, whereas a process which depends on other processes is called a co-operating process.
Co-operating processes are implemented with the help of shared memory or message passing.
Shared memory (or message passing) can be illustrated by the producer-consumer algorithm, in which the producer & consumer share common variables.

The shared variables are declared as follows:
Var n;
Var buffer: array [0...n-1];
in, out: 0...n-1;

Producer algorithm:
Repeat
Produce a value into x
While (in+1)%n = out do no operation
buffer[in] = x
in = (in+1)%n
..................
Until false

Consumer algorithm:
Repeat
While in = out do no operation
x = buffer[out]
out = (out+1)%n
Consume the value in x
..................
Until false
 Buffer is a shared memory area for the producer and consumer.
 When the producer produces a value, the value is stored into the buffer and the variable 'in' is incremented by 1, that is in = (in+1)%n.
 When the consumer consumes a value from the buffer, the variable 'out' is incremented by 1, that is out = (out+1)%n.
 If in = out, the buffer is empty; if (in+1)%n = out, the buffer is full.
 In this algorithm, the buffer is treated as a circular queue.
 It follows FIFO (First In First Out) order.
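The circular-buffer bookkeeping described above can be sketched in Python (a minimal single-threaded sketch; the function names `produce` and `consume` are illustrative, not from the text):

```python
# Bounded buffer as a circular queue, as described above.
# One slot is kept empty so that "full" and "empty" are distinguishable.
n = 5                      # buffer capacity (n-1 usable slots)
buffer = [None] * n
in_ptr = 0                 # next free slot ('in' in the text)
out_ptr = 0                # next item to consume ('out' in the text)

def produce(x):
    """Store x in the buffer; return False if the buffer is full."""
    global in_ptr
    if (in_ptr + 1) % n == out_ptr:   # full condition from the text
        return False
    buffer[in_ptr] = x
    in_ptr = (in_ptr + 1) % n
    return True

def consume():
    """Remove and return the oldest item, or None if the buffer is empty."""
    global out_ptr
    if in_ptr == out_ptr:             # empty condition from the text
        return None
    x = buffer[out_ptr]
    out_ptr = (out_ptr + 1) % n
    return x

for v in [10, 20, 30]:
    produce(v)
print(consume(), consume())   # FIFO order: 10 20
```

Note the FIFO property: values come out in exactly the order they were produced.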
Co-operating processes provide various benefits for the execution of user applications. They are;
1. Modularity
2. Communication speed up
3. Information sharing
4. Convenience
1. MODULARITY: Modularity allows a process to be divided into separate blocks. The blocks are called modules.
2. COMMUNICATION SPEEDUP: If a particular task can run faster when broken into sub-tasks, the process is divided into sub-processes that execute in parallel.
3. INFORMATION SHARING: Several processes can access the same information simultaneously with the help of shared memory (or) message passing.
4. CONVENIENCE: A single user can perform several operations at the same time, such as printing, compiling, editing and so on.

INTER-PROCESS COMMUNICATION
Co-operating processes can communicate through shared memory. In inter-process
communication, the processes communicate with two functions: send & receive. There are
various ways to implement IPC. They are;
1. Direct/indirect communication
2. Symmetric/Asymmetric communication
3. Automatic/Explicit communication
4. Send by copy/send by reference
5. Fixed size/variable size communication
DIRECT COMMUNICATION:
In direct communication, each process must explicitly name the sender or receiver. In
this mechanism, the send & receive functions are represented as follows;
 Send (A, message)
 Send a message to the process A.
 Receive (B, message)
 Receives the message from the process B.
In this communication the link has following properties;
A link is associated with exactly two processes.

Between each pair of processes, there exists exactly one link.
The link may be unidirectional but it is usually bidirectional.
INDIRECT COMMUNICATION:
In this functionality, the messages are sent and received through a mailbox. A mailbox is an object with a unique identification into which messages can be placed; it is shared between the processes.
 Send (A, message)
 The message is sent to mailbox A.
 Receive (A, message)
 Receive the message from mailbox A.
In this communication, the link has following properties;
A link is established only if the processes have a shared mailbox.
A link may be either unidirectional (or) bidirectional.
SYMMETRIC/ASYMMETRIC COMMUNICATION:
In symmetric communication, both parties are named: the sender knows the receiver's name & the receiver knows the sender's name, i.e;
 Send (p, message)
 Receive (q, message)
In asymmetric communication, only the sender names the recipient; the receiver doesn't require the name of the sender, i.e;
 Send (p, message)
 Receive (ID, message)
BUFFERING:
It is used to represent the number of messages that can be stored temporarily in a link. There are 3 ways.
1. Zero capacity: in this functionality, the queue length is zero i.e; the link cannot hold any waiting messages.
2. Bounded capacity: in this functionality, the queue has finite length n i.e; the link can hold at most n messages.
3. Unbounded capacity: in this functionality, the queue has infinite length i.e; the link can hold any number of messages.
The producer and consumer implemented through inter-process communication (IPC) are represented as
follows;
PRODUCER:
Repeat
Produce a message
........................
Send (consumer, message)
....................
until false

CONSUMER:
Repeat
.......................
Receive (producer, message)
Consume the message
......................
until false
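The message-passing producer and consumer above can be sketched in Python, where a queue plays the role of the link/mailbox (a sketch; the names and the sentinel value are illustrative, not from the text):

```python
import queue
import threading

# The Queue plays the role of the link/mailbox: put() is "send"
# and get() is "receive".
mailbox = queue.Queue()
received = []

def producer():
    for i in range(3):
        mailbox.put(f"message {i}")   # Send(consumer, message)
    mailbox.put(None)                 # sentinel: no more messages

def consumer():
    while True:
        msg = mailbox.get()           # Receive(producer, message)
        if msg is None:
            break
        received.append(msg)          # consume the message

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)   # ['message 0', 'message 1', 'message 2']
```

The queue here is a bounded-capacity link in the terminology above; `queue.Queue(maxsize=n)` would make the capacity n explicit.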
CRITICAL SECTION PROBLEM:
In the O.S there are many processes. Each process has a segment of code, called the critical section, in which it accesses shared data. In general, the critical section data belongs to a database (or) file.
A solution to the critical section problem must satisfy three requirements. They are
Mutual exclusion
Progress
Bounded wait
The general form of critical section is represented as follows
Repeat
Entry section
Critical section
Exit section
Remainder section
Until false
Of these sections, the entry section and exit section are the parts that the synchronization mechanism must design.
Mutual Exclusion: - If one process is executing in its critical section and another process requests to enter its critical section, the O.S does not allow it; at most one process may be in the critical section at a time.
Bounded wait: - There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has requested entry and before that request is granted.
Progress: - If no process is executing in its critical section and some processes wish to enter their critical sections, the selection of the next process to enter cannot be postponed indefinitely.
For example, the critical section for a two-process algorithm (processes Pi and Pj) is implemented as follows
Repeat
While turn ≠ i
Do no operation
Critical section
turn = j
Remainder section
Until false
In this algorithm, turn is a shared variable whose value is either 'i' or 'j'. If turn = i, then process Pi is allowed to enter the critical section, and on exit it sets turn = j to hand the turn to the other process. This algorithm satisfies mutual exclusion but not progress, since it forces the processes to alternate strictly.
Algorithm: - In this algorithm, each process has a boolean flag; flag[i] = true indicates that process Pi wants to enter its critical section.
Repeat
Flag [i] = true
While flag [j] = true
Do no operation
Critical section
Flag [i] = false
Remainder section
Until false
This algorithm satisfies mutual exclusion, but it does not satisfy the progress property: if both processes set their flags at the same time, each waits for the other indefinitely.
Algorithm (Peterson's algorithm): - In this algorithm, we declare 2 shared variables, flag and turn. flag contains flag [0], flag [1] and the value of turn is 0 or 1.

Repeat
Flag [i] = true
Turn = j
While (flag [j] = true and Turn = j)
Do no operation
Critical section
Flag [i] = false
Remainder section
Until false
This algorithm satisfies all three properties: mutual exclusion, progress, and bounded wait.
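The two-process algorithm above can be sketched with Python threads (a sketch; the iteration counts and the switch-interval tuning are illustrative, and the busy-wait relies on CPython's sequentially consistent thread behaviour):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits hand over quickly

# Peterson's algorithm for two threads (ids 0 and 1).
# flag[i] means "thread i wants to enter"; turn breaks ties.
flag = [False, False]
turn = 0
counter = 0                  # shared data protected by the algorithm

def worker(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        # entry section
        flag[i] = True
        turn = j
        while flag[j] and turn == j:
            pass             # do no operation (busy wait)
        # critical section
        counter += 1
        # exit section
        flag[i] = False

t0 = threading.Thread(target=worker, args=(0, 200))
t1 = threading.Thread(target=worker, args=(1, 200))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)   # 400: no increments were lost
```

Without the entry/exit sections, concurrent `counter += 1` operations could interleave and lose updates; the algorithm prevents that.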
Multiple Process algorithm (OR) Bakery algorithm.
In this algorithm, the following notation is used
1. (a, b) < (c, d) if a < c, (or) if a = c and b < d
2. max (a0, a1, ..., an-1) is a number k such that k >= ai for i = 0, ..., n-1
Repeat
Choosing [i] = true
No [i] = max [no [0], no [1], ..., no [n-1]] + 1
Choosing [i] = false
For j = 0 to n-1
Begin
While (choosing [j] = true)
Do no operation
While (no [j] ≠ 0) and ((no [j], j) < (no [i], i))
Do no operation
End
Critical section
No [i] = 0
Remainder section
Until false
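The pair-ordering rule (no[j], j) < (no[i], i) that decides who enters first can be checked directly (a tiny Python sketch; the function name `before` is illustrative):

```python
# Lexicographic order on (ticket number, process id), as used by the
# bakery algorithm to decide which process enters the critical section first.
def before(a, b, c, d):
    """True if (a, b) < (c, d): lower ticket wins; process ids break ties."""
    return a < c or (a == c and b < d)

# Equal tickets: the process with the smaller id goes first.
print(before(5, 1, 5, 2))   # True
print(before(5, 2, 5, 1))   # False
# Different tickets: the smaller ticket goes first regardless of id.
print(before(3, 9, 5, 0))   # True
```

Tie-breaking on the process id is what makes the order total, so two processes can never both believe they are first.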
SYNCHRONIZATION HARDWARE:
A uniprocessor system is a computer system with a single central processing unit. The critical section problem
could be solved simply in a uniprocessor environment if we could prevent interrupts from occurring while a shared
variable is being modified. But this solution is not feasible in a multiprocessor environment. Many systems
therefore provide special hardware instructions that allow us either to test and modify the content of a word or to
swap the contents of two words atomically (i.e., as one uninterruptible unit), and these can be used effectively
in solving the critical section problem.
If two TestAndSet instructions are executed simultaneously (each on a different CPU), they will be executed
sequentially in some arbitrary order.
boolean TestAndSet(boolean &target)
{
boolean rv = target;
target = true;
return rv;
}
Mutual Exclusion with Test-and-Set:
Shared data:
boolean lock = false;
Process Pi
do {
while (TestAndSet(lock))
; /* do no operation */
/* critical section */
lock = false;
/* remainder section */
} while (1);
The Swap instruction operates on the contents of two words; like the TestAndSet instruction, it is executed
atomically.
void Swap(boolean &a, boolean &b)

{
boolean temp = a;
a = b;
b = temp;
}
Mutual Exclusion with Swap:
Shared data (initialized to false):
boolean lock;
Each process Pi also has a local variable:
boolean key;
Process Pi
do
{
key = true;
while (key == true)
Swap(lock, key);
/* critical section */
lock = false;
/* remainder section */
} while (1);
SEMAPHORES:
a. 1. At is one of the process synchronization tools
2. It performs 2 atomic operations. They are
*wait
*signal
3. The “wait” is indicated by ‘p’ and signal is indicated by ‘v’.
4. To apply semaphore on wait and signal are represented as follows
Wait(s)
While( s<= 0)
do no operation
s= s-1
Signal (s)
s= s+1
Where, s is semaphore variable.
The general form of wait and signal on critical section is as follows
Repeat
Wait (s)
Critical section
Signal (s)
Remainder section
Until false
The semaphore solution satisfies the mutual exclusion, bounded wait and progress properties. If too many
processes request entry into the critical section, the O.S maintains a queue of processes blocked on the semaphore.
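The general wait/signal pattern on a critical section can be sketched with Python's built-in semaphore (a sketch; `acquire()` plays the role of wait(s) and `release()` of signal(s), and the thread/iteration counts are illustrative):

```python
import threading

# A binary semaphore (initialized to 1) guarding a critical section.
s = threading.Semaphore(1)
shared = 0

def worker(iterations):
    global shared
    for _ in range(iterations):
        s.acquire()          # wait(s): entry section
        shared += 1          # critical section
        s.release()          # signal(s): exit section

threads = [threading.Thread(target=worker, args=(5000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)   # 20000: every increment happened under mutual exclusion
```

Unlike the busy-waiting definition above, `threading.Semaphore` blocks waiting threads in a queue, which is exactly the queue-based behaviour the text describes.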
Classical problems of process synchronization:
Producer-consumer (bounded buffer) using semaphores
We assume the pool consists of 'n' buffer slots. The mutex (mutual exclusion) semaphore is
initialized to '1'. Two counting semaphores, empty and full, count the number of empty and full buffer slots:
empty is initialized to 'n' and full is initialized to '0'.
Producer Algorithm:-
Repeat
Produce an item into nextp
----- -----
Wait (empty)
Wait (mutex)
----- -----
Add the item into the buffer (pool)
----- -----
Signal (mutex)
Signal (full)
Until false

Consumer Algorithm:-
Repeat
Wait (full)
Wait (mutex)
-----
Remove an item from the buffer
----- -----
Signal (mutex)
Signal (empty)
Consume the item
Until false
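The semaphore-based producer and consumer can be sketched in Python with the same three semaphores (a sketch; the buffer size and item count are illustrative):

```python
import threading, collections

n = 4
buffer = collections.deque()
mutex = threading.Semaphore(1)   # protects the buffer, initialized to 1
empty = threading.Semaphore(n)   # counts empty slots, initialized to n
full  = threading.Semaphore(0)   # counts full slots, initialized to 0
consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add the item into the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        item = buffer.popleft()  # remove an item from the buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)
        consumed.append(item)    # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # [0, 1, 2, ..., 9] in FIFO order
```

Note the ordering inside each process: empty/full is acquired before mutex. Reversing that order (holding mutex while waiting on empty or full) can deadlock.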
Reader & writers problem:-
In this problem, the readers read from a database (or) file. While any reader is reading, the O.S does
not allow writing.
While a writer is writing, the O.S allows neither readers nor other writers.
Reader's algorithm:-
Repeat
If a writer is writing then wait
{Read}
Wake up one writer, if it is waiting
Until false
Writer's Algorithm:-
Repeat
If readers are reading (or)
a writer is writing then wait
{Write}
Wake up one writer (or) 'n' readers, if they are waiting
Until false
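The standard semaphore solution to this problem (the first readers-writers solution, with a readcount variable, a common textbook refinement of the sketch above) can be written in Python as:

```python
import threading

# readcount tracks active readers; mutex protects readcount;
# wrt gives writers (and the first reader) exclusive access.
mutex = threading.Semaphore(1)
wrt = threading.Semaphore(1)
readcount = 0
log = []

def reader(name):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()        # first reader locks out writers
    mutex.release()
    log.append(f"{name} reading")   # reading happens here
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()        # last reader lets writers back in
    mutex.release()

def writer(name):
    wrt.acquire()            # exclusive access: no readers, no writers
    log.append(f"{name} writing")   # writing happens here
    wrt.release()

threads = [threading.Thread(target=reader, args=("R1",)),
           threading.Thread(target=writer, args=("W1",)),
           threading.Thread(target=reader, args=("R2",))]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))
```

Because readers only take wrt once (for the whole group), many readers can read simultaneously, but a writer always has the database to itself.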
Dining philosopher:-
Consider five philosophers who spend their lives thinking and eating. The philosophers share a
common circular table surrounded by 5 chairs, each belonging to one philosopher. In the centre of the table there
is a bowl of noodles, and the table is laid with five single chopsticks.
A philosopher may pick up only one chopstick at a time. If both neighbouring chopsticks are available, the
philosopher picks up both, eats, and after finishing eating puts down both chopsticks and resumes thinking.

Algorithm:-
Repeat
If both chopsticks are available then pick up both chopsticks
{Eat}
Put down both chopsticks
{Think}
Until false
This can be implemented with the help of wait and signal operations as follows
Repeat
Wait (chopstick [i])
Wait (chopstick [(i+1)%5])
---------------
Eat
----------
Signal (chopstick [i])
Signal (chopstick [(i+1)%5])
----------
Think
----------
Until false
Note that if all five philosophers pick up their left chopsticks at the same time, this simple solution can deadlock.
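One standard deadlock-free refinement (an assumption here, not stated in the text) is to break the circular wait by always acquiring the lower-numbered chopstick first. A Python sketch:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Deadlock avoidance: always pick up the lower-numbered chopstick
    # first, so a circular wait among the philosophers cannot form.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        chopstick[first].acquire()   # wait(chopstick[first])
        chopstick[second].acquire()  # wait(chopstick[second])
        meals[i] += 1                # eat
        chopstick[second].release()  # signal(chopstick[second])
        chopstick[first].release()   # signal(chopstick[first])
        # think

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [50, 50, 50, 50, 50]: everyone ate, no deadlock
```

With naive left-then-right acquisition this program could hang forever; the total ordering on chopsticks guarantees it terminates.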
Critical Regions (CR) is a synchronization concept in Operating Systems that ensures exclusive access to shared
resources.
Definition:A Critical Region is a section of code that accesses shared resources and must be executed
atomically, without interruption from other processes.
Characteristics:
1. Mutual Exclusion: Only one process can execute the critical region at a time.
2. Atomicity: The critical region is executed as a single, indivisible unit.
3. No Preemption: The operating system does not preempt the process executing the critical region.
Types of Critical Regions:
1. Binary Semaphore Critical Region
- Uses a binary semaphore (0 or 1) to protect the critical region.
- Only one process can acquire the semaphore and enter the critical region.
2. Mutex Lock Critical Region
- Uses a mutex (mutual exclusion) lock to protect the critical region.
- Only one process can acquire the lock and enter the critical region.
3. Monitor Critical Region
- Uses a monitor (a high-level synchronization construct) to protect the critical region.
- Monitors provide both mutual exclusion and condition variables.

Critical Region Structure:


1. Entry Section: Acquires exclusive access to the shared resource.
2. Critical Section: Executes the critical region code.
3. Exit Section: Releases exclusive access to the shared resource.
Implementation Techniques:
1. Disabling Interrupts

2. Using Semaphore or Mutex Locks
3. Implementing Monitors
4. Using Atomic Instructions
Examples:
1. Bank Account Transaction
2. File System Access
3. Printer Spooling
4. Network Packet Transmission
Advantages:
1. Prevents Data Inconsistency
2. Ensures Atomicity
3. Prevents Race Conditions
4. Improves System Reliability
Challenges:
1. Starvation (priority inversion)
2. Deadlocks
3. Livelocks
4. Performance Overhead

MONITORS
 Although semaphores provide a convenient and effective mechanism for process synchronization , using
them incorrectly can result in timing errors that are difficult to detect.
 To illustrate how, we review the semaphore solution to the critical section problem.
 All the process share a semaphore variable mutex, which is initialized to 1.
 Each process must execute wait(mutex) before entering the critical section and signal(mutex) afterward.
 If this sequence is not observed, two processes may be in their critical sections simultaneously.
 Suppose that a process interchanges the order in which the wait() and signal() operations on the
semaphore mutex are executed, resulting in the following execution :
Signal(mutex);
……….
Critical section
……….
Wait(mutex);

 In this situation several processes may be executing in their critical sections simultaneously.
 Suppose that a process replaces signal(mutex) with wait(mutex); that is, it executes :
Wait(mutex);
………..
Critical section
………..
Wait(mutex);
 In this case a deadlock will occur.
 To deal with such errors, researchers have developed a high-level language construct called the monitor type.
Monitor usage
 A monitor type is an ADT ( abstract data type ) that includes a set of programmer defined operations
that are provided with mutual exclusion within the monitor.
 The monitor type also declares the variable whose values define the state of an instance of that type,
along with bodies of functions that operate on those variables.
 The syntax of a monitor type is shown below
Monitor monitor name
{/*shared variable declarations*/
Function P1 (………..)
{ …………….
}
Function P2 (………..)
{ …………..
}
…..
Function Pn (……….)
{ …………..
}
Initialization_code(………..)
{ …………….
}
}
 A function defined within a monitor can access only those variables declared locally within the monitor
and its formal parameters.
 However, the monitor construct , as defined so far, is not sufficiently powerful for modeling some
synchronization schemes.
 For this purpose, we need to define additional synchronization mechanism.
 These mechanisms are provided by the condition construct.
 A programmer can define one or more variables of type condition :
condition x, y;
 The only operations that can be invoked on a condition variable are wait() and signal():
x.wait();
x.signal();

 The x.wait() operation suspends the invoking process until another process invokes x.signal(). The x.signal() operation resumes exactly one suspended process.

 If no process is suspended, then the signal() operation has no effect; that is, the state of x is the same as
if the operation had never been executed.
 Now suppose that, when the x.signal() operation is invoked by a process P, there exists a suspended
process Q associated with the condition x.
 If Q is allowed to resume its execution, P must wait; otherwise both would be active in the monitor at the
same time. Note, however, that conceptually both processes can continue with their execution.
 Two possibilities exist:
1. Signal and Wait : P either waits until Q leaves the monitor or waits for another condition.
2. Signal and Continue: Q either waits until P leaves the monitor or waits for another condition.
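A monitor-style bounded buffer can be sketched in Python, where a Lock supplies the monitor's implicit mutual exclusion and Condition objects play the role of the condition construct (a sketch; Python's conditions use signal-and-continue semantics, and the class and method names are illustrative):

```python
import threading, collections

class BoundedBufferMonitor:
    """Monitor-style bounded buffer: the lock gives mutual exclusion,
    the conditions give wait()/signal() (signal-and-continue style)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = collections.deque()
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)   # condition x
        self.not_empty = threading.Condition(self.lock)  # condition y

    def insert(self, item):
        with self.lock:                       # enter the monitor
            while len(self.items) == self.capacity:
                self.not_full.wait()          # x.wait()
            self.items.append(item)
            self.not_empty.notify()           # y.signal()

    def remove(self):
        with self.lock:                       # enter the monitor
            while not self.items:
                self.not_empty.wait()         # y.wait()
            item = self.items.popleft()
            self.not_full.notify()            # x.signal()
            return item

buf = BoundedBufferMonitor(2)
out = []
c = threading.Thread(target=lambda: [out.append(buf.remove()) for _ in range(5)])
c.start()
for i in range(5):
    buf.insert(i)
c.join()
print(out)   # [0, 1, 2, 3, 4]
```

Because of signal-and-continue semantics, a resumed waiter must re-check its condition, which is why the waits are inside while loops rather than if statements.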

Schematic view of monitor:

Synchronization Examples
Solaris
Windows XP
Linux
Pthreads
Solaris Synchronization
Implements a variety of locks to support multitasking, multithreading (including real-time threads), and
multiprocessing.
Uses adaptive mutexes for efficiency when protecting data from short code segments.
Uses condition variables and readers-writers locks when longer sections of code need access to data.
Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or a
reader-writer lock.
Windows XP Synchronization
Uses interrupt masks to protect access to global resources on uniprocessor systems
Uses spinlocks on multiprocessor systems
Also provides dispatcher objects, which may act as either mutexes or semaphores
Dispatcher objects may also provide events
An event acts much like a condition variable
Linux Synchronization
Linux: prior to kernel version 2.6, disables interrupts to implement short critical sections
Version 2.6 and later is fully preemptive
Linux provides: semaphores and spin locks
