9-Operating Systems -Synchronization, interprocess communication, deadlock(1)

The document discusses Inter Process Communication (IPC) methods such as shared memory, message passing, and signals, highlighting their advantages and limitations. It also covers synchronization issues, race conditions, and various solutions for achieving mutual exclusion in concurrent processes, including Peterson's solution and the Bakery algorithm. Additionally, it addresses the implementation of locking mechanisms and hardware support for atomic operations to ensure proper synchronization in multi-core systems.


Inter Process Communication, Synchronization, Deadlocks
Dr. M. Brindha
Associate Professor
Department of CSE
NIT, Trichy-15
Inter Process Communication
• Advantages of Inter Process Communication (IPC)
  – Information sharing
  – Modularity/Convenience
• 3 ways
  – Shared memory
  – Message passing
  – Signals
Shared Memory
• One process creates an area in RAM which the other process can access
• Both processes can access shared memory like regular working memory
  – Reading/writing is like regular reading/writing
  – Fast
• Limitation: error prone; needs synchronization between processes
(Diagram: Process 1 and Process 2 in userspace, both mapping the same shared memory region)
Shared Memory in Linux
• int shmget(key, size, flags)
  – Create a shared memory segment; returns the ID of the segment: shmid
  – key: unique identifier of the shared memory segment
  – size: size of the shared memory (rounded up to PAGE_SIZE)
• void *shmat(shmid, addr, flags)
  – Attach the shmid shared memory segment to the address space of the calling process; returns a pointer to the attached region
  – addr: requested attach address (usually NULL, letting the kernel choose)
• int shmdt(addr)
  – Detach the shared memory segment attached at addr
Example: server.c and client.c (a minimal sketch of the idea follows)
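The original server.c and client.c are not reproduced in the slides; the following is a minimal sketch of the same pattern, assuming an agreed-upon key (1234) and a 4 KB segment (both values are illustrative):

/* writer.c -- creates the segment and writes a string into it */
#include <sys/shm.h>
#include <string.h>

int main(void) {
    int shmid = shmget(1234, 4096, IPC_CREAT | 0666);  /* create/locate the segment */
    char *mem = shmat(shmid, NULL, 0);                  /* attach to our address space */
    strcpy(mem, "hello from writer");                   /* ordinary memory write */
    shmdt(mem);                                         /* detach */
    return 0;
}

/* reader.c -- attaches to the same segment and reads the string */
#include <sys/shm.h>
#include <stdio.h>

int main(void) {
    int shmid = shmget(1234, 4096, 0666);               /* same key as the writer */
    char *mem = shmat(shmid, NULL, 0);
    printf("read: %s\n", mem);                          /* ordinary memory read */
    shmdt(mem);
    return 0;
}

Note that nothing here synchronizes the two processes; as the slide warns, a real program needs synchronization so the reader does not run before the writer has stored the string.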
Message Passing
• Shared memory created in the kernel
• System calls such as send and receive used for communication
  – Cooperating: each send must have a receive
• Advantage: explicit sharing, less error prone
• Limitation: slow; each call involves marshalling/demarshalling of information
(Diagram: Process 1 and Process 2 in userspace communicating through shared memory inside the kernel)
Pipes
– Always between parent and child
– Always unidirectional
– Accessed by two associated file descriptors:
  • fd[0] for reading from the pipe
  • fd[1] for writing to the pipe
Pipes for two-way communication
• Two pipes opened: pipe0 and pipe1
• Close the unnecessary pipe ends in each process
Example (child process sending a string to the parent); a sketch follows:
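The slide's example code is not reproduced; this is a minimal sketch of the idea, with an illustrative message string:

#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                          /* fd[0]: read end, fd[1]: write end */

    if (fork() == 0) {                 /* child: writes to the pipe */
        close(fd[0]);                  /* close the unused read end */
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    } else {                           /* parent: reads from the pipe */
        char buf[64];
        close(fd[1]);                  /* close the unused write end */
        read(fd[0], buf, sizeof(buf));
        printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
    }
    return 0;
}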
Signals
• Asynchronous unidirectional communication between processes
• Signals are a small integer
  – e.g. 9: kill, 11: segmentation fault
• Send a signal to a process
  – kill(pid, signum)
• Process handler for a signal
  – sighandler_t signal(signum, handler);
  – Default action if no handler defined
ref: http://www.comptechdoc.org/os/linux/programming/linux_pgsignals.html
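A minimal sketch of registering a handler with signal() and sending a signal with kill(); the use of SIGUSR1 and the handler body are illustrative:

#include <signal.h>
#include <unistd.h>
#include <stdio.h>

void handler(int signum) {
    printf("received signal %d\n", signum);   /* async-signal safety ignored for brevity */
}

int main(void) {
    signal(SIGUSR1, handler);     /* install the handler for SIGUSR1 */
    kill(getpid(), SIGUSR1);      /* send SIGUSR1 to ourselves; the handler runs */
    return 0;
}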
Synchronization
Motivating Scenario
Shared variable: int counter = 5;
Program 0 executes counter++ somewhere in its code; program 1 executes counter--.

• Single core
  – Program 0 and program 1 are executing at the same time but sharing a single core
  – CPU usage with respect to time alternates between the two programs
Motivating Scenario
Shared variable: int counter = 5;
Program 0 executes counter++; program 1 executes counter--.

• What is the value of counter?
  – expected to be 5
  – but could also be 4 or 6
Motivating Scenario
Shared variable: int counter = 5;
counter++ and counter-- each compile to several instructions, so the result depends on how they interleave:

No context switch in the middle (counter = 5):
  R1 ← counter
  R1 ← R1 + 1
  counter ← R1
  R2 ← counter
  R2 ← R2 - 1
  counter ← R2

Context switch after program 0 reads counter (counter = 6):
  R1 ← counter
  (context switch)
  R2 ← counter
  R2 ← R2 - 1
  counter ← R2
  (context switch)
  R1 ← R1 + 1
  counter ← R1

Context switch after program 1 reads counter (counter = 4):
  R2 ← counter
  (context switch)
  R1 ← counter
  R1 ← R1 + 1
  counter ← R1
  (context switch)
  R2 ← R2 - 1
  counter ← R2
Race Conditions
• Race conditions
  – A situation where several processes access and manipulate the same data (critical section)
  – The outcome depends on the order in which the accesses take place
  – Prevent race conditions by synchronization
    • Ensure only one process at a time manipulates the critical data

{
  *
  counter++        ← critical section
  *
}
No more than one process should execute in the critical section at a time
Race Conditions in Multicore
Shared variable: int counter = 5;
Program 0 executes counter++; program 1 executes counter--.

• Multi core
  – Program 0 and program 1 are executing at the same time on different cores
Critical Section
• Requirements
  – Mutual exclusion: no more than one process in the critical section at a given time
  – Progress: when no process is in the critical section, any process that requests entry into the critical section must be permitted without any delay
  – No starvation (bounded wait): there is an upper bound on the number of times other processes may enter the critical section while a process is waiting to enter it
Locks and Unlocks
Shared variables: int counter = 5; lock_t L;

Program 0:                    Program 1:
{                             {
  *                             *
  lock(L)                       lock(L)
  counter++                     counter--
  unlock(L)                     unlock(L)
  *                             *
}                             }

• lock(L): acquire lock L exclusively
  – Only the process holding L can access the critical section
• unlock(L): release exclusive access to lock L
  – Permitting other processes to access the critical section
When to have Locking?
• Single instructions by themselves are atomic
  – e.g. add %eax, %ebx
• Multiple instructions need to be explicitly made atomic
  – Each piece of code in the OS must be checked to see whether it needs to be atomic
How to Implement Locking: Using Interrupts

Process 1:                          Process 2:
while(1){                           while(1){
  disable interrupts()   ← lock       disable interrupts()
  critical section                    critical section
  enable interrupts()    ← unlock     enable interrupts()
  other code                          other code
}                                   }

• Simple
  – When interrupts are disabled, context switches won't happen
• Requires privileges
  – User processes generally cannot disable interrupts
• Not suited for multicore systems
Software Solution (Attempt 1)
Shared: int turn = 1;

Process 1:                          Process 2:
while(1){                           while(1){
  while(turn == 2);   // lock         while(turn == 1);   // lock
  critical section                    critical section
  turn = 2;           // unlock       turn = 1;           // unlock
  other code                          other code
}                                   }

• Achieves mutual exclusion
• Busy waiting – waste of power and time
• Needs to alternate execution in the critical section:
  process 1 → process 2 → process 1 → process 2
Software Solution (Attempt 2)
Shared: p1_inside = False, p2_inside = False

Process 1:                                Process 2:
while(1){                                 while(1){
  while(p2_inside == True);  // lock        while(p1_inside == True);
  p1_inside = True;                         p2_inside = True;
  critical section                          critical section
  p1_inside = False;         // unlock      p2_inside = False;
  other code                                other code
}                                         }

• Need not alternate execution in the critical section
• Does not guarantee mutual exclusion
Attempt 2: No mutual exclusion

Execution (time runs downward)        p1_inside   p2_inside
while(p2_inside == True);             False       False
(context switch)
while(p1_inside == True);             False       False
p2_inside = True;                     False       True
(context switch)
p1_inside = True;                     True        True

Both p1 and p2 can enter the critical section at the same time
Software Solution (Attempt 3)
Globally defined: p1_wants_to_enter, p2_wants_to_enter

Process 1:                                      Process 2:
while(1){                                       while(1){
  p1_wants_to_enter = True         // lock        p2_wants_to_enter = True
  while(p2_wants_to_enter == True);               while(p1_wants_to_enter == True);
  critical section                                critical section
  p1_wants_to_enter = False        // unlock      p2_wants_to_enter = False
  other code                                      other code
}                                               }

• Achieves mutual exclusion
• Does not achieve progress (could deadlock)
Attempt 3: No Progress

Execution (time runs downward)     p1_wants_to_enter   p2_wants_to_enter
p1_wants_to_enter = True           True                False
(context switch)
p2_wants_to_enter = True           True                True

There is a tie!!!
Both p1 and p2 will loop infinitely
Peterson's Solution
Globally defined: p1_wants_to_enter, p2_wants_to_enter, favored

Process 1:
while(1){
  p1_wants_to_enter = True            // lock
  favored = 2                         // if the second process wants to enter, favor it (be nice!!!)
  while (p2_wants_to_enter AND favored == 2);
  critical section
  p1_wants_to_enter = False           // unlock
  other code
}

Break the tie with a 'favored' process:
favored is used to break the tie when both p1 and p2 want to enter the critical section
(the process which sets favored last loses the tie)
Peterson's Solution
Globally defined: p1_wants_to_enter, p2_wants_to_enter, favored

Process 1:                                   Process 2:
while(1){                                    while(1){
  p1_wants_to_enter = True                     p2_wants_to_enter = True
  favored = 2                                  favored = 1
  while (p2_wants_to_enter AND                 while (p1_wants_to_enter AND
         favored == 2);                               favored == 1);
  critical section                             critical section
  p1_wants_to_enter = False                    p2_wants_to_enter = False
  other code                                   other code
}                                            }
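A minimal sketch of the same algorithm as runnable C, assuming two threads in one process rather than two processes; C11 sequentially consistent atomics stand in for the pseudocode's assumption that loads and stores are not reordered:

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool wants_to_enter[2];
atomic_int  favored;
int counter = 0;                      /* shared data protected by the lock */

void peterson_lock(int self) {
    int other = 1 - self;
    atomic_store(&wants_to_enter[self], true);
    atomic_store(&favored, other);                    /* be nice: favor the other thread */
    while (atomic_load(&wants_to_enter[other]) &&
           atomic_load(&favored) == other)
        ;                                             /* busy wait */
}

void peterson_unlock(int self) {
    atomic_store(&wants_to_enter[self], false);
}

void *worker(void *arg) {
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        peterson_lock(self);
        counter++;                                    /* critical section */
        peterson_unlock(self);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %d\n", counter);                /* expected: 200000 */
    return 0;
}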
Bakery Algorithm
• Synchronization between N > 2 processes
• By Leslie Lamport
• Analogy: take a token at the bakery counter; eat when your number is displayed – wait your turn!!
http://research.microsoft.com/en-us/um/people/lamport/pubs/bakery.pdf
Simplified Bakery Algorithm
• Processes numbered 0 to N-1
• num is an array of N integers (initially 0)
  – Each entry corresponds to a process

lock(i){
  num[i] = MAX(num[0], num[1], ..., num[N-1]) + 1    // this is the doorway: it has to be atomic
                                                      // to ensure two processes do not get the same token
  for(p = 0; p < N; ++p){
    while (num[p] != 0 and num[p] < num[i]);
  }
}

critical section

unlock(i){
  num[i] = 0;
}
Original Bakery Algorithm
• Without atomic operation assumptions
• Introduce an array of N Booleans: choosing, initially all values False

lock(i){
  choosing[i] = True
  num[i] = MAX(num[0], num[1], ..., num[N-1]) + 1     // doorway
  choosing[i] = False
  for(p = 0; p < N; ++p){
    while (choosing[p]);                              // choosing ensures that process p is not at the doorway
    while (num[p] != 0 and (num[p], p) < (num[i], i));
  }
}

critical section

unlock(i){
  num[i] = 0;
}

(a, b) < (c, d) is equivalent to: (a < c) or ((a == c) and (b < d))
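A sketch of the same lock as C, again using sequentially consistent C11 atomics in place of the pseudocode's ordering assumptions; N and the ticket scan are illustrative:

#include <stdatomic.h>
#include <stdbool.h>

#define N 4                                   /* number of threads (assumed) */

atomic_bool choosing[N];
atomic_int  num[N];

void bakery_lock(int i) {
    atomic_store(&choosing[i], true);         /* doorway: pick a ticket larger than any seen */
    int max = 0;
    for (int p = 0; p < N; p++) {
        int n = atomic_load(&num[p]);
        if (n > max) max = n;
    }
    atomic_store(&num[i], max + 1);
    atomic_store(&choosing[i], false);

    for (int p = 0; p < N; p++) {
        while (atomic_load(&choosing[p]))     /* wait while p is at the doorway */
            ;
        while (atomic_load(&num[p]) != 0 &&   /* wait while (num[p], p) < (num[i], i) */
               (atomic_load(&num[p]) < atomic_load(&num[i]) ||
                (atomic_load(&num[p]) == atomic_load(&num[i]) && p < i)))
            ;
    }
}

void bakery_unlock(int i) {
    atomic_store(&num[i], 0);
}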
Analyze this
• Does this scheme provide mutual exclusion?

Process 1:                        Process 2:
while(1){                         while(1){
  while(lock != 0);                 while(lock != 0);
  lock = 1;    // lock               lock = 1;    // lock
  critical section                   critical section
  lock = 0;    // unlock             lock = 0;    // unlock
  other code                         other code
}                                 }

No. Starting with lock = 0:
  P1: while(lock != 0);    (context switch)
  P2: while(lock != 0);
  P2: lock = 1;
  P1: lock = 1;
  ... Both processes are in the critical section
If only...
• We could make this operation atomic

Process 1:
while(1){
  while(lock != 0);     ← make these two steps atomic
  lock = 1;    // lock
  critical section
  lock = 0;    // unlock
  other code
}

Hardware to the rescue...
Hardware Support (Test & Set Instruction)
• Write to a memory location, return its old value

// equivalent software representation
// (the entire function is executed atomically)
int test_and_set(int *L){
  int prev = *L;
  *L = 1;
  return prev;
}

// usage for locking
while(1){
  while(test_and_set(&lock) == 1);
  critical section
  lock = 0;    // unlock
  other code
}

Why does this work? If two CPUs execute test_and_set at the same time, the hardware ensures that one test_and_set does both its steps before the other one starts. So the first invocation of test_and_set will read a 0, set lock to 1, and return. The second test_and_set invocation will then see lock as 1, and will loop continuously until lock becomes 0.
Intel Hardware Support (xchg instruction)
• xchg: Intel exchange instruction
• Typical usage: xchg reg, mem
• Note: %eax is returned

int xchg(addr, value){
  %eax = value
  xchg %eax, (addr)
}

void acquire(int *locked){
  while(1){
    if(xchg(locked, 1) == 0)
      break;
  }
}

void release(int *locked){
  *locked = 0;
}
High Level Constructs
• Spinlock
• Mutex
• Semaphore
Spinlocks

Usage:
Process 1:                 Process 2:
acquire(&locked)           acquire(&locked)
critical section           critical section
release(&locked)           release(&locked)

int xchg(addr, value){
  %eax = value
  xchg %eax, (addr)
}

void acquire(int *locked){
  while(1){
    if(xchg(locked, 1) == 0)
      break;
  }
}

void release(int *locked){
  *locked = 0;
}

• One process will acquire the lock
• The other will wait in a loop, repeatedly checking if the lock is available
• The lock becomes available when the former process releases it

See spinlock.c and spinlock.h in xv6 [15]
Issues with Spinlocks
xchg %eax, X
• No compiler optimizations should be allowed
  – Should not make X a register variable
  – Write the loop in assembly or use volatile
• Should not reorder memory loads and stores
  – Use serializing instructions (which force instructions not to be reordered)
  – Luckily, xchg already implements serialization
More issues with Spinlocks
xchg %eax, X
(Diagram: CPU0 and CPU1, each with an L1 cache, connected via a cache coherence protocol to memory holding X; #LOCK asserted on the bus)

• No caching of X possible. All xchg operations are bus transactions.
  – The CPU asserts LOCK to inform that there is a 'locked' memory access
• The acquire function in the spinlock invokes xchg in a loop... each operation is a bus transaction... huge performance hits
A better acquire

int xchg(addr, value){
  %eax = value
  xchg %eax, (addr)
}

Original:
void acquire(int *locked){
  reg = 1;
  while(1)
    if(xchg(locked, reg) == 0)
      break;
}
Loop with xchg: every iteration is a bus transaction – huge overheads.

Better way:
void acquire(int *locked){
  reg = 1;
  while (xchg(locked, reg) == 1)
    while (*locked == 1);
}
The inner loop allows caching of locked: it spins on the cache instead of memory.
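A compilable sketch of the "better acquire" (test-and-test-and-set), using C11 atomic_exchange in place of the hand-written xchg wrapper:

#include <stdatomic.h>

void acquire(atomic_int *locked) {
    /* the atomic exchange is the bus transaction;
       the inner loop spins on the locally cached value */
    while (atomic_exchange(locked, 1) == 1)
        while (atomic_load(locked) == 1)
            ;
}

void release(atomic_int *locked) {
    atomic_store(locked, 0);
}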
Spinlocks (when should they be used?)
• Characteristic: busy waiting
  – Useful for short critical sections, where not much CPU time is wasted waiting
    • e.g. to increment a counter, access an array element, etc.
  – Not useful when the period of wait is unpredictable or will take a long time
    • e.g. not good to read a page from disk
    • Use a mutex instead (...mutex)
Spinlock in pthreads
• lock: pthread_spin_lock
• unlock: pthread_spin_unlock
• create spinlock: pthread_spin_init
• destroy spinlock: pthread_spin_destroy
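A minimal usage sketch of these calls (the thread body and iteration count are illustrative):

#include <pthread.h>

pthread_spinlock_t s;
long counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_spin_lock(&s);        /* lock */
        counter++;                    /* critical section */
        pthread_spin_unlock(&s);      /* unlock */
    }
    return NULL;
}

int main(void) {
    pthread_spin_init(&s, PTHREAD_PROCESS_PRIVATE);   /* create spinlock */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_spin_destroy(&s);                         /* destroy spinlock */
    return 0;
}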
Mutexes
• Can we do better than busy waiting?
  – If the critical section is locked, then yield the CPU
    • Go to a SLEEP state
  – While unlocking, wake up a sleeping process

int xchg(addr, value){
  %eax = value
  xchg %eax, (addr)
}

void lock(int *locked){
  while(1){
    if(xchg(locked, 1) == 0)
      break;
    else
      sleep();
  }
}

void unlock(int *locked){
  *locked = 0;
  wakeup();
}

Ref: wakeup(2864), sleep(2803)
Thundering Herd Problem
• A large number of processes wake up (almost simultaneously) when the event occurs
  – All waiting processes wake up
  – Leading to several context switches
  – All processes go back to sleep except for one, which gets the critical section
• Large number of context switches
• Could lead to starvation

(lock/unlock code as on the previous slide)
Thundering Herd Problem: The Solution
• When entering the critical section, push the process into a queue before blocking
• When exiting the critical section, wake up only the first process in the queue

void lock(int *locked){
  while(1){
    if(xchg(locked, 1) == 0)
      break;
    else{
      // add this process to queue
      sleep();
    }
  }
}

void unlock(int *locked){
  *locked = 0;
  // remove process P from queue
  wakeup(P);
}
pthread Mutex
• pthread_mutex_lock
• pthread_mutex_unlock
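A minimal usage sketch (the initializer, thread body, and counts are illustrative, not from the slides):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);       /* blocks (sleeps) if already held */
        counter++;                    /* critical section */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}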
Locks and Priorities
• What happens when a high priority task requests a lock while a low priority task is in the critical section?
  – Priority inversion
  – Possible solution
    • Priority inheritance

Interesting read: Mars Pathfinder
http://research.microsoft.com/en-us/um/people/mbj/mars_pathfinder/mars_pathfinder.html
Producer – Consumer Problems
• Also known as the Bounded Buffer Problem
• The producer produces and stores items in a buffer; the consumer consumes items from the buffer
• Trouble when
  – The producer produces, but the buffer is full
  – The consumer consumes, but the buffer is empty

(Producer → Buffer of size N → Consumer)
Producer-Consumer Code
Buffer of size N
int count = 0;
Mutex mutex, empty, full;

 1  void producer(){                          1  void consumer(){
 2    while(TRUE){                            2    while(TRUE){
 3      item = produce_item();                3      if (count == 0) sleep(full);
 4      if (count == N) sleep(empty);         4      lock(mutex);
 5      lock(mutex);                          5      item = remove_item(); // from buffer
 6      insert_item(item); // into buffer     6      count--;
 7      count++;                              7      unlock(mutex);
 8      unlock(mutex);                        8      if (count == N-1) wakeup(empty);
 9      if (count == 1) wakeup(full);         9      consume_item(item);
10    }                                      10    }
11  }                                        11  }
Lost Wakeups
• Consider the following interleaving of the numbered lines above
• Assume the buffer is initially empty

Consumer: 3  read count value                // count is 0
(context switch)
Producer: 3  item = produce_item();
          5  lock(mutex);
          6  insert_item(item);              // into buffer
          7  count++;                        // count = 1
          8  unlock(mutex);
          9  test (count == 1)               // yes
          9  wakeup(full);
(context switch)
Consumer: 3  test (count == 0)               // yes – the consumer still uses the old value of count (i.e. 0)
          3  sleep(full);

Note: the wakeup is lost. The consumer waits even though the buffer is not empty.
Eventually the producer and the consumer will both wait indefinitely.
Semaphores
• Proposed by Dijkstra in 1965
• A general facility provided by the operating system
• The functions down and up must be atomic
• down is also called P (Proberen, Dutch for try)
• up is also called V (Verhogen, Dutch for make higher)
• Can have different variants
  – Such as blocking, non-blocking
  – A blocking semaphore is similar to a mutex; a non-blocking semaphore is similar to a spinlock

void down(int *S){
  while(*S <= 0);
  (*S)--;
}

void up(int *S){
  (*S)++;
}

If S is initially set to 1, the semaphore behaves like a lock (binary semaphore).
Producer-Consumer with Semaphores
Buffer of size N
int count; full = 0, empty = N

void producer(){                            void consumer(){
  while(TRUE){                                while(TRUE){
    item = produce_item();                      down(full);
    down(empty);                                wait(mutex);
    wait(mutex);                                item = remove_item();   // from buffer
    insert_item(item);   // into buffer         signal(mutex);
    signal(mutex);                              up(empty);
    up(full);                                   consume_item(item);
  }                                           }
}                                           }

(A runnable sketch with POSIX semaphores follows.)
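A runnable sketch of the same scheme using POSIX semaphores and a pthread mutex; the buffer size, item type, and iteration counts are illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                                /* buffer size (assumed) */

int buffer[N], in = 0, out = 0;
sem_t empty, full;                         /* counts of free/used slots */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);                  /* down(empty) */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;                 /* insert_item */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                   /* up(full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);                   /* down(full) */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];            /* remove_item */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                  /* up(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}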
Monitor With Condition Variables
(Diagram: threads T1, T2, T3 queued on a locked monitor)
Dining Philosophers Problem
(Figure: five philosophers A–E seated around a table with five forks, numbered 1–5)
• Philosophers either think or eat
• To eat, a philosopher needs to hold both forks (the one on his left and the one on his right)
• If the philosopher is not eating, he is thinking
• Problem statement: develop an algorithm where no philosopher starves
First Try
#define N 5

void philosopher(int i){
  while(TRUE){
    think();                 // for some_time
    take_fork(i);
    take_fork((i + 1) % N);
    eat();
    put_fork(i);
    put_fork((i + 1) % N);
  }
}

What happens if only philosophers A and C are always given priority?
B, D, and E starve... so the scheme needs to be fair.
First Try
(Same code as above.)

What happens if all philosophers decide to pick up their right forks at the same time?
Possible starvation due to deadlock.
Deadlocks
• A situation where programs continue to run indefinitely without making any progress
• Each program is waiting for an event that another process can cause
Second try
• Take fork i, then check if fork (i+1)%N is available
• Imagine:
  – All philosophers start at the same time
  – Run simultaneously
  – And think for the same time
• This could lead to philosophers repeatedly taking a fork and putting it down continuously without ever eating (a livelock)
• A better alternative
  – Philosophers wait a random time before take_fork(i)
  – Less likelihood of the problem
  – Used in schemes such as Ethernet

#define N 5

void philosopher(int i){
  while(TRUE){
    think();
    take_fork(i);
    if (available((i+1) % N)){
      take_fork((i + 1) % N);
      eat();
      put_fork(i);
      put_fork((i + 1) % N);
    } else {
      put_fork(i);
    }
  }
}
Solution using Mutex
• Protect critical sections with a mutex
• Prevents deadlock
• But has performance issues
  – Only one philosopher can eat at a time

#define N 5

void philosopher(int i){
  while(TRUE){
    think();                 // for some_time
    wait(mutex);
    take_fork(i);
    take_fork((i + 1) % N);
    eat();
    put_fork(i);
    put_fork((i + 1) % N);
    signal(mutex);
  }
}
Solution to Dining Philosophers
Uses N semaphores (s[0], s[1], ..., s[N-1]), all initialized to 0, and a mutex.
A philosopher has 3 states: HUNGRY, EATING, THINKING.
A philosopher can only move to the EATING state if neither neighbor is eating.

void philosopher(int i){
  while(TRUE){
    think();
    take_forks(i);
    eat();
    put_forks(i);
  }
}

void take_forks(int i){
  lock(mutex);
  state[i] = HUNGRY;
  test(i);
  unlock(mutex);
  down(s[i]);
}

void put_forks(int i){
  lock(mutex);
  state[i] = THINKING;
  test(LEFT);
  test(RIGHT);
  unlock(mutex);
}

void test(int i){
  if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING){
    state[i] = EATING;
    up(s[i]);
  }
}
Deadlocks
(Resource allocation graph: process A holds resource R1; process B holds resource R2)

Consider this situation:
Deadlocks
Resource Allocation Graph
(Process A holds resource R1 and waits for resource R2; process B holds resource R2 and waits for resource R1)

A deadlock arises.
Deadlock: a set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.
Conditions for Resource Deadlocks
1. Mutual exclusion
   – Each resource is either available or currently assigned to exactly one process
2. Hold and wait
   – A process holding a resource can request another resource
3. No preemption
   – Resources previously granted cannot be forcibly taken away from a process
4. Circular wait
   – There must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain

All four of these conditions must be present for a resource deadlock to occur!!
Deadlocks: A Chanced Event
• The ordering of resource requests and allocations is probabilistic, so deadlock occurrence is also probabilistic
(Figure: two possible interleavings of requests by two processes – in one, a deadlock occurs; in the other, no deadlock occurs because B can be granted S after step q)
Should Deadlocks be handled?
• Preventing / detecting deadlocks could be tedious
• Can we live without detecting / preventing deadlocks?
  – What is the probability of occurrence?
  – What are the consequences of a deadlock? (How critical is a deadlock?)
Handling Deadlocks
• Detection and Recovery
• Avoidance
• Prevention
Deadlock detection
• How can an OS detect when there is a deadlock?
• The OS needs to keep track of
  – Current resource allocation
    • Which process has which resource
  – Current request allocation
    • Which process is waiting for which resource
• Use this information to detect deadlocks
Deadlock Detection
• Deadlock detection with one resource of each type
• Find cycles in the resource graph
Deadlock Detection
• Deadlock detection with multiple resources of each type
• The OS maintains, for processes P1, P2, P3:
  – An Existing Resource Vector and a Resources Available vector
  – A Current Allocation Matrix (who has what!!) and a Request Matrix (who is waiting for what!!)
• Process Pi holds Ci resources and requests Ri resources, where i = 1 to 3
• Goal: check if there is any sequence of allocations by which all current requests can be met. If so, there is no deadlock.
Deadlock Detection
• Deadlock detection with multiple resources of each type
• Comparing the Request Matrix with the Resources Available vector (example values not reproduced):
  – P1 cannot be satisfied
  – P2 cannot be satisfied
  – P3 can be satisfied
Deadlock Detection
• Deadlock detection with multiple resources of each type
• P3 runs, and its allocation of (2, 2, 2, 0) is released back to the Resources Available vector
Deadlock Detection
• Deadlock detection with multiple resources of each type
• In this example (matrices not fully reproduced):
  – P1 cannot be satisfied
  – P2 cannot be satisfied
  – P3 cannot be satisfied
• Process Pi holds Ci resources and requests Ri resources, where i = 1 to 3
• Deadlock detected, as none of the requests can be satisfied
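A sketch of this detection procedure as code; the number of processes, the number of resource classes, and the matrix contents are placeholders to be filled with the system's actual state:

#include <stdio.h>
#include <stdbool.h>

#define P 3             /* processes (assumed)        */
#define M 4             /* resource classes (assumed) */

int Alloc[P][M];        /* current allocation matrix: who has what  */
int Req[P][M];          /* request matrix: who is waiting for what  */
int Avail[M];           /* resources available vector               */

int main(void) {
    bool done[P] = { false };
    bool progress = true;

    /* repeatedly look for a process whose request can be satisfied now */
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (done[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (Req[i][j] > Avail[j]) { can_run = false; break; }
            if (can_run) {
                /* assume Pi runs to completion and releases its resources */
                for (int j = 0; j < M; j++) Avail[j] += Alloc[i][j];
                done[i] = true;
                progress = true;
            }
        }
    }

    /* any process that could never be satisfied is deadlocked */
    for (int i = 0; i < P; i++)
        if (!done[i]) printf("P%d is deadlocked\n", i + 1);
    return 0;
}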
Deadlock Recovery
What should the OS do when it detects a deadlock?
• Raise an alarm
  – Tell users and the administrator
• Preemption
  – Take away a resource temporarily (frequently not possible)
• Rollback
  – Checkpoint states and then roll back
• Kill low priority process
  – Keep killing processes until the deadlock is broken
  – (or reset the entire system)
Deadlock Avoidance
• The system decides in advance if allocating a resource to a process will lead to a deadlock
(Figure: joint progress of process 1 and process 2 instructions; both processes request resources R1 and R2; the region where both would hold R1 and R2 is an unsafe state that may cause a deadlock)
• Note: an unsafe state is not a deadlocked state
Deadlock Avoidance
Is there an algorithm that can always avoid deadlocks by conservatively making the right choice?
• It must ensure the system never reaches an unsafe state
• Safe state: a state is said to be safe if there is some scheduling order in which every process can run to completion, even if all of them suddenly request their maximum number of resources immediately
• An unsafe state does not have to lead to a deadlock; it could lead to a deadlock
Example with a Banker
• Consider a banker with three clients (A, B, C)
  – Each client has a certain credit limit (totaling 20 units)
  – The banker knows that the maximum credits will not be used at once, so he keeps only 10 units
  – Clients declare maximum credits in advance. The banker can allocate credits provided no unsafe state is reached.

        Has   Max
  A      3     9
  B      2     4
  C      2     7

  Total: 10 units    Free: 3 units
Safe State
Starting from: A has 3 (max 9), B has 2 (max 4), C has 2 (max 7); free: 3 units.
• Allocate 2 units to B → B has 4 (its max); free: 1 unit
• B completes and returns its 4 units → free: 5 units
• Allocate 5 units to C → C has 7 (its max); free: 0 units
• C completes and returns its 7 units → free: 7 units
• Allocate 6 units to A → A has 9 (its max) and completes

This is a safe state because there is some scheduling order in which every process executes.
Unsafe State
Starting from: A has 4 (max 9), B has 2 (max 4), C has 2 (max 7); free: 2 units.
• Allocate 2 units to B → B has 4 (its max); free: 0 units
• B completes and returns its 4 units → free: 4 units
• Now neither A (needs 5 more) nor C (needs 5 more) can be granted its maximum

This is an unsafe state because there exists NO scheduling order in which every process executes.
Banker's Algorithm (with a single resource)
When a request occurs:
– if (is_system_in_a_safe_state)
  • Grant the request
– else
  • Postpone until later
(Figure: deadlock states are a subset of unsafe states; safe states lie outside both)

Please read Banker's Algorithm with multiple resources from Modern Operating Systems, Tanenbaum
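A sketch of the safety check that the single-resource banker's algorithm relies on; the client names and credit figures mirror the earlier example, and the code layout is illustrative:

#include <stdio.h>
#include <stdbool.h>

#define N 3

int has[N] = { 3, 2, 2 };   /* current credit of A, B, C (from the example) */
int max[N] = { 9, 4, 7 };   /* declared maximum credit                      */
int free_units = 3;         /* units the banker still holds                 */

bool is_safe(void) {
    int avail = free_units;
    bool done[N] = { false };
    for (int finished = 0; finished < N; ) {
        int i;
        /* find a client whose remaining need fits in what is available */
        for (i = 0; i < N; i++)
            if (!done[i] && max[i] - has[i] <= avail) break;
        if (i == N) return false;       /* nobody can finish: unsafe */
        avail += has[i];                /* client finishes and repays its credit */
        done[i] = true;
        finished++;
    }
    return true;                        /* some order lets every client finish */
}

int main(void) {
    printf("state is %s\n", is_safe() ? "safe" : "unsafe");
    return 0;
}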
Deadlock Prevention
• Deadlock avoidance is not practical: it needs to know the maximum requests of a process
• Deadlock prevention
  – Prevent at least one of the 4 conditions
    1. Mutual exclusion
    2. Hold and wait
    3. No preemption
    4. Circular wait
Prevention
1. Preventing Mutual Exclusion
   – Not feasible in practice
   – But the OS can ensure that resources are optimally allocated
2. Hold and wait
   – One way to achieve this is to require all processes to request resources before starting execution
     • May not lead to optimal usage
     • May not be feasible to know resource requirements
3. No preemption
   – Pre-empt the resources, such as by virtualization of resources (e.g. printer spools)
4. Circular wait
   – One way: a process holding a resource cannot hold it and request another one
   – Ordering requests in a sequential / hierarchical order
Hierarchical Ordering of Resources
• Group resources into levels (i.e. prioritize resources numerically)
• A process may only request resources at higher levels than any resource it currently holds
• Resources may be released in any order
• e.g.
  – Semaphores S1, S2, S3 (with priorities in increasing order)
  – down(S1); down(S2); down(S3);  → allowed
  – down(S1); down(S3); down(S2);  → not allowed
Thank You!!!
