Unit - IV New
Operating System Types
• Classified by timing constraints: hard time constraints as opposed to soft time constraints
– Real-Time Operating System
– Non-Real-Time Operating System
What is an Interrupt?
• A break in program execution
– An interrupt is a hardware-generated subroutine call
– usually in response to some external event
– can occur at any point in the code execution
– CPU status at the time of the interrupt is saved
Example for Interrupt
• Usage: enabling and disabling the x86 non-maskable interrupt (NMI) by toggling bit 7 of CMOS port 0x70

void NMI_enable(void)
{
    outb(0x70, inb(0x70) & 0x7F);  /* clear bit 7: NMI enabled */
}

void NMI_disable(void)
{
    outb(0x70, inb(0x70) | 0x80);  /* set bit 7: NMI disabled */
}
Software Interrupt
• A software interrupt, also called an exception or trap, is raised by the executing program itself rather than by external hardware
Direct Call to an ISR
RTOS Action on an Interrupt
• The ISR code can send a message into a mailbox or message queue, but the task waiting on that mailbox or message queue does not start before the return from the ISR
• When the ISR finishes, it sends an Exit message to the OS
Multiprocessing
• Two or more CPUs share full access to a common RAM
• Basis of inter-processor communication
• Key issues include
– process synchronization
– resource management
– scheduling
Multiprocessor OS Organizations
• Each CPU has its own operating system
• Master-slave multiprocessors
• Symmetric multiprocessors

Multiprocessor Synchronization
Multiprocessor Scheduling
• Timesharing
• Space Sharing
• Gang Scheduling
Timesharing
• A single scheduling data structure is used by all CPUs
• Processes are scheduled just as they would be in a uniprocessor system
• It also provides automatic load balancing
• Two disadvantages of this approach are
– potential contention for the scheduling data structure as the number of CPUs grows
– the usual overhead in doing a context switch when a process blocks for I/O
Space Sharing
• Scheduling multiple threads at the same time across multiple CPUs
• Shortest job first, a well-known algorithm for batch scheduling, can be used to pick which set of threads to run next
Gang Scheduling
Gang scheduling has three parts:
1. Groups of related threads are scheduled as a unit, a gang.
2. All members of a gang run simultaneously, on different timeshared CPUs.
3. All gang members start and end their time slices together.
What is Pre-emptive and Non-Pre-emptive Scheduling?
• Pre-emptive: once a process has been given the CPU, the CPU can be taken away from it.
– short jobs may still wait behind longer jobs, but the overall treatment of all processes is fair
– response times are more predictable
• Non-pre-emptive: once a process has been given the CPU, the CPU cannot be taken away from that process.
– the scheduler makes a decision only in the following two situations:
• when a process switches from the running state to the waiting state
• when a process terminates
– once a task starts executing, it completes its full execution
– offers lower schedulability
– less overhead due to less context switching
Rate-Monotonic Scheduling
• A static-priority scheduling class
• Priorities are assigned on the basis of the cycle duration (period) of the job: the shorter the cycle duration, the higher the job's priority
• Assume a set of periodic tasks (Ci, Ti), where Ci is the execution time and Ti the period of task i
Fixed-Priority Pre-emptive Scheduling
• The scheduler ensures that, at any given time, the processor executes the highest-priority task of all those tasks that are currently ready to execute.
Interprocess Communication (IPC) Mechanisms
• Mailboxes
• Message queues
• Semaphores
– binary
– counting
• Pipes
• Signals
• Event registers
Inter-process communication
• Tasks must be able to communicate with one another to coordinate their activities and to share data.
• IPC means that a process (scheduler, task or ISR)
generates some information by setting or resetting a
flag or value, or generates an output so that it lets
another process take note of it or use it.
• IPCs in multiprocessor system are used to generate
information about certain sets of computations
finishing on one processor and let the other
processors take note of it.
Interprocess communication styles
Shared memory:
– processes have some memory in common
– they must cooperate to avoid destroying/missing messages
Message passing:
– processes send messages along a communication channel; there is no common address space
– the communication channel may be physical or virtual

Two ways for IPC
1. global data
2. sending messages
Global data: communication by setting global variables.
Usage of global variables creates two problems:
(i) the shared-data problem
(ii) it cannot prevent a message from being accessed by other tasks
Shared memory
• CPUs could be separate processors and/or cores within a processor
• Multiple processes on the same CPU may also share memory
[Figure: CPU 1 and CPU 2 connected to a common memory]
Race condition in shared memory
Assume a "flag" used to synchronize access to shared memory:
Flag = 1 when the shared item is being used
Flag = 0 when the shared item is not being used
To access the item, a CPU must see flag = 0 and write flag = 1
Problem when two CPUs try to access the item at the same time:
CPU 1 reads the flag and sees 0.
CPU 2 reads the flag and sees 0.
Both then write flag = 1 and use the item at the same time, so the flag fails to provide mutual exclusion.

[Figure: Task 1 posts a message into a mailbox; Task 2 reads the message]
Mailbox types at the different RTOSs
Mailbox types permitted by an RTOS:
• one message per mailbox
• multiple messages with unlimited queuing
• multiple messages with a priority parameter for each message
Mailbox services provided by the kernel
• Initialize the contents of the mailbox. The mailbox
may or may not contain a message.
• Deposit a message into the mailbox (POST)
• Wait for a message to be deposited into the mailbox
(PEND)
• Get a message from a mailbox if existing (ACCEPT)
• A provision for one message-write per box and the
next message to be accepted only if the previous one
is read.
• A mailbox with a provision for only one message per
box becomes full with one message.
A mobile phone LCD display example for
mailbox
• When the time message from a clock process arrives, the time is displayed at the right corner of the last line.
• When the message from another task is to
display a phone number, it is displayed at the
middle line.
Mailbox management functions
• Create a mailbox -- RTOS_BoxCreate
• Delete a mailbox -- RTOS_BoxDelete
• Query a mailbox – RTOS_BoxQuery
• Wait for a mailbox – RTOS_BoxWait
• Post a message into the mail box --
RTOS_BoxWrite
• Read a message from the mailbox –
RTOS_BoxAccept
Message passing
Message passing on a network:
[Figure: CPU 1 and CPU 2 exchanging messages over a network]

Message passing via mailboxes:
A mailbox is a message buffer between two processes (FIFO). The sending process (Process A) fills the mailbox; the receiving process (Process B) empties it.
Message queues
A message queue is an array of mailboxes.
[Figure: Task 1 sends through a message queue, guarded by a semaphore, to Task 2]
Two-way communication: each of the two tasks both posts to and reads from a message queue.
Broadcasting: one task posts a message to the queue that several waiting tasks receive.
Kernel message queue services
• Initialize a queue. It is always assumed to be
empty after initialization.
• Deposit a message into the queue (POST)
• Wait for a message to be deposited into the
queue (PEND)
• Get a message from a queue (ACCEPT)
Message queue management functions
[Figure: two tasks exchanging messages through a queue]

Pipe
One task writes data to the pipe; another task reads data from the pipe.
[Figure: writer task, pipe, reader task]
#define BUFFER_SIZE 10

typedef struct {
    DATA data;
} item;

item buffer[BUFFER_SIZE];
int in = 0;       // Location of next input to buffer
int out = 0;      // Location of next removal from buffer
int counter = 0;  // Number of buffers currently full
PROCESS SYNCHRONIZATION: The Producer-Consumer Problem

A producer process "produces" information that is "consumed" by a consumer process.

Shared data:

#define BUFFER_SIZE 10
typedef struct {
    DATA data;
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;

PRODUCER:

item nextProduced;
while (TRUE) {
    while (counter == BUFFER_SIZE);   /* busy wait while buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

CONSUMER:

item nextConsumed;
while (TRUE) {
    while (counter == 0);             /* busy wait while buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
PROCESS SYNCHRONIZATION: The Producer-Consumer Problem

Note that counter++; is NOT what it seems! At a micro level it compiles to several instructions (register = counter; register = register + 1; counter = register), so the following scenario could occur using this code:

producer: register1 = counter        (register1 = 5)
producer: register1 = register1 + 1  (register1 = 6)
consumer: register2 = counter        (register2 = 5)
consumer: register2 = register2 - 1  (register2 = 4)
producer: counter = register1        (counter = 6)
consumer: counter = register2        (counter = 4)

Both updates ran exactly once, yet counter ends at 4 instead of the correct 5.
PROCESS SYNCHRONIZATION: Critical Sections

Entry Section: Code in which a process requests permission to enter its critical section.
Critical Section: Code in which only one process can execute at any one time.
Exit Section: The end of the critical section, releasing or allowing others in.

The critical section must ENFORCE ALL THREE of the following rules:

Mutual Exclusion: No more than one process can execute in its critical section at one time.
Progress: If no process is executing in its critical section, a process that wants to enter cannot be postponed indefinitely.
Bounded Wait: All requesters must eventually be let into the critical section.
PROCESS SYNCHRONIZATION: Two-Process Software Solutions

Here's an example of a simple piece of code containing the components required in a critical section:

do {
    while ( turn != i );    /* Entry section */
    /* critical section */
    turn = j;               /* Exit section */
    /* remainder section */
} while (TRUE);
PROCESS SYNCHRONIZATION: Two-Process Software Solutions

Here we try a succession of increasingly complicated solutions to the problem of creating valid entry sections.

NOTE: In all examples, i is the current process, j the "other" process. In these examples, envision the same code running on two processors at the same time.

Algorithm 1 - TOGGLED ACCESS:

do {
    while ( turn != i );
    /* critical section */
    turn = j;
    /* remainder section */
} while (TRUE);

Are the three Critical Section Requirements met?
PROCESS SYNCHRONIZATION: Two-Process Software Solutions

Algorithm 2 - FLAG FOR EACH PROCESS GIVES STATE:

Each process maintains a flag indicating that it wants to get into the critical section. It checks the flag of the other process and doesn't enter the critical section if that other process wants to get in.

Shared variables:
    boolean flag[2];  /* initially flag[0] = flag[1] = false */
    /* flag[i] = true means Pi is ready to enter its critical section */

do {
    flag[i] = true;
    while (flag[j]) ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);

Are the three Critical Section Requirements met?
PROCESS SYNCHRONIZATION: Two-Process Software Solutions

Algorithm 3 - FLAG TO REQUEST ENTRY:

• Each process sets a flag to request entry. Then each process toggles a bit to allow the other in first.
• This code is executed for each process i.

Shared variables:
    boolean flag[2];  /* initially flag[0] = flag[1] = false */
    /* flag[i] = true means Pi is ready to enter its critical section */

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j) ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);

This is Peterson's Solution. Are the three Critical Section Requirements met?
PROCESS Critical Sections
SYNCHRONIZATION
• Atomic load, store, test instruction. For instance, if a store and test
occur simultaneously, the test gets EITHER the old or the new, but not
some combination.
PROCESS SYNCHRONIZATION: Hardware Solutions

Disabling interrupts: works for the uniprocessor case only. WHY?

Atomic test-and-set: returns the parameter's old value and sets the parameter to true atomically.

Care is needed if these approaches are to satisfy a bounded-wait condition: use round robin, which requires code built around the lock instructions.
PROCESS SYNCHRONIZATION: Hardware Solutions

Using hardware test_and_set with a bounded wait:

boolean waiting[N];
int j;        /* Takes on values from 0 to N - 1 */
boolean key;

do {
    waiting[i] = TRUE;
    key = TRUE;
    while ( waiting[i] && key )
        key = test_and_set( lock );    /* Spin lock */
    waiting[i] = FALSE;

    /****** CRITICAL SECTION ********/

    j = ( i + 1 ) % N;
    while ( ( j != i ) && ( ! waiting[j] ) )
        j = ( j + 1 ) % N;
    if ( j == i )
        lock = FALSE;                  /* no one waiting: free the lock */
    else
        waiting[j] = FALSE;            /* hand entry to the next waiter */

    /******* REMAINDER SECTION *******/
} while (TRUE);
PROCESS SYNCHRONIZATION: Current Hardware Dilemmas

Hardware features that complicate synchronization:
• caches
• shared memory (for storage of lock variables)
• write-through caches
• write pipes

The last software solution we did (the one we thought was correct) may not work on a cached multiprocessor. Why? (Hint: is the write by one processor visible immediately to all other processors?)

What changes must be made to the hardware for this program to work?

Does the sequence below work on a cached multiprocessor?
[Figure: two processors a and b, each holding cached copies A0 and B0 of shared data]
PROCESS SYNCHRONIZATION: Current Hardware Dilemmas

We need to discuss:

Write ordering: the first write by a processor will be visible before the second write is visible. This requires a write-through cache.

The software solutions discussed earlier should be avoided since they require write ordering and/or sequential consistency.

Imagine needing to write all shared data straight through the cache. With test-and-set, only lock locations are written out explicitly. In not too many years, hardware may no longer support the software solutions because of the performance impact of doing so.
PROCESS SYNCHRONIZATION: Semaphores

PURPOSE: We want to be able to write more complex constructs and so need a language to do so. We thus define semaphores, which we assume are atomic operations:

WAIT ( S ):
    while ( S <= 0 );
    S = S - 1;

SIGNAL ( S ):
    S = S + 1;

As given here, these are not atomic as written in "macro code". We define these operations, however, to be atomic (protected by a hardware lock).

FORMAT:
    wait ( mutex );    /* Mutual exclusion: mutex initialized to 1 */
    CRITICAL SECTION
    signal ( mutex );
    REMAINDER
PROCESS SYNCHRONIZATION: Semaphores

Semaphores can also enforce ordering: with synch initialized to 0, statement 1 always executes before statement 2.

P1:
    statement 1;
    signal ( synch );

P2:
    wait ( synch );
    statement 2;
PROCESS SYNCHRONIZATION: Semaphores

We don't want to loop on busy, so we suspend the process instead:

typedef struct {
    int value;
    struct process *list;   /* linked list of PTBL entries waiting on S */
} SEMAPHORE;
PROCESS SYNCHRONIZATION: Semaphores

SEMAPHORE s;

wait(s) {
    s.value = s.value - 1;
    if ( s.value < 0 ) {
        add this process to s.list;
        block;
    }
}

signal(s) {
    s.value = s.value + 1;
    if ( s.value <= 0 ) {
        remove a process P from s.list;
        wakeup(P);
    }
}

• It's critical that these be atomic: in uniprocessors we can disable interrupts, but in multiprocessors other mechanisms for atomicity are needed.
• Popular incarnations of semaphores are as "event counts" and "lock managers". (We'll talk about these in the next chapter.)
PROCESS SYNCHRONIZATION: Semaphores

DEADLOCKS:
• May occur when two or more processes try to get the same multiple resources at the same time, each in a different order.

P1:
    wait(S);
    wait(Q);
    .....
    signal(S);
    signal(Q);

P2:
    wait(Q);
    wait(S);
    .....
    signal(Q);
    signal(S);
PROCESS Railways in the Andes;
SYNCHRONIZATION A Practical Problem
High in the Andes mountains, there are two circular railway lines. One line is in Peru, the
other in Bolivia. They share a common section of track where the lines cross a mountain
pass that lies on the international border (near Lake Titicaca?).
Unfortunately, the Peruvian and Bolivian trains occasionally collide when simultaneously entering the
common section of track (the mountain pass). The trouble is, alas, that the drivers of the two
trains are both blind and deaf, so they can neither see nor hear each other.
The two drivers agreed on the following method of preventing collisions. They set up a
large bowl at the entrance to the pass. Before entering the pass, a driver must stop his
train, walk over to the bowl, and reach into it to see if it contains a rock. If the bowl is
empty, the driver finds a rock and drops it in the bowl, indicating that his train is
entering the pass; once his train has cleared the pass, he must walk back to the bowl
and remove his rock, indicating that the pass is no longer being used. Finally, he walks
back to the train and continues down the line.
If a driver arriving at the pass finds a rock in the bowl, he leaves the rock there; he
repeatedly takes a siesta and rechecks the bowl until he finds it empty. Then he drops a
rock in the bowl and drives his train into the pass. A smart graduate from the University
of La Paz (Bolivia) claimed that subversive train schedules made up by Peruvian officials
could block the train forever.
Explain
The Bolivian driver just laughed and said that could not be true because it never
happened.
Explain
Explain
Following the crash, the graduate was called in as a consultant to ensure that no more
crashes would occur. He explained that the bowl was being used in the wrong way. The
Bolivian driver must wait at the entry to the pass until the bowl is empty, drive through
the pass and walk back to put a rock in the bowl. The Peruvian driver must wait at the
entry until the bowl contains a rock, drive through the pass and walk back to remove the
rock from the bowl. Sure enough, his method prevented crashes.
Prior to this arrangement, the Peruvian train ran twice a day and the Bolivian train ran
once a day. The Peruvians were very unhappy with the new arrangement.
Explain
The graduate was called in again and was told to prevent crashes while avoiding the
problem of his previous method. He suggested that two bowls be used, one for each
driver. When a driver reaches the entry, he first drops a rock in his bowl, then checks the
other bowl to see if it is empty. If so, he drives his train through the pass, stops, and
walks back to remove his rock. But if he finds a rock in the other bowl, he goes back to
his bowl and removes his rock. Then he takes a siesta, again drops a rock in his bowl and
re-checks the other bowl, and so on, until he finds the other bowl empty. This method
worked fine until late in May, when the two trains were simultaneously blocked at the
entry for many siestas.
Explain
PROCESS SYNCHRONIZATION: Some Interesting Problems

THE BOUNDED BUFFER (PRODUCER/CONSUMER) PROBLEM:

This is the same producer/consumer problem as before, but now we'll do it with signals and waits. Remember: a wait decreases its argument and a signal increases its argument.

producer:
do {
    /* produce an item in nextp */
    wait (empty);        /* wait for a free slot */
    wait (mutex);        /* buffer guard */
    /* add nextp to buffer */
    signal (mutex);
    signal (full);
} while (TRUE);

consumer:
do {
    wait (full);         /* wait for an item */
    wait (mutex);        /* buffer guard */
    /* remove an item from buffer to nextc */
    signal (mutex);
    signal (empty);
    /* consume the item in nextc */
} while (TRUE);
PROCESS SYNCHRONIZATION: Some Interesting Problems

THE READERS/WRITERS PROBLEM:

This is like the producer/consumer problem except that we can now have many concurrent readers and one exclusive writer.

Locks are shared (for the readers) and exclusive (for the writer).

Two variants of the problem:
• No reader is kept waiting unless a writer holds the lock (the readers have precedence).
• If a writer is waiting for access, no new reader gains access (the writer has precedence).

(NOTE: starvation can occur on either of these rules if they are followed rigorously.)
PROCESS SYNCHRONIZATION: Some Interesting Problems

THE READERS/WRITERS PROBLEM:

Shared data:

BINARY_SEMAPHORE wrt = 1;
BINARY_SEMAPHORE mutex = 1;
int readcount = 0;

Writer:
do {
    wait( wrt );
    /* writing is performed */
    signal( wrt );
} while(TRUE);

Reader:
do {
    wait( mutex );                      /* Allow 1 reader in entry */
    readcount = readcount + 1;
    if (readcount == 1) wait( wrt );    /* 1st reader locks writer */
    signal( mutex );
    /* reading is performed */
    wait( mutex );
    readcount = readcount - 1;
    if (readcount == 0) signal( wrt );  /* last reader frees writer */
    signal( mutex );
} while(TRUE);
PROCESS SYNCHRONIZATION: Some Interesting Problems

THE DINING PHILOSOPHERS PROBLEM:

5 philosophers with 5 chopsticks sit around a circular table. They each want to eat at random times and must pick up the chopsticks on their right and on their left.
PROCESS SYNCHRONIZATION: Critical Regions

A high-level synchronization construct implemented in a programming language.

A shared variable v of type T is declared as:
    var v: shared T

Variable v is accessed only inside a statement:
    region v when B do S
where B is a Boolean expression.

While statement S is being executed, no other process can access variable v. Regions referring to the same shared variable exclude each other in time.

When a process tries to execute the region statement, the Boolean expression B is evaluated. If B is true, statement S is executed. If it is false, the process is delayed until B is true and no other process is in the region associated with v.
PROCESS SYNCHRONIZATION: Critical Regions

EXAMPLE: Bounded Buffer

Producer process inserts nextp into the shared buffer:

    region buffer when (count < n) {
        pool[in] = nextp;
        in = (in + 1) % n;
        count++;
    }

Consumer process removes an item from the shared buffer and puts it in nextc:

    region buffer when (count > 0) {
        nextc = pool[out];
        out = (out + 1) % n;
        count--;
    }
PROCESS SYNCHRONIZATION: Monitors

A high-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes.

monitor monitor-name
{
    shared variable declarations

    procedure body P1 (…) {
        ...
    }
    procedure body P2 (…) {
        ...
    }
    procedure body Pn (…) {
        ...
    }
    {
        initialization code
    }
}
PROCESS SYNCHRONIZATION: Monitors
• To allow a process to wait within the monitor, a condition variable must
be declared, as
condition x, y;
• Condition variable can only be used with the operations wait and signal.
– The operation
x.wait();
means that the process invoking this operation is suspended until
another process invokes
x.signal();
– The x.signal operation resumes exactly one suspended process. If no
process is suspended, then the signal operation has no effect.
PROCESS SYNCHRONIZATION: Monitors

[Figure: schematic view of a monitor]
PROCESS SYNCHRONIZATION: Monitors

[Figure: monitor with condition variables]
PROCESS SYNCHRONIZATION: Monitors - Dining Philosophers Example

monitor dp {
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = thinking;
        // test left & right neighbors
        test((i+4) % 5);
        test((i+1) % 5);
    }

    void test(int i) {
        if ( (state[(i + 4) % 5] != eating) &&
             (state[i] == hungry) &&
             (state[(i + 1) % 5] != eating) ) {
            state[i] = eating;
            self[i].signal();
        }
    }

    initializationCode() {
        for ( int i = 0; i < 5; i++ )
            state[i] = thinking;
    }
}
PROCESS SYNCHRONIZATION: How Is This Really Used?

Windows XP Synchronization
• Uses interrupt masks to protect access to global resources on uniprocessor systems.
• Dispatcher objects may also provide events. An event acts much like a condition variable.
PROCESS SYNCHRONIZATION: Wrap-up

Synchronization IS used in real life. Generally programmers don't use the really primitive hardware locks, but use higher-level mechanisms as we've demonstrated.
Priority Inversion
• A challenging scheduling problem arises when a high-priority
process gets blocked waiting for a resource that is currently held by
a low-priority process.
• If the low-priority process gets pre-empted by one or more
medium-priority processes, then the high-priority process is
essentially made to wait for the medium priority processes to finish
before the low-priority process can release the needed resource,
causing a priority inversion.
• If there are enough medium-priority processes, then the high-
priority process may be forced to wait for a very long time.
• One solution is a priority-inheritance protocol, in which a low-
priority process holding a resource for which a high-priority process
is waiting will temporarily inherit the high priority from the waiting
process.
• This prevents the medium-priority processes from preempting the low-priority process until it releases the resource, avoiding the priority inversion problem.
Priority inheritance protocol
Simple priority inheritance
• Task L receives control of the processor and begins executing.
– The task makes a request for Resource A.
• Task L is granted ownership of Resource A and enters its critical
region.
• Task L is preempted by Task H, a higher-priority task.
– Task H begins executing and requests ownership of Resource A, which is
owned by Task L.
• Task L is hoisted to a priority above Task H and resumes executing its
critical region.
• Task L releases Resource A and is lowered back to its original priority.
– Task H acquires ownership of Resource A and begins executing its critical
region.
• Task H releases Resource A and continues executing normally.
• Task H finishes executing and Task L continues executing normally.
• Task L finishes executing.
Three-task, one-resource priority inheritance
• Task 3 gets control of the processor and begins executing.
– The task requests ownership of Resource A.
• Task 3 acquires Resource A and begins executing its critical region.
• Task 3 is preempted by Task 2, a higher-priority task.
– Task 2 begins executing normally and requests Resource A, which is owned by
Task 3.
• Task 3 is hoisted to a priority above Task 2 and resumes executing its critical
region.
• Task 3 is preempted by Task 1, a higher-priority task.
– Task 1 begins executing and requests Resource A, which is owned by Task 3.
• Task 3 is hoisted to a priority above Task 1.
– Task 3 resumes executing its critical region.
• Task 3 releases Resource A and is lowered back to its original priority.
– Task 1 acquires ownership of Resource A and begins executing its critical region.
• Task 1 releases Resource A and continues executing normally.
• Task 1 finishes executing. Task 2 acquires Resource A and begins executing its
critical region.
• Task 2 releases Resource A and continues executing normally.
• Task 2 finishes executing. Task 3 resumes and continues executing normally.
• Task 3 finishes executing.
Three-task, two-resource priority inheritance
• Task 3 is given control of the processor and begins executing. The task requests
Resource A.
• Task 3 acquires ownership of Resource A and begins executing its critical region.
• Task 3 is preempted by Task 2, a higher-priority task. Task 2 requests ownership of
Resource B.
• Task 2 is granted ownership of Resource B and begins executing its critical region.
– The task requests ownership of Resource A, which is owned by Task 3.
• Task 3 is hoisted to a priority above Task 2 and resumes executing its critical region.
• Task 3 is preempted by Task 1, a higher-priority task.
– Task 1 requests Resource B, which is owned by Task 2.
• Task 2 is hoisted to a priority above Task 1. However, Task 2 still can't execute because it
must wait for Resource A, which is owned by Task 3.
– Task 3 is hoisted to a priority above Task 2 and continues executing its critical region.
• Task 3 releases Resource A and is lowered back to its original priority.
– Task 2 acquires ownership of Resource A and resumes executing its critical region.
• Task 2 releases Resource A and then releases Resource B. The task is lowered back to its
original priority.
– Task 1 acquires ownership of Resource B and begins executing its critical region.
• Task 1 releases Resource B and continues executing normally.
• Task 1 finishes executing. Task 2 resumes and continues executing normally.
• Task 2 finishes executing. Task 3 resumes and continues executing normally.
• Task 3 finishes executing.