CSC 553 Operating Systems: Lecture 4 - Concurrency: Mutual Exclusion and Synchronization

This document discusses concurrency in operating systems. It covers multiple contexts where concurrency arises, including multiple applications, structured applications, and operating system structure. The key difficulties of concurrency are sharing global resources, race conditions, and making results deterministic. The document also discusses mutual exclusion, deadlock, starvation, and hardware and software mechanisms for synchronization like semaphores, mutexes, and condition variables.


CSC 553 Operating Systems

Lecture 4 - Concurrency: Mutual Exclusion and Synchronization

Multiple Processes
• Operating System design is concerned
with the management of processes and
threads:
• Multiprogramming
• Multiprocessing
• Distributed Processing
Concurrency
Arises in Three Different Contexts:
• Multiple Applications
– invented to allow processing time to be shared
among active applications
• Structured Applications
– extension of modular design and structured
programming
• Operating System Structure
– the OS itself is implemented as a set of processes or threads

Key Terms Related to Concurrency
Principles of Concurrency
• Interleaving and overlapping
• can be viewed as examples of concurrent processing
• both present the same problems

• Uniprocessor – the relative speed of execution of processes cannot be predicted
• depends on activities of other processes
• the way the OS handles interrupts
• scheduling policies of the OS

Difficulties of Concurrency
• Sharing of global resources
• Difficult for the OS to manage the
allocation of resources optimally
• Difficult to locate programming errors as
results are not deterministic and
reproducible
Race Condition
• Occurs when multiple processes or threads
read and write data items
• The final result depends on the order of
execution
– the “loser” of the race is the process that
updates last and will determine the final
value of the variable
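
For instance, a minimal hypothetical sketch (not from the slides) of two threads racing on a shared counter:

/* Hypothetical illustration: two concurrent activities both run increment()
   on a shared counter. The read-modify-write sequence is not atomic, so the
   interleaving decides the final value; the "loser" is whichever one stores last. */
int count = 0;                /* shared variable */

void increment(void)
{
    int temp = count;         /* read the shared value                  */
    temp = temp + 1;          /* modify a private copy                  */
    count = temp;             /* write back - may overwrite an update   */
                              /* made in the meantime by the other side */
}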

Operating System Concerns


• Design and management issues raised by the
existence of concurrency:
• The OS must:
– be able to keep track of various processes
– allocate and de-allocate resources for each active
process
– protect the data and physical resources of each process
against interference by other processes
– ensure that the processes and outputs are independent
of the processing speed
Process Interaction

Degree of Awareness: Processes unaware of each other
Relationship: Competition
Influence that one process has on the other:
• Results of one process independent of the action of others
• Timing of process may be affected
Potential Control Problems:
• Mutual exclusion
• Deadlock (renewable resource)
• Starvation

Degree of Awareness: Processes indirectly aware of each other (e.g., shared object)
Relationship: Cooperation by sharing
Influence that one process has on the other:
• Results of one process may depend on information obtained from others
• Timing of process may be affected
Potential Control Problems:
• Mutual exclusion
• Deadlock (renewable resource)
• Starvation
• Data coherence

Degree of Awareness: Processes directly aware of each other (have communication primitives available to them)
Relationship: Cooperation by communication
Influence that one process has on the other:
• Results of one process may depend on information obtained from others
• Timing of process may be affected
Potential Control Problems:
• Deadlock (consumable resource)
• Starvation

Resource Competition
• Concurrent processes come into conflict when
they are competing for use of the same resource
– for example: I/O devices, memory, processor time,
clock
– In the case of competing processes three control
problems must be faced
• the need for mutual exclusion
• deadlock
• starvation
Illustration of Mutual Exclusion

/* PROCESS 1 */
void P1
{
    while (true) {
        /* preceding code */;
        entercritical (Ra);
        /* critical section */;
        exitcritical (Ra);
        /* following code */;
    }
}

/* PROCESS 2 */ through /* PROCESS n */ follow the same pattern: each process
brackets its critical section for resource Ra with entercritical (Ra) and
exitcritical (Ra).

Requirements for Mutual Exclusion

• Mutual exclusion must be enforced: only one process at a time is allowed into its critical section for a given resource
• A process that halts must do so without interfering with other processes
• No deadlock or starvation
• A process must not be denied access to a critical section when there is no other process using it
• No assumptions are made about relative process speeds or number of processes
• A process remains inside its critical section for a finite time only
Mutual Exclusion: Hardware Support

– Interrupt Disabling
– uniprocessor system
– disabling interrupts guarantees mutual exclusion
– Disadvantages:
– the efficiency of execution could be noticeably
degraded
– this approach will not work in a multiprocessor
architecture
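
A minimal sketch of this approach, in the pseudocode style of the slides (the disable/enable primitives are placeholders):

while (true) {
    /* disable interrupts */;
    /* critical section */;
    /* enable interrupts */;
    /* remainder */;
}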

Mutual Exclusion: Hardware Support
• Compare & Swap Instruction
– also called a “compare and exchange
instruction”
– a compare is made between a memory value
and a test value
– if the values are the same a swap occurs
– carried out atomically
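
The semantics are commonly sketched as the following function, understood to execute atomically as a single instruction (this definition follows the usual textbook form and is not reproduced from the slide itself):

int compare_and_swap (int *word, int testval, int newval)
{
    int oldval;
    oldval = *word;              /* read the memory value                 */
    if (oldval == testval)       /* compare against the test value        */
        *word = newval;          /* if equal, swap in the new value       */
    return oldval;               /* caller inspects the returned old value */
}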
Hardware Support for Mutual Exclusion

(a) Compare and swap instruction

/* program mutualexclusion */
const int n = /* number of processes */;
int bolt;
void P(int i)
{
    while (true) {
        while (compare_and_swap(&bolt, 0, 1) == 1)
            /* do nothing */;
        /* critical section */;
        bolt = 0;
        /* remainder */;
    }
}
void main()
{
    bolt = 0;
    parbegin (P(1), P(2), . . . , P(n));
}

(b) Exchange instruction

/* program mutualexclusion */
int const n = /* number of processes */;
int bolt;
void P(int i)
{
    while (true) {
        int keyi = 1;
        do exchange (&keyi, &bolt)
        while (keyi != 0);
        /* critical section */;
        bolt = 0;
        /* remainder */;
    }
}
void main()
{
    bolt = 0;
    parbegin (P(1), P(2), . . ., P(n));
}

Special Machine Instructions: Advantages
• Applicable to any number of processes on
either a single processor or multiple
processors sharing main memory
• Simple and easy to verify
• It can be used to support multiple critical
sections; each critical section can be defined
by its own variable
Special Machine Instructions:
Disadvantages
• Busy-waiting is employed, thus while a
process is waiting for access to a critical
section it continues to consume processor
time
• Starvation is possible when a process leaves
a critical section and more than one process
is waiting
• Deadlock is possible

Common Concurrency Mechanisms

Semaphore: An integer value used for signaling among processes. Only three operations may be performed on a semaphore, all of which are atomic: initialize, decrement, and increment. The decrement operation may result in the blocking of a process, and the increment operation may result in the unblocking of a process. Also known as a counting semaphore or a general semaphore.

Binary Semaphore: A semaphore that takes on only the values 0 and 1.

Mutex: Similar to a binary semaphore. A key difference between the two is that the process that locks the mutex (sets the value to zero) must be the one to unlock it (sets the value to 1).

Condition Variable: A data type that is used to block a process or thread until a particular condition is true.

Monitor: A programming language construct that encapsulates variables, access procedures, and initialization code within an abstract data type. The monitor's variables may only be accessed via its access procedures and only one process may be actively accessing the monitor at any one time. The access procedures are critical sections. A monitor may have a queue of processes that are waiting to access it.

Event Flags: A memory word used as a synchronization mechanism. Application code may associate a different event with each bit in a flag. A thread can wait for either a single event or a combination of events by checking one or multiple bits in the corresponding flag. The thread is blocked until all of the required bits are set (AND) or until at least one of the bits is set (OR).

Mailboxes/Messages: A means for two processes to exchange information and that may be used for synchronization.

Spinlocks: Mutual exclusion mechanism in which a process executes in an infinite loop waiting for the value of a lock variable to indicate availability.
Semaphore
• A variable that has an integer value upon which only three operations are defined:
1. May be initialized to a nonnegative integer value
2. The semWait operation decrements the value
3. The semSignal operation increments the value
• There is no way to inspect or manipulate semaphores other than these three operations

Consequences

• There is no way to know before a process decrements a semaphore whether it will block or not
• There is no way to know which process will continue immediately on a uniprocessor system when two processes are running concurrently
• You don't know whether another process is waiting, so the number of unblocked processes may be zero or one
A Definition of Semaphore Primitives
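
The slide's figure is not included in this extract; a sketch of the counting-semaphore primitives, consistent with the semWait/semSignal description above and with the implementations shown later, is:

struct semaphore {
    int count;
    queueType queue;
};
void semWait(semaphore s)
{
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process */;
    }
}
void semSignal(semaphore s)
{
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */;
        /* place process P on ready list */;
    }
}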

A Definition of Binary Semaphore

struct binary_semaphore {
    enum {zero, one} value;
    queueType queue;
};
void semWaitB(binary_semaphore s)
{
    if (s.value == one)
        s.value = zero;
    else {
        /* place this process in s.queue */;
        /* block this process */;
    }
}
void semSignalB(semaphore s)
{
    if (s.queue is empty())
        s.value = one;
    else {
        /* remove a process P from s.queue */;
        /* place process P on ready list */;
    }
}
Strong/Weak Semaphores
A queue is used to hold processes waiting on
the semaphore
• Strong Semaphores - the process that has been
blocked the longest is released from the queue first
(FIFO)
• Weak Semaphores - the order in which processes
are removed from the queue is not specified

[Figure 5.5 Example of Semaphore Mechanism: a sequence of snapshots in which processes A, B, C, and D issue semWait and semSignal operations on a shared semaphore s, showing how processes move among the processor, the ready queue, and the semaphore's blocked queue as the value of s changes.]


Mutual Exclusion Using Semaphores

[Figure 5.7 Processes Accessing Shared Data Protected by a Semaphore: processes A, B, and C each call semWait(lock) before their critical region and semSignal(lock) after it. As B and C request entry while A holds the lock, the value of semaphore lock drops to -1 and -2 and they block; each semSignal(lock) releases one blocked process. Note that normal execution can proceed in parallel but that critical regions are serialized.]
Producer/Consumer Problem

General Statement:
• one or more producers are generating data and placing these in a buffer
• a single consumer is taking items out of the buffer one at a time
• only one producer or consumer may access the buffer at any one time

The Problem:
• ensure that the producer can't add data into a full buffer and the consumer can't remove data from an empty buffer

[Figure 5.8 Infinite Buffer for the Producer/Consumer Problem: buffer slots b[1], b[2], b[3], ... with an "in" pointer marking where the producer appends the next item and an "out" pointer marking where the consumer takes the next item; the shaded area between out and in is the occupied portion of the buffer.]


An Incorrect Solution to the Infinite-Buffer Producer/Consumer Problem Using Binary Semaphores

/* program producerconsumer */
int n;
binary_semaphore s = 1, delay = 0;
void producer()
{
    while (true) {
        produce();
        semWaitB(s);
        append();
        n++;
        if (n==1) semSignalB(delay);
        semSignalB(s);
    }
}
void consumer()
{
    semWaitB(delay);
    while (true) {
        semWaitB(s);
        take();
        n--;
        semSignalB(s);
        consume();
        if (n==0) semWaitB(delay);
    }
}
void main()
{
    n = 0;
    parbegin (producer, consumer);
}

Possible Scenario for the Program of the Previous Slide

     Producer                        Consumer                          s    n    Delay
 1                                                                     1    0    0
 2   semWaitB(s)                                                       0    0    0
 3   n++                                                               0    1    0
 4   if (n==1) (semSignalB(delay))                                     0    1    1
 5   semSignalB(s)                                                     1    1    1
 6                                   semWaitB(delay)                   1    1    0
 7                                   semWaitB(s)                       0    1    0
 8                                   n--                               0    0    0
 9                                   semSignalB(s)                     1    0    0
10   semWaitB(s)                                                       0    0    0
11   n++                                                               0    1    0
12   if (n==1) (semSignalB(delay))                                     0    1    1
13   semSignalB(s)                                                     1    1    1
14                                   if (n==0) (semWaitB(delay))       1    1    1
15                                   semWaitB(s)                       0    1    1
16                                   n--                               0    0    1
17                                   semSignalB(s)                     1    0    1
18                                   if (n==0) (semWaitB(delay))       1    0    0
19                                   semWaitB(s)                       0    0    0
20                                   n--                               0   -1    0
21                                   semSignalB(s)                     1   -1    0

At line 14 the consumer checks n outside the critical section and sees the value the producer has just incremented, so it does not wait; after it consumes that item, the leftover signal on delay (from line 12) lets it pass the check at line 18 without blocking, and at lines 19-21 it takes from an empty buffer, leaving n = -1.
A Correct Solution to the Infinite-Buffer Producer/Consumer Problem Using Binary Semaphores
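
The slide's code is not included in this extract. A sketch of the standard fix follows: the consumer copies n into a local variable while still inside the critical section and tests that copy afterwards, so the decision to wait on delay cannot be invalidated by a concurrent producer. The local variable name m is an assumption here.

/* program producerconsumer */
int n;
binary_semaphore s = 1, delay = 0;
void producer()
{
    while (true) {
        produce();
        semWaitB(s);
        append();
        n++;
        if (n==1) semSignalB(delay);
        semSignalB(s);
    }
}
void consumer()
{
    int m;                        /* a local variable */
    semWaitB(delay);
    while (true) {
        semWaitB(s);
        take();
        n--;
        m = n;                    /* snapshot of n taken inside the critical section */
        semSignalB(s);
        consume();
        if (m==0) semWaitB(delay);
    }
}
void main()
{
    n = 0;
    parbegin (producer, consumer);
}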

A Solution to the Infinite-Buffer Producer/Consumer Problem Using Semaphores

/* program producerconsumer */
semaphore n = 0, s = 1;
void producer()
{
    while (true) {
        produce();
        semWait(s);
        append();
        semSignal(s);
        semSignal(n);
    }
}
void consumer()
{
    while (true) {
        semWait(n);
        semWait(s);
        take();
        semSignal(s);
        consume();
    }
}
void main()
{
    parbegin (producer, consumer);
}
[Figure 5.12 Finite Circular Buffer for the Producer/Consumer Problem: buffer slots b[1] through b[n] with "in" and "out" pointers that wrap around; in panel (a) out lies behind in, while in panel (b) in has wrapped around behind out.]

A Solution to the Bounded-Buffer Producer/Consumer Problem Using Semaphores
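
The slide's code is not included in this extract; a sketch of the standard bounded-buffer solution adds a third semaphore, initialized to the buffer size, that counts empty slots (the names sizeofbuffer and e are assumptions):

/* program boundedbuffer */
const int sizeofbuffer = /* buffer size */;
semaphore s = 1, n = 0, e = sizeofbuffer;
void producer()
{
    while (true) {
        produce();
        semWait(e);          /* wait for an empty slot       */
        semWait(s);          /* mutual exclusion on buffer   */
        append();
        semSignal(s);
        semSignal(n);        /* one more item available      */
    }
}
void consumer()
{
    while (true) {
        semWait(n);          /* wait for an available item   */
        semWait(s);
        take();
        semSignal(s);
        semSignal(e);        /* one more empty slot          */
    }
}
void main()
{
    parbegin (producer, consumer);
}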
Implementation of Semaphores
• Imperative that the semWait and
semSignal operations be implemented as
atomic primitives
• Can be implemented in hardware or
firmware
• Software schemes such as Dekker’s or
Peterson’s algorithms can be used
• Use one of the hardware-supported
schemes for mutual exclusion

Two Possible Implementations of Semaphores

(a) Compare and Swap Instruction

semWait(s)
{
    while (compare_and_swap(s.flag, 0, 1) == 1)
        /* do nothing */;
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process (must also set s.flag to 0) */;
    }
    s.flag = 0;
}

semSignal(s)
{
    while (compare_and_swap(s.flag, 0, 1) == 1)
        /* do nothing */;
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */;
        /* place process P on ready list */;
    }
    s.flag = 0;
}

(b) Interrupts

semWait(s)
{
    inhibit interrupts;
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process and allow interrupts */;
    }
    else
        allow interrupts;
}

semSignal(s)
{
    inhibit interrupts;
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */;
        /* place process P on ready list */;
    }
    allow interrupts;
}


Monitors
• Programming language construct that provides
equivalent functionality to that of semaphores and
is easier to control
• Implemented in a number of programming
languages
– including Concurrent Pascal, Pascal-Plus,
Modula-2, Modula-3, and Java
• Has also been implemented as a program library
• Software module consisting of one or more
procedures, an initialization sequence, and local
data

Monitor Characteristics

• Local data variables are accessible only by the monitor's procedures and not by any external procedure
• Process enters monitor by invoking one of its procedures
• Only one process may be executing in the monitor at a time


Synchronization
• Achieved by the use of condition variables that
are contained within the monitor and accessible
only within the monitor
– Condition variables are operated on by two
functions:
• cwait(c): suspend execution of the calling process
on condition c
• csignal(c): resume execution of some process
blocked after a cwait on the same condition

[Figure 5.15 Structure of a Monitor: entering processes wait in a queue at the monitor entrance; the monitor itself contains local data, condition variables, procedures 1 through k, and initialization code. In the monitor waiting area, each condition variable c1 ... cn has its own queue of processes blocked by cwait(ci), and an urgent queue holds processes that have issued csignal; processes leave through the exit.]


A Solution to the Bounded-Buffer Producer/Consumer Problem Using a Monitor
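
The slide's figure is not included in this extract. A sketch of the classic Hoare-style monitor (using cwait/csignal as defined above) is given below; the Mesa-style version of append and take, using cnotify, follows next. The monitor declaration syntax is an assumption.

/* program producerconsumer */
monitor boundedbuffer;
char buffer[N];                        /* space for N items          */
int nextin, nextout;                   /* buffer pointers            */
int count;                             /* number of items in buffer  */
cond notfull, notempty;                /* condition variables        */

void append (char x)
{
    if (count == N) cwait(notfull);    /* buffer is full; avoid overflow   */
    buffer[nextin] = x;
    nextin = (nextin + 1) % N;
    count++;
    csignal(notempty);                 /* resume any waiting consumer      */
}
void take (char x)
{
    if (count == 0) cwait(notempty);   /* buffer is empty; avoid underflow */
    x = buffer[nextout];
    nextout = (nextout + 1) % N;
    count--;
    csignal(notfull);                  /* resume any waiting producer      */
}
{                                      /* monitor body (initialization)    */
    nextin = 0; nextout = 0; count = 0;
}

void producer()
{
    char x;
    while (true) {
        produce(x);
        append(x);                     /* monitor enforces mutual exclusion */
    }
}
void consumer()
{
    char x;
    while (true) {
        take(x);
        consume(x);
    }
}
void main()
{
    parbegin (producer, consumer);
}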

void append (char x)
{
    while (count == N) cwait(notfull);   /* buffer is full; avoid overflow   */
    buffer[nextin] = x;
    nextin = (nextin + 1) % N;
    count++;                             /* one more item in buffer          */
    cnotify(notempty);                   /* notify any waiting consumer      */
}

void take (char x)
{
    while (count == 0) cwait(notempty);  /* buffer is empty; avoid underflow */
    x = buffer[nextout];
    nextout = (nextout + 1) % N;
    count--;                             /* one fewer item in buffer         */
    cnotify(notfull);                    /* notify any waiting producer      */
}

Figure 5.17 Bounded Buffer Monitor Code for Mesa Monitor


Message Passing
• When processes interact with one another two
fundamental requirements must be satisfied:
– Synchronization - to enforce mutual exclusion
– Communication – to exchange information
• Message Passing is one approach to providing
both of these functions
• works with distributed systems and shared memory
multiprocessor and uniprocessor systems

Message Passing
• The actual function is normally provided in the
form of a pair of primitives:
send (destination, message)
receive (source, message)
• A process sends information in the form of a
message to another process designated by a
destination
• A process receives information by executing the
receive primitive, indicating the source and the
message
Design Characteristics of Message Systems for Interprocess Communication and Synchronization

Synchronization
    Send
        blocking
        nonblocking
    Receive
        blocking
        nonblocking
        test for arrival

Addressing
    Direct
        send
        receive
            explicit
            implicit
    Indirect
        static
        dynamic
        ownership

Format
    Content
    Length
        fixed
        variable

Queueing Discipline
    FIFO
    Priority

Synchronization
Blocking Send, Blocking Receive
• Both sender and receiver are blocked until
the message is delivered
• Sometimes referred to as a rendezvous
• Allows for tight synchronization between
processes

Nonblocking Send
• Nonblocking send, blocking receive
– sender continues on but receiver is blocked until the
requested message arrives
– most useful combination
– sends one or more messages to a variety of destinations
as quickly as possible
– example -- a service process that exists to provide a
service or resource to other processes
• Nonblocking send, nonblocking receive
– neither party is required to wait
Addressing
• Schemes for specifying processes in
send and receive primitives fall into
two categories:

Direct Indirect
addressing addressing

Direct Addressing
• Send primitive includes a specific identifier of
the destination process
• Receive primitive can be handled in one of two
ways:
• require that the process explicitly designate a
sending process
• effective for cooperating concurrent processes
• implicit addressing
• source parameter of the receive primitive
possesses a value returned when the receive
operation has been performed
Indirect Addressing

• Messages are sent to a shared data structure consisting of queues that can temporarily hold messages
• Queues are referred to as mailboxes
• One process sends a message to the mailbox and the other process picks up the message from the mailbox
• Allows for greater flexibility in the use of messages

[Figure 5.18 Indirect Process Communication: (a) one to one - a single sender S1 and a single receiver R1 share a mailbox; (b) many to one - senders S1 ... Sn send to a port read by R1; (c) one to many - S1 sends to a mailbox read by receivers R1 ... Rm; (d) many to many - senders S1 ... Sn and receivers R1 ... Rm share a mailbox.]


Header:
    Message Type
    Destination ID
    Source ID
    Message Length
    Control Information
Body:
    Message Contents

Figure 5.19 General Message Format

Mutual Exclusion Using Messages
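
The slide's figure is not included in this extract; a sketch of the standard scheme uses a mailbox, here called box, initialized with a single token message, so that only the process currently holding the token can be in its critical section (the names box and null are assumptions):

/* program mutualexclusion */
const int n = /* number of processes */;
void P(int i)
{
    message msg;
    while (true) {
        receive (box, msg);          /* take the token; block if it is not available */
        /* critical section */;
        send (box, msg);             /* return the token                             */
        /* remainder */;
    }
}
void main()
{
    create_mailbox (box);
    send (box, null);                /* one token = at most one process in its critical section */
    parbegin (P(1), P(2), . . ., P(n));
}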


A Solution to the Bounded-Buffer Producer/Consumer Problem Using Messages
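
The slide's figure is not included in this extract; a sketch of the usual message-based bounded buffer uses two mailboxes, mayproduce (holding one null token per free slot) and mayconsume (holding the data items). The mailbox names are assumptions.

const int capacity = /* buffering capacity */;
const message null = /* empty message */;
void producer()
{
    message pmsg;
    while (true) {
        receive (mayproduce, pmsg);   /* wait for a free slot         */
        pmsg = produce();
        send (mayconsume, pmsg);      /* pass the item to the consumer */
    }
}
void consumer()
{
    message cmsg;
    while (true) {
        receive (mayconsume, cmsg);   /* wait for an item             */
        consume (cmsg);
        send (mayproduce, null);      /* free one slot                */
    }
}
void main()
{
    create_mailbox (mayproduce);
    create_mailbox (mayconsume);
    for (int i = 1; i <= capacity; i++)
        send (mayproduce, null);      /* initially all slots are free */
    parbegin (producer, consumer);
}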

Readers/Writers Problem
• A data area is shared among many processes
• some processes only read the data area (readers) and some only write to the data area (writers)
• Conditions that must be satisfied:
1. any number of readers may simultaneously read
the file
2. only one writer at a time may write to the file
3. if a writer is writing to the file, no reader may
read it
A Solution to the Readers/Writers Problem Using Semaphores: Readers Have Priority
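
The slide's code is not included in this extract; a sketch of the standard readers-have-priority program follows, where readcount counts active readers, x protects readcount, and wsem excludes writers (the variable names are assumptions):

/* program readersandwriters */
int readcount;
semaphore x = 1, wsem = 1;
void reader()
{
    while (true) {
        semWait(x);
        readcount++;
        if (readcount == 1) semWait(wsem);   /* first reader locks out writers */
        semSignal(x);
        READUNIT();
        semWait(x);
        readcount--;
        if (readcount == 0) semSignal(wsem); /* last reader lets writers in    */
        semSignal(x);
    }
}
void writer()
{
    while (true) {
        semWait(wsem);
        WRITEUNIT();
        semSignal(wsem);
    }
}
void main()
{
    readcount = 0;
    parbegin (reader, writer);
}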

State of the Process Queues for the Writers-Have-Priority Program (next slide)

Readers only in the system:
• wsem set
• no queues

Writers only in the system:
• wsem and rsem set
• writers queue on wsem

Both readers and writers with read first:
• wsem set by reader
• rsem set by writer
• all writers queue on wsem
• one reader queues on rsem
• other readers queue on z

Both readers and writers with write first:
• wsem set by writer
• rsem set by writer
• writers queue on wsem
• one reader queues on rsem
• other readers queue on z
A Solution to the Readers/Writers Problem Using Semaphores: Writers Have Priority
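
The slide's code is not included in this extract; a sketch of the standard writers-have-priority program, which uses the semaphores wsem, rsem, x, y, and z referred to in the queue-state table above, follows (variable names follow common textbook usage and are assumptions):

/* program readersandwriters */
int readcount, writecount;
semaphore x = 1, y = 1, z = 1, wsem = 1, rsem = 1;
void reader()
{
    while (true) {
        semWait(z);                              /* at most one reader queues on rsem      */
        semWait(rsem);                           /* blocked here while writers are present */
        semWait(x);
        readcount++;
        if (readcount == 1) semWait(wsem);
        semSignal(x);
        semSignal(rsem);
        semSignal(z);
        READUNIT();
        semWait(x);
        readcount--;
        if (readcount == 0) semSignal(wsem);
        semSignal(x);
    }
}
void writer()
{
    while (true) {
        semWait(y);
        writecount++;
        if (writecount == 1) semWait(rsem);      /* first writer locks out new readers */
        semSignal(y);
        semWait(wsem);
        WRITEUNIT();
        semSignal(wsem);
        semWait(y);
        writecount--;
        if (writecount == 0) semSignal(rsem);
        semSignal(y);
    }
}
void main()
{
    readcount = writecount = 0;
    parbegin (reader, writer);
}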

void reader(int i)
{
    message rmsg;
    while (true) {
        rmsg = i;
        send (readrequest, rmsg);
        receive (mbox[i], rmsg);
        READUNIT ();
        rmsg = i;
        send (finished, rmsg);
    }
}
void writer(int j)
{
    message rmsg;
    while (true) {
        rmsg = j;
        send (writerequest, rmsg);
        receive (mbox[j], rmsg);
        WRITEUNIT ();
        rmsg = j;
        send (finished, rmsg);
    }
}
void controller()
{
    while (true)
    {
        if (count > 0) {
            if (!empty (finished)) {
                receive (finished, msg);
                count++;
            }
            else if (!empty (writerequest)) {
                receive (writerequest, msg);
                writer_id = msg.id;
                count = count - 100;
            }
            else if (!empty (readrequest)) {
                receive (readrequest, msg);
                count--;
                send (msg.id, "OK");
            }
        }
        if (count == 0) {
            send (writer_id, "OK");
            receive (finished, msg);
            count = 100;
        }
        while (count < 0) {
            receive (finished, msg);
            count++;
        }
    }
}

A Solution to the Readers/Writers Problem Using Message Passing


Summary
• Principles of concurrency
– Race condition
– OS concerns
– Process interaction
– Requirements for mutual exclusion
• Mutual exclusion: hardware support
– Interrupt disabling
– Special machine instructions

Summary
• Semaphores
– Mutual exclusion
– Producer/consumer problem
– Implementation of semaphores
• Monitors
– Monitor with signal
– Alternate model of monitors with notify and broadcast
Summary
• Message passing
– Synchronization
– Addressing
– Message format
– Queueing discipline
– Mutual exclusion
• Readers/writers problem
– Readers have priority
– Writers have priority
