CH 5

Objectives

 To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data
 To present both software and hardware solutions of the critical-section problem
 To introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity
Background

 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
 Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.
Producer

while (true) {
    /* produce an item and put it in nextProduced */

    while (count == BUFFER_SIZE)
        ;   // do nothing: buffer is full

    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Consumer

while (true) {
    while (count == 0)
        ;   // do nothing: buffer is empty

    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;

    /* consume the item in nextConsumed */
}
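For reference, a sketch of the shared declarations these two fragments assume (the buffer capacity of 10 and the item type are illustrative; only count is the variable discussed on the following slides):

#define BUFFER_SIZE 10            /* illustrative capacity */

typedef int item;                 /* illustrative item type */

item buffer[BUFFER_SIZE];         /* circular buffer shared by producer and consumer */
int in = 0;                       /* index of the next free slot (used by the producer) */
int out = 0;                      /* index of the next full slot (used by the consumer) */
int count = 0;                    /* number of full buffers; updated by BOTH processes */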
Race Condition
 count++ could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

 count-- could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2

 Consider this execution interleaving with "count = 5" initially:

    S0: producer execute register1 = count             {register1 = 5}
    S1: producer execute register1 = register1 + 1     {register1 = 6}
    S2: consumer execute register2 = count             {register2 = 5}
    S3: consumer execute register2 = register2 - 1     {register2 = 4}
    S4: producer execute count = register1             {count = 6}
    S5: consumer execute count = register2             {count = 4}
Race Condition
Notice that we have arrived at the incorrect state "count == 4", indicating that four buffers are full, when, in fact, five buffers are full. If we reversed the order of the statements at S4 and S5, we would arrive at the incorrect state "count == 6".
 We would arrive at this incorrect state because we allowed both processes to manipulate the variable count concurrently. A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition.
 To guard against the race condition above, we need to ensure that only one process at a time can be manipulating the variable count. To make such a guarantee, we require that the processes be synchronized in some way.
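The effect is easy to reproduce. Below is a minimal sketch in C with POSIX threads (the iteration count is an arbitrary assumption): two unsynchronized threads repeatedly execute count++ and count--, and the final value is usually not the expected 0.

/* race.c -- compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int count = 0;                        /* shared, unprotected */

void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        count++;                      /* load, add, store: not atomic */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        count--;                      /* interleaves with the increments */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("count = %d (expected 0)\n", count);   /* often nonzero */
    return 0;
}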
Critical Section Problem
 Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on.
 The important feature of the system is that, when one process is executing
in its critical section, no other process is to be allowed to execute in its
critical section. That is, no two processes are executing in their critical
sections at the same time.
 The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section.
 The section of code implementing this request is the entry section
 The critical section may be followed by an exit section.
 The remaining code is the remainder section.
Critical Section Problem
The general structure of a typical process Pi is shown below
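In outline (following the usual textbook presentation, with brackets marking where the figure draws boxes around the entry and exit sections):

do {
    [ entry section ]       // request permission to enter the critical section

        critical section

    [ exit section ]        // announce that the critical section is free

        remainder section

} while (true);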
Critical Section Problem
 The entry section and exit section are enclosed in boxes to highlight
these important segments of code.
 A solution to the critical-section problem must satisfy the following three
requirements:
Solution to Critical-Section Problem
Requirements:
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections.
2. Progress - If no process is executing in its critical section and
there exist some processes that wish to enter their critical
section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its
critical section and before that request is granted.
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the N
processes
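One classic software solution that satisfies all three requirements for two processes is Peterson's algorithm; a minimal sketch for processes Pi and Pj (with j = 1 - i), using the conventional shared variables flag and turn (on modern hardware it also relies on memory barriers, which are omitted here):

int turn;                    // whose turn it is to enter the critical section
boolean flag[2];             // flag[i] == TRUE means Pi wants to enter

do {
    flag[i] = TRUE;          // entry section: announce interest ...
    turn = j;                // ... and yield priority to the other process
    while (flag[j] && turn == j)
        ;                    // busy wait while Pj is interested and has priority

        // critical section

    flag[i] = FALSE;         // exit section: no longer interested

        // remainder section

} while (TRUE);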
Solution to Critical-Section Problem
 Two general approaches are used to handle critical sections in
operating systems:
(1) preemptive kernels and
(2) nonpreemptive kernels.
 A preemptive kernel allows a process to be preempted
while it is running in kernel mode.
 A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
 Obviously, a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.
Readers-Writers Problem
 Suppose that a database is to be shared among several
concurrent processes.
 Readers – only read the data set; they do not perform any
updates
 Writers – can both read and write
 Obviously, if two readers access the shared data simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue.
 To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared database while writing to the database. This synchronization problem is referred to as the readers-writers problem.
Readers-Writers Problem
 The readers-writers problem has several variations, all involving
priorities.
1. The first readers-writers problem requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to finish simply because a writer is waiting.
2. The second readers-writers problem requires that, once a writer is ready, that writer perform its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.
 A solution to either problem may result in starvation. In the first
case, writers may starve; in the second case, readers may
starve.
Solution
 In the solution to the first readers-writers problem, the reader processes
share the following data structures:
semaphore mutex, wrt;
int readcount;
 The semaphores mutex and wrt are initialized to 1;
 readcount is initialized to 0.
 The semaphore wrt is common to both reader and writer processes.
 The mutex semaphore is used to ensure mutual exclusion when the
variable readcount is updated. The readcount variable keeps track of how
many processes are currently reading the object.
 The semaphore wrt functions as a mutual-exclusion semaphore for
the writers. It is also used by the first or last reader that enters or exits
the critical section. It is not used by readers who enter or exit while
other readers are in their critical sections.
Readers-Writers Problem (Cont.)
 The structure of a writer process:

do {
    wait(wrt);

        // writing is performed

    signal(wrt);
} while (TRUE);
Readers-Writers Problem (Cont.)
 The structure of a reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);

        // reading is performed

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);
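Putting the two structures together, here is a runnable sketch using POSIX threads and semaphores; the number of threads, the loop counts, and the shared_data variable are illustrative assumptions, not part of the slides:

/* rw.c -- compile with: gcc rw.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t mutex;             /* protects readcount */
sem_t wrt;               /* writers' exclusion; held on behalf of the whole reader group */
int readcount = 0;
int shared_data = 0;     /* illustrative shared object */

void *writer(void *arg) {
    for (int i = 0; i < 3; i++) {
        sem_wait(&wrt);
        shared_data++;                        /* writing is performed */
        printf("writer wrote %d\n", shared_data);
        sem_post(&wrt);
        usleep(1000);
    }
    return NULL;
}

void *reader(void *arg) {
    for (int i = 0; i < 3; i++) {
        sem_wait(&mutex);
        readcount++;
        if (readcount == 1)                   /* first reader locks out writers */
            sem_wait(&wrt);
        sem_post(&mutex);

        printf("reader saw %d\n", shared_data);   /* reading is performed */

        sem_wait(&mutex);
        readcount--;
        if (readcount == 0)                   /* last reader lets writers back in */
            sem_post(&wrt);
        sem_post(&mutex);
        usleep(1000);
    }
    return NULL;
}

int main(void) {
    pthread_t r[2], w;
    sem_init(&mutex, 0, 1);                   /* both semaphores initialized to 1 */
    sem_init(&wrt, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 2; i++)
        pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 2; i++)
        pthread_join(r[i], NULL);
    return 0;
}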
Readers-Writers Problem (Cont.)
• If a writer is in the critical section and n readers are waiting, then one reader is queued on wrt, and n - 1 readers are queued on mutex. Also observe that, when a writer executes signal(wrt), we may resume the execution of either the waiting readers or a single waiting writer. The selection is made by the scheduler.
• The readers-writers problem and its solutions have been generalized to
provide reader-writer locks on some systems.
• Acquiring a reader-writer lock requires specifying the mode of the lock: either read or write access. When a process wishes only to read shared data, it requests the reader-writer lock in read mode; a process wishing to modify the shared data must request the lock in write mode.
• Multiple processes are permitted to concurrently acquire a reader-writer lock
in read mode, but only one process may acquire the lock for writing, as
exclusive access is required for writers.
Semaphore v/s Mutex
 Semaphore is simply a variable. This variable is used to solve the critical
section problem and to achieve process synchronization in the
multiprocessing environment. The two most common kinds of semaphores
are counting semaphores and binary semaphores. Counting semaphore can
take non-negative integer values and Binary semaphore can take the value
0 & 1 only.
 Mutex is a mutual exclusion object that synchronizes access to a resource.
It is created with a unique name at the start of a program. The Mutex is a
locking mechanism that makes sure only one thread can acquire the Mutex
at a time and enter the critical section. This thread only releases the Mutex
when it exits the critical section.
 A mutex is different from a semaphore: it is a locking mechanism, while a semaphore is a signalling mechanism.
 A semaphore uses two atomic operations, wait and signal for process
synchronization.
Semaphore v/s Mutex
 The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the caller waits (in this version, by busy-waiting) until S becomes positive:

wait(S)
{
    while (S <= 0)
        ;        // busy wait until S becomes positive
    S--;
}

 The signal operation increments the value of its argument S:

signal(S)
{
    S++;
}
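To make the contrast concrete, a brief sketch against the POSIX APIs: pthread_mutex_t as the locking mechanism and sem_t as a counting semaphore guarding a pool of resources (the pool size of 3 is an illustrative assumption):

#include <pthread.h>
#include <semaphore.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* mutex: locking mechanism */
sem_t slots;                                        /* counting semaphore: signalling */

void critical_work(void) {
    pthread_mutex_lock(&lock);      /* only the thread that locked may unlock */
    /* ... critical section ... */
    pthread_mutex_unlock(&lock);
}

void use_pooled_resource(void) {
    sem_wait(&slots);               /* wait(): blocks once all 3 slots are taken */
    /* ... use one of the pooled resources ... */
    sem_post(&slots);               /* signal(): possibly wakes another waiter */
}

/* somewhere during start-up (e.g. in main): sem_init(&slots, 0, 3); */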
Readers-Writers Problem (Cont.)
• Reader-writer locks are most useful in the following situations:
1. In applications where it is easy to identify which processes only read shared data and which processes only write shared data.
2. In applications that have more readers than writers. This is because reader-writer locks generally require more overhead to establish than semaphores or mutual-exclusion locks. The increased concurrency of allowing multiple readers compensates for the overhead involved in setting up the reader-writer lock.
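POSIX threads expose such a lock as pthread_rwlock_t, for example; a minimal sketch (shared_data and the two helper functions are illustrative):

#include <pthread.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
int shared_data = 0;                    /* illustrative shared object */

int read_shared(void) {
    pthread_rwlock_rdlock(&rwlock);     /* many readers may hold the lock at once */
    int value = shared_data;
    pthread_rwlock_unlock(&rwlock);
    return value;
}

void write_shared(int value) {
    pthread_rwlock_wrlock(&rwlock);     /* exclusive: waits for all readers and writers */
    shared_data = value;
    pthread_rwlock_unlock(&rwlock);
}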