Chapter 4 discusses process synchronization in operating systems, focusing on the critical-section problem and various solutions including Dekker's and Peterson's algorithms. It highlights the importance of mutual exclusion, progress, and bounded waiting to maintain data consistency during concurrent access. Additionally, it covers interprocess communication methods such as shared memory and message passing, along with classical synchronization problems like the readers-writers and dining-philosophers problems.

Chapter 4: Parallel Processing

(Process Synchronization)
Module 4: Process Synchronization

 Background
 The Critical-Section Problem
 Software solutions
 Dekker’s solution
 Peterson’s Solution
 Hardware Solution
 Interprocess Communication

Operating System Concepts 6.2 Silberschatz, Galvin and Gagne ©2005


Background

 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes



Solution to Critical-Section Problem

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section
2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted



Dekker’s Solution
 Algorithm #1:
repeat
   while turn != i do no-op;
   /* critical section */
   turn = j;
   /* remainder section */
until false;



Dekker’s Algorithm (Continues…)

Algorithm #2:
repeat
   flag[i] = true;
   while flag[j] do no-op;
   /* critical section */
   flag[i] = false;
   /* remainder section */
until false;



Peterson’s Solution
 Two process solution
 The two processes share two variables:
 int turn;
 boolean flag[2];
 The variable turn indicates whose turn it is to enter the
critical section.
 The flag array is used to indicate if a process is ready to
enter the critical section. flag[i] = true implies that process Pi
is ready!



Algorithm for Process Pi

do {
flag[i] = TRUE;
turn = j;
while ( flag[j] && turn == j);

CRITICAL SECTION

flag[i] = FALSE;

REMAINDER SECTION

} while (TRUE);



Synchronization Hardware
Solution using TestAndSet
 Shared boolean variable lock, initialized to FALSE
 Solution:
do {
   while (TestAndSet(&lock))
      ;   /* do nothing */

   // critical section

   lock = FALSE;

   // remainder section

} while (TRUE);



Classical Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem (Read yourself)



Readers-Writers Problem
 A data set is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write

 Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data at any one time



Interprocess Communications

- Exchange of data between two or more separate, independent processes/threads.
- Operating systems provide facilities/resources for interprocess communication (IPC), such as message queues, semaphores, and shared memory.
- Distributed computing systems make use of these facilities/resources to provide an application programming interface (API) that allows IPC to be programmed at a higher level of abstraction (e.g., send and receive).
- Distributed computing requires information to be exchanged among independent processes.



IPC – unicast and multicast

 In distributed computing, two or more processes engage in IPC using a protocol agreed upon by the processes. A process may be a sender at some points during a protocol and a receiver at others.
 When communication is from one process to a single other process, the IPC is said to be a unicast, e.g., socket communication. When communication is from one process to a group of processes, the IPC is said to be a multicast, e.g., the publish/subscribe message model.



Unicast vs. Multicast

[Figure: in a unicast, P1 sends message m to a single process P2; in a multicast, P1 sends message m to a group of processes P2, P3, ..., P4]



Interprocess communication (Continues…)

 Can be done through


 Shared Memory
 Message Passing
 Signals



Shared Memory

[Figure: Process 1, Process 2, and Process 3 all attach to a common region of shared memory]



Message Passing

[Figure: a message passed from a sending process to a receiving process]



Message Passing
Many possible naming schemes; one is direct naming:
send(process_id, message)
receive(process_id, buffer)
 Example

process P1:                  process P2:
   declare x integer            declare y integer
   .                            .
   send(P2, x)                  receive(P1, y)
   .                            .
end process                  end process

 The effect of this communication is the assignment y := x, where x is a local variable of P1 and y is a local variable of P2.



Mailboxes
 Also known as message queues or ports
 The explicit and symmetric naming of processes in direct naming limits modularity, since changing the name of a process requires changes elsewhere, i.e., in the definitions of other processes

[Figure: P and Q send into mailbox mbox; R receives from it]

 P or Q calls
send(mbox-id, message)
 R calls
receive(mbox-id, message)



End of Chapter 4
