
Unit – IV

RTOS Based Embedded System Design
Operating System Types
• Operating systems may be classified by whether they enforce hard time
  constraints or only soft time constraints:
  – Real-Time Operating System
  – Non-Real-Time Operating System
• A static operating system is one where the number of tasks can be
  accurately defined before run time.
• A dynamic operating system, on the other hand, is one in which the
  number of tasks is not known a priori.
• Program: a sequence of instructions which, when executed, carries out
  some activity.
• Task: an action which is carried out by executing a sequence of
  instructions.
• Thread: the basic unit to which the operating system allocates
  processor time. Threads use far fewer resources than processes and,
  since they share the same address space, are interdependent.
• Process: an executing program. Processes are independent of each
  other and can consist of multiple threads.
Onion Structure of OS

What is an Interrupt?
• A break in program execution
  – An interrupt is a hardware-generated subroutine call.
  – It usually occurs in response to some external event.
  – It can occur at any point in the code execution.
  – The CPU status at the time of the interrupt is saved.

How does it differ from calling a subroutine?
• A subroutine call point is determined by the programmer (user coded).
• A subroutine call does not save the CPU status.
Why are interrupts crucial in an OS?
Example of interrupt usage: enabling/disabling the NMI

void NMI_enable(void)
{
    outb(0x70, inb(0x70) & 0x7F);  /* clear bit 7 of port 0x70 to enable NMI */
}

void NMI_disable(void)
{
    outb(0x70, inb(0x70) | 0x80);  /* set bit 7 of port 0x70 to disable NMI */
}
Software Interrupt
• A software interrupt, also called an exception or a trap, is raised
  by an instruction in the program itself rather than by external
  hardware.
Direct Call to an ISR

RTOS first interrupting on an interrupt

• ISR code can send a message into a mailbox or message queue, but the
  task waiting on that mailbox or message queue does not start before
  the return from the ISR.
• When the ISR finishes, it sends an Exit message to the OS.
Multiprocessing
• Two or more CPUs share full access to a common RAM.
• This is the basis of inter-processor communication.
• Key issues include:
  – process synchronization
  – resource management
  – scheduling
Each CPU Has Its Own Operating System

Master-Slave Multiprocessors

Symmetric Multiprocessors

Multiprocessor Synchronization

Multiprocessor Scheduling
• Timesharing
• Space Sharing
• Gang Scheduling

Timesharing
• A single scheduling data structure is used by all CPUs; processes are
  scheduled as they would be in a uniprocessor system.
• It also provides automatic load balancing.
• Two disadvantages of this approach are:
  – potential contention for the scheduling data structure as the
    number of CPUs grows
  – the usual overhead in doing a context switch when a process blocks
    for I/O.
Space Sharing
• Scheduling multiple threads at the same time across multiple CPUs.
• Shortest-job-first is a well-known algorithm for batch scheduling.
Gang Scheduling
Gang scheduling has three parts:
1. Groups of related threads are scheduled as a unit, a gang.
2. All members of a gang run simultaneously, on different timeshared
   CPUs.
3. All gang members start and end their time slices together.
What is Pre-emptive and Non-Pre-emptive Scheduling?
• Pre-emptive: once a process has been given the CPU, the CPU can be
  taken away from it.
  – Short jobs are not made to wait behind longer jobs; the overall
    treatment of all processes is fair.
  – Response times are more predictable.
• Non-pre-emptive: once a process has been given the CPU, the CPU
  cannot be taken away from that process.
  – Once a task starts executing, it completes its full execution.
  – The scheduler runs only in the following two situations:
    • when a process switches from the running state to the waiting
      state
    • when a process terminates.
  – Offers lower schedulability.
  – Less overhead due to less context switching.
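The non-pre-emptive case can be sketched as a run-to-completion loop: each ready task runs until it returns, and the scheduler regains control only at task boundaries. This is a minimal host-side sketch, not any particular RTOS API; the task table, function names, and limits are invented for illustration.

```c
#include <assert.h>

#define MAX_TASKS 4

typedef void (*task_fn)(void);

static task_fn ready_queue[MAX_TASKS];
static int n_ready = 0;
static int run_count = 0;   /* how many tasks have run to completion */

/* Add a task to the ready queue (no priorities in this sketch). */
void task_ready(task_fn t) {
    if (n_ready < MAX_TASKS)
        ready_queue[n_ready++] = t;
}

/* Non-pre-emptive scheduler: once a task starts, it runs to
 * completion; the scheduler regains control only when it returns. */
void schedule_once(void) {
    for (int i = 0; i < n_ready; i++) {
        ready_queue[i]();   /* cannot be pre-empted by another task */
        run_count++;
    }
    n_ready = 0;
}

static void task_a(void) { /* e.g. poll a sensor */ }
static void task_b(void) { /* e.g. update a display */ }
```

After `task_ready(task_a); task_ready(task_b); schedule_once();` both tasks have completed in FIFO order and the ready queue is empty.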
Rate-Monotonic Scheduling
• A static-priority scheduling class.
• Priorities are assigned on the basis of the cycle duration (period)
  of the job: the shorter the cycle duration, the higher the job's
  priority.
• Assume a set of periodic tasks (Ci, Ti), where Ci is the computation
  time and Ti is the period of task i.
Fixed-Priority Pre-emptive Scheduling
• The scheduler ensures that at any given time, the processor executes
  the highest-priority task of all those tasks that are currently ready
  to execute.
Interprocess Communication (IPC) Mechanisms
• Mailboxes
• Message queues
• Semaphores
  – binary
  – counting
• Pipes
• Signals
• Event registers
Inter-process communication
• Tasks must be able to communicate with one another to coordinate
  their activities and to share data.
• IPC means that a process (scheduler, task or ISR)
generates some information by setting or resetting a
flag or value, or generates an output so that it lets
another process take note of it or use it.
• IPCs in multiprocessor system are used to generate
information about certain sets of computations
finishing on one processor and let the other
processors take note of it.
Interprocess communication styles
• Shared memory:
  – processes have some memory in common;
  – they must cooperate to avoid destroying/missing messages.
• Message passing:
  – processes send messages along a communication channel; there is no
    common address space;
  – the communication channel may be physical or virtual.
Two ways for IPC
1. Global data
2. Sending messages
Global data: IPC by setting global variables.
Usage of global variables creates two problems:
(i) the shared-data problem
(ii) it cannot prevent a message from being accessed by other tasks.
Shared memory
• The CPUs could be separate processors and/or cores within a
  processor.
• Multiple processes on the same CPU may also share memory.
• Shared memory on a bus: CPU 1 and CPU 2 both connect to a common
  memory over the bus.
Race condition in shared memory
• Assume a “flag” is used to synchronize access to shared memory:
  – flag = 1 when the shared item is being used
  – flag = 0 when the shared item is not being used
  – To access the item, a CPU must see flag = 0 and then write flag = 1.
• Problem when two CPUs try to write the same location:
  – CPU 1 reads the flag and sees 0.
  – CPU 2 reads the flag and sees 0.
  – CPU 1 sets the flag to one and writes the location.
  – CPU 2 sets the flag to one and overwrites the location.
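The read-then-write race above is exactly what an atomic test-and-set avoids: reading the flag and setting it to 1 happen as one indivisible step, so only one CPU can win. A host-side sketch using C11 `atomic_flag` (standing in for a hardware test-and-set instruction), with two threads playing the role of the two CPUs:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag flag = ATOMIC_FLAG_INIT;   /* the lock flag */
static long shared_location = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        /* atomic "read flag and set it to 1" in one indivisible step */
        while (atomic_flag_test_and_set(&flag))
            ;                                 /* spin until flag was 0 */
        shared_location++;                    /* safe: we hold the flag */
        atomic_flag_clear(&flag);             /* flag = 0 */
    }
    return NULL;
}

/* Run two "CPUs" against the shared location; return the final value. */
long run_two_cpus(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_location;
}
```

With the atomic flag, no increment is lost; replacing the `atomic_flag_test_and_set` loop with a plain read-then-write of an ordinary int reintroduces the race described above.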
Sending messages
i) Semaphores and mutexes
ii) Queues, pipes and mailboxes
iii) Signals
iv) Event registers etc.
Printer example for IPC
• A task that runs a print function.
• The scheduler should let the other tasks share
this task.
• When the printer becomes available, an IPC
from the print task is generated and the
scheduler takes note of it.
• Other tasks take note of it through the
scheduler.
Multi-line display example for IPC
• Consider a task for a multi-line display of outputs and another task
  for displaying the current time on the last line.
• When the multi-line display task finishes the display
of the last line, an IPC from the display task is
generated and the scheduler takes note of it.
• Another task continuously updating time-can take
and generate an IPC output for the current time.
• The need for IPC also arises in a client-server
network.
Mailboxes
• A mailbox, also called a message exchange, is typically a
  pointer-size variable.
• A message mailbox is an IPC message-block that can
be used only by a single destined task.
• Through a service provided by the kernel, a task or
an ISR can deposit a message into the mail box.
• Similarly, one or more tasks can receive messages
through a service provided by the kernel.
• A waiting list is associated with each mailbox in case
more than one task wants to receive messages
through the mailbox.
Mailboxes
• A mailbox object is just like a postal mailbox: Task 1 posts a
  message into the mailbox, and Task 2 reads the message from it.
Mailbox types at the different RTOSs
Mailbox types permitted by an RTOS:
• multiple unlimited messages queuing up
• one message per mailbox
• multiple messages with a priority parameter for each message
Mailbox services provided by the kernel
• Initialize the contents of the mailbox. The mailbox
may or may not contain a message.
• Deposit a message into the mailbox (POST)
• Wait for a message to be deposited into the mailbox
(PEND)
• Get a message from a mailbox if existing (ACCEPT)
• A provision for one message-write per box and the
next message to be accepted only if the previous one
is read.
• A mailbox with a provision for only one message per
box becomes full with one message.
A mobile phone LCD display example for mailbox
• When the time message from a clock process arrives, the time is
  displayed at the right corner of the last line.
• When the message from another task is to
display a phone number, it is displayed at the
middle line.
Mailbox management functions
• Create a mailbox -- RTOS_BoxCreate
• Delete a mailbox -- RTOS_BoxDelete
• Query a mailbox – RTOS_BoxQuery
• Wait for a mailbox – RTOS_BoxWait
• Post a message into the mailbox -- RTOS_BoxWrite
• Read a message from the mailbox –
RTOS_BoxAccept
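The RTOS_Box* names above are generic slide names rather than a real API. As a sketch of the one-message-per-box semantics described earlier, here is a minimal mailbox in plain C: a post fails while the box is full, and a read empties it. This is a single-threaded illustration only; a real kernel adds blocking (PEND) and waiting lists.

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    void *msg;     /* pointer-size message, as described above */
    int   full;    /* 1 if a message is waiting */
} mailbox_t;

void box_create(mailbox_t *mb) { mb->msg = NULL; mb->full = 0; }

/* POST: deposit a message; fails (returns 0) if the previous
 * message has not been read yet. */
int box_post(mailbox_t *mb, void *msg) {
    if (mb->full) return 0;
    mb->msg = msg;
    mb->full = 1;
    return 1;
}

/* ACCEPT: get the message if one exists; returns NULL otherwise. */
void *box_accept(mailbox_t *mb) {
    if (!mb->full) return NULL;
    mb->full = 0;
    return mb->msg;
}
```

A second post before the first message is read fails, which is exactly the "becomes full with one message" behavior noted in the slides.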
Message passing
• Message passing on a network: CPU 1 and CPU 2 exchange messages over
  the network.
Message passing via mailboxes
• A mailbox is a message buffer between two processes (FIFO).
• Process A (sending) fills the buffer; Process B (receiving) empties
  it.
• Use a semaphore to lock the buffer during read/write.
Message queues
• A message queue is accessed with put and get operations; a “message”
  is the information to be sent.

osMessageQueueId_t q_id;   // ID of queue object

q_id = osMessageQueueNew(msg_count, msg_size, attr);
  – Create and initialize a message queue; return the queue ID.
  – Specify: max #msgs, max msg size, attributes (or NULL for
    defaults).

status = osMessageQueuePut(q_id, msg_ptr, msg_priority, timeout);
  – Add a message to the queue; wait up to “timeout” if the queue is
    full.
  – msg_ptr = pointer to the message data structure.
  – status = osOK            : msg was put into the queue
           = osErrorResource : not enough space for the msg
           = osErrorTimeout  : no space became available before timeout

status = osMessageQueueGet(q_id, msg_ptr, msg_priority, timeout);
  – Get a msg from the queue, put it in *msg_ptr and its priority in
    *msg_priority; wait up to “timeout” if there is no message.
  – status = osOK            : message was retrieved from the queue
           = osErrorResource : no message available and timeout = 0
           = osErrorTimeout  : no message available before timeout

osMessageQueuePut(mq_id, *msg_ptr, msg_prio, timeout)
osMessageQueueGet(mq_id, *msg_ptr, *msg_prio, timeout)
osMessageQueueGetCapacity(mq_id) - max #msgs in the queue
osMessageQueueGetMsgSize(mq_id)  - max msg size in memory pool
osMessageQueueGetCount(mq_id)    - # queued msgs in the queue
osMessageQueueGetSpace(mq_id)    - # available slots in the queue
osMessageQueueReset(mq_id)       - reset to empty
Message queue
• A message queue is used to send one or more messages to a task.
• It is basically an array of mailboxes.
• It supports waiting for a message.
• It supports a timeout option.
Features of message queues
1. Either one process (task) or a set of
processes (tasks) can use the queue.
2. A read (deletion) operation from a queue is
in a FIFO mode.
3. Generally the queue is a circular queue and
the read is like a ring of bytes in a buffer.
4. For reading, one can assign priorities.
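A message queue with FIFO read, as described above, is commonly implemented as a circular buffer. A minimal single-threaded sketch (fixed queue length, pointer-size messages; real RTOS queues add blocking, timeouts, and waiting lists):

```c
#include <assert.h>

#define QUEUE_LEN 4

typedef struct {
    void *slots[QUEUE_LEN];
    int head;      /* next read position */
    int tail;      /* next write position */
    int count;     /* number of queued messages */
} msgq_t;

void q_create(msgq_t *q) { q->head = q->tail = q->count = 0; }

/* POST: append a message; returns 0 when the queue is full. */
int q_post(msgq_t *q, void *msg) {
    if (q->count == QUEUE_LEN) return 0;
    q->slots[q->tail] = msg;
    q->tail = (q->tail + 1) % QUEUE_LEN;  /* circular queue wrap */
    q->count++;
    return 1;
}

/* ACCEPT: FIFO read; returns 0 when the queue is empty. */
int q_accept(msgq_t *q, void **msg) {
    if (q->count == 0) return 0;
    *msg = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_LEN;
    q->count--;
    return 1;
}
```

Messages come out in the order they were posted, matching the FIFO deletion rule in point 2 above.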
Message queues
• An array of mailboxes: sending tasks (Task 1, Task 2, Task 3) post
  into the queue, and receiving tasks (Task 1, Task 2, Task 3) read
  from it.
Continued..
• At the time of creating a queue, the queue is given:
  – a name or ID
  – a queue length
  – a sending-task waiting list
  – a receiving-task waiting list
Common RTOS features of message queues
• Most RTOSs require that you initialize your
queues before you use them, by calling a function
provided for this purpose.
• You must provide the identity of the queue to which you want to
  write or from which you want to read.
• Each queue either has a user defined size or
prefixed size assigned by the scheduler.
• When the queue is full, error might occur
• When the queue is empty, error might occur
• Type and amount of data you can write into the
message queue.
Applications of message queues
• Taking the input from a keyboard
• To display the output
• Reading voltages from sensors or transducers
• Data packet transmission in a network
Types
• One-way communication with acknowledgement
• Two-way communication
• Broadcasting
One-way communication with acknowledgement
• Task 1 sends messages to Task 2 through a message queue; Task 2
  acknowledges through a semaphore.

Two-way communication
• Task 1 and Task 2 exchange messages in both directions through
  message queues.

Broadcasting
• One task sends a message through the message queue to several
  receiving tasks at once.
Kernel message queue services
• Initialize a queue. It is always assumed to be
empty after initialization.
• Deposit a message into the queue (POST)
• Wait for a message to be deposited into the
queue (PEND)
• Get a message from a queue (ACCEPT)
Message queue management functions

• Create a queue -- RTOS_Qcreate
• Delete a queue
• Flush a queue --RTOS_Qflush
• Post a message in queue – RTOS_Qwrite
• Post a message in front of queue -- RTOS_Qpostfront
• Read message from queue -- RTOS_Qaccept
• Broadcast a message
• Show queue information
• Show queue waiting list – RTOS_Qwait
Pipes
• Pipes are similar to message queues.
• A message pipe, or simply a pipe, is in the strict sense an IPC
  queue between two given interconnected tasks or two sets of tasks.
• Unlike queues and mailboxes, pipes have unlimited size.
• In a pipe there may be no fixed number of bytes per message.
• Example: a client-server network.
Pipes
• One task writes data to the pipe and another task reads data from
  the pipe.
• Task-to-task and ISR-to-task data transfer can take place using
  pipes.
Pipes for inter-process communication
• Pipes can be used for inter-process communication.
• One task may send data packets through one pipe and the other task
  may send acknowledgements through the other.
While using a pipe
• You have to define
1. Pipe ID
2. Length
3. Maximum length (not defined in some RTOS)
4. Initial values
Common RTOS features of pipes
• Each pipe for a message may need
initialization before using the functions in the
scheduler for the message pipe.
• There may be a provision for multiple
connections of the pipes for the multiple
destinations of the messages. A number of
clients can thus connect to a server.
Pipes
• Pipes are used in shell commands in Unix/Linux systems; the symbol
  is |.
• Operations on a pipe:
  – create a pipe
  – open a pipe
  – close a pipe
  – read from a pipe
  – write to a pipe
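On Unix/Linux the same mechanism is available to C programs through the POSIX pipe() call: bytes written into one end come out of the other in FIFO order. A small sketch (a short message fits in the kernel's pipe buffer, so the write does not block here):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Write a short message into a pipe and read it back.
 * Returns 1 on success, 0 on any failure. */
int pipe_roundtrip(char *buf, size_t buflen) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    const char *msg = "hello";
    ssize_t n;
    if (pipe(fd) != 0) return 0;      /* create the pipe */
    n = write(fd[1], msg, strlen(msg) + 1);
    if (n < 0) return 0;
    n = read(fd[0], buf, buflen);     /* FIFO: bytes come out in order */
    close(fd[0]);
    close(fd[1]);
    return n > 0;
}
```

In a real program the two ends would usually be held by different processes (the parent keeps one end after fork(), the child the other), which is how the shell implements `cmd1 | cmd2`.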
Signals
• Signals can be passed to indicate an event.
• A ‘signal’ provides the shortest message.
• It is a one-bit output from a process for IPC.
• What are signals in IPC?
  Signals are flags used by the IPC functions for synchronizing. A
  signal is the software equivalent of the flag in a register that is
  set on a hardware interrupt.
• Many RTOSs do not support signals.
Continued..
• A signal is just a flag that is shared and used
by another interrupt servicing process for
signal handling function.
• A signal raised by one process forces another
process to interrupt and catch that signal.
• Hence, signals are handled only by very high
priority processes as it may disrupt the usual
schedule and priority inheritance.
Exception handling by signals
• An important use of signal is to handle
exceptions.
• A signal reports an error during running a task
and then lets the scheduler initiate an error-
handling process or function.
• The handling is through the use of the ISR
handling functions.
• Unix and Linux OSs have 31 different types of
signals for handling various events.
Signal management functions
• Install a signal handler
• Remove an installed signal handler
• Send a signal to another task
• Block a signal from being delivered
• Unblock a blocked signal
• Ignore a signal
Event Registers
• A task can have an event register in which the
bits correspond to different events
• Each of the bits in the event register is an
event flag.
Event Register management function calls
• Create an event register
• Delete an event register
• Query an event register
• Set an event flag
• Clear an event flag
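Since an event register is just a word of event-flag bits, the set/clear/query calls above reduce to bitwise operations. A sketch; the flag names are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t event_register_t;

/* Hypothetical event flags, one bit each. */
#define EV_RX_READY  (1u << 0)
#define EV_TX_DONE   (1u << 1)
#define EV_TIMER     (1u << 2)

/* Set one or more event flags. */
void ev_set(event_register_t *er, uint32_t flags)   { *er |= flags; }

/* Clear one or more event flags. */
void ev_clear(event_register_t *er, uint32_t flags) { *er &= ~flags; }

/* Query: 1 if ALL of the given flags are currently set. */
int ev_query(event_register_t er, uint32_t flags)   { return (er & flags) == flags; }
```

A real RTOS adds the ability for a task to block until a chosen combination of flags becomes set, but the underlying representation is this same bitmask.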
PROCESS SYNCHRONIZATION: The Producer-Consumer Problem

A producer process "produces" information "consumed" by a consumer
process. Here are the variables needed to define the problem:

#define BUFFER_SIZE 10
typedef struct {
    DATA data;
} item;
item buffer[BUFFER_SIZE];
int in = 0;       // Location of next input to buffer
int out = 0;      // Location of next removal from buffer
int counter = 0;  // Number of buffers currently full

Consider the code segments on the next page:
• Does it work?
• Are all buffers utilized?
PROCESS SYNCHRONIZATION: The Producer-Consumer Problem

A producer process "produces" information "consumed" by a consumer
process.

Shared data:
#define BUFFER_SIZE 10
typedef struct {
    DATA data;
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;

PRODUCER:
item nextProduced;
while (TRUE) {
    while (counter == BUFFER_SIZE);   /* busy-wait while buffer full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

CONSUMER:
item nextConsumed;
while (TRUE) {
    while (counter == 0);             /* busy-wait while buffer empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
PROCESS SYNCHRONIZATION: The Producer-Consumer Problem

Note that the line counter++; is NOT what it seems!!

It is really:
    register = counter
    register = register + 1
    counter = register

At a micro level, the following scenario could occur using this code:

T0: Producer executes register1 = counter         register1 = 5
T1: Producer executes register1 = register1 + 1   register1 = 6
T2: Consumer executes register2 = counter         register2 = 5
T3: Consumer executes register2 = register2 - 1   register2 = 4
T4: Producer executes counter = register1         counter = 6
T5: Consumer executes counter = register2         counter = 4
PROCESS SYNCHRONIZATION: Critical Sections

A section of code, common to n cooperating processes, in which the
processes may be accessing common variables.

A critical-section environment contains:

Entry Section      Code requesting entry into the critical section.
Critical Section   Code in which only one process can execute at any
                   one time.
Exit Section       The end of the critical section, releasing or
                   allowing others in.
Remainder Section  Rest of the code AFTER the critical section.
PROCESS SYNCHRONIZATION: Critical Sections

The critical section must ENFORCE ALL THREE of the following rules:

Mutual Exclusion:  No more than one process can execute in its
                   critical section at one time.
Progress:          If no one is in the critical section and someone
                   wants in, then those processes not in their
                   remainder section must be able to decide in a
                   finite time who should go in.
Bounded Wait:      All requesters must eventually be let into the
                   critical section.
PROCESS SYNCHRONIZATION: Two-Process Software Solutions

Here's an example of a simple piece of code containing the components
required in a critical section:

do {
    while (turn != i);        /* entry section */
    /* critical section */
    turn = j;                 /* exit section */
    /* remainder section */
} while (TRUE);
PROCESS SYNCHRONIZATION: Two-Process Software Solutions

Here we try a succession of increasingly complicated solutions to the
problem of creating valid entry sections.

NOTE: In all examples, i is the current process and j the "other"
process. In these examples, envision the same code running on two
processors at the same time.

Algorithm 1 -- TOGGLED ACCESS:

do {
    while (turn != i);
    /* critical section */
    turn = j;
    /* remainder section */
} while (TRUE);

Are the three critical-section requirements met?
PROCESS SYNCHRONIZATION: Two-Process Software Solutions

Algorithm 2 -- A FLAG FOR EACH PROCESS GIVES ITS STATE:

Each process maintains a flag indicating that it wants to get into the
critical section. It checks the flag of the other process and doesn't
enter the critical section if that other process wants to get in.

Shared variables:
    boolean flag[2];   /* initially flag[0] = flag[1] = false */
    /* flag[i] = true means Pi is ready to enter its critical section */

do {
    flag[i] = true;
    while (flag[j]);
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);

Are the three critical-section requirements met?
PROCESS SYNCHRONIZATION: Two-Process Software Solutions

Algorithm 3 -- FLAG TO REQUEST ENTRY (this is Peterson's Solution):

• Each process sets a flag to request entry. Then each process toggles
  a bit to allow the other in first.
• This code is executed for each process i.

Shared variables:
    boolean flag[2];   /* initially flag[0] = flag[1] = false */
    int turn;
    /* flag[i] = true means Pi is ready to enter its critical section */

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);

Are the three critical-section requirements met?
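Peterson's solution can be exercised on a host with two threads. One caveat, foreshadowing the hardware discussion below: on modern hardware the shared variables must be sequentially-consistent atomics or the algorithm breaks, because plain stores may become visible out of order. C11 `_Atomic` operations default to sequential consistency, so this sketch is safe:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];
static atomic_int  turn;
static long counter = 0;           /* protected by the critical section */

static void *proc(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 50000; k++) {
        atomic_store(&flag[i], true);           /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                   /* busy-wait */
        counter++;                              /* critical section */
        atomic_store(&flag[i], false);          /* exit section */
    }
    return NULL;
}

/* Run P0 and P1 concurrently; return the final shared counter. */
long run_peterson(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, proc, (void *)0L);
    pthread_create(&t1, NULL, proc, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;
}
```

If mutual exclusion held, no increment of the plain (non-atomic) counter is lost; with the entry protocol removed, increments would be lost exactly as in the producer-consumer register trace earlier.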
PROCESS SYNCHRONIZATION: Critical Sections

The hardware required to support critical sections must have
(minimally):

• Indivisible instructions (what are they?)
• Atomic load, store, and test instructions. For instance, if a store
  and a test occur simultaneously, the test gets EITHER the old or the
  new value, but not some combination.
• Two atomic instructions, if executed simultaneously, behave as if
  executed sequentially.
PROCESS SYNCHRONIZATION: Hardware Solutions

Disabling interrupts: works for the uniprocessor case only. WHY?

Atomic test-and-set: returns the parameter and sets the parameter to
true atomically.

while (test_and_set(lock));
/* critical section */
lock = false;

Example of assembler code:

GET_LOCK: IF_CLEAR_THEN_SET_BIT_AND_SKIP <bit_address>
          BRANCH GET_LOCK   /* set failed */
          -------           /* set succeeded */

Must be careful if these approaches are to satisfy a bounded-wait
condition: one must use round robin, which requires code built around
the lock instructions.
PROCESS SYNCHRONIZATION: Hardware Solutions

Bounded-wait mutual exclusion using hardware test_and_set:

boolean waiting[N];
int j;        /* takes on values from 0 to N - 1 */
boolean key;

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = test_and_set(lock);   /* spin lock */
    waiting[i] = FALSE;
    /****** CRITICAL SECTION ******/
    j = (i + 1) % N;
    while ((j != i) && (!waiting[j]))
        j = (j + 1) % N;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    /****** REMAINDER SECTION ******/
} while (TRUE);
PROCESS SYNCHRONIZATION: Current Hardware Dilemmas

We first need to define, for multiprocessors:
• caches,
• shared memory (for storage of lock variables),
• write-through caches,
• write pipes.

The last software solution we did (the one we thought was correct) may
not work on a cached multiprocessor. Why? (Hint: is the write by one
processor visible immediately to all other processors?)

What changes must be made to the hardware for this program to work?
PROCESS SYNCHRONIZATION: Current Hardware Dilemmas

Does the sequence below work on a cached multiprocessor?

Initially, location a contains A0 and location b contains B0.

a) Processor 1 writes data A1 to location a.
b) Processor 1 sets b to B1, indicating the data at a is valid.
c) Processor 2 waits for b to take on value B1 and loops until that
   change occurs.
d) Processor 2 reads the value from a.

What value is seen by Processor 2 when it reads a?

How must the hardware be specified to guarantee the value seen?
PROCESS SYNCHRONIZATION: Current Hardware Dilemmas

We need to discuss:

Write ordering: the first write by a processor will be visible before
the second write is visible. This requires a write-through cache.

Sequential consistency: if Processor 1 writes to location a "before"
Processor 2 writes to location b, then a is visible to ALL processors
before b is. To do this requires NOT caching shared data.

The software solutions discussed earlier should be avoided, since they
require write ordering and/or sequential consistency.
PROCESS SYNCHRONIZATION: Current Hardware Dilemmas

A hardware test-and-set on a multiprocessor causes:
• an explicit flush of the write to main memory, and
• the update of all other processors' caches.

Imagine needing to write all shared data straight through the cache.
With test-and-set, only lock locations are written out explicitly.

In not too many years, hardware will no longer support software
solutions because of the performance impact of doing so.
PROCESS SYNCHRONIZATION: Semaphores

PURPOSE:
We want to be able to write more complex constructs, and so need a
language to do so. We thus define semaphores, which we assume are
atomic operations:

WAIT(S):                  SIGNAL(S):
    while (S <= 0);           S = S + 1;
    S = S - 1;

As given here, these are not atomic as written in "macro code". We
define these operations, however, to be atomic (protected by a
hardware lock).

FORMAT:
    wait(mutex);      /* mutual exclusion: mutex initialized to 1 */
    CRITICAL SECTION
    signal(mutex);
    REMAINDER
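POSIX exposes essentially this wait/signal pair as sem_wait and sem_post. A sketch of the FORMAT above using an unnamed semaphore initialized to 1 as a mutex, with two threads contending for the critical section (Linux assumed; unnamed semaphores are not supported on every platform):

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;
static long balance = 0;          /* shared data needing protection */

static void *depositor(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);     /* WAIT(mutex): entry section */
        balance++;            /* CRITICAL SECTION */
        sem_post(&mutex);     /* SIGNAL(mutex): exit section */
    }
    return NULL;
}

/* Two threads each perform 100000 protected increments. */
long run_deposits(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);   /* mutual exclusion: initialized to 1 */
    pthread_create(&t1, NULL, depositor, NULL);
    pthread_create(&t2, NULL, depositor, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&mutex);
    return balance;
}
```

Unlike the busy-wait WAIT above, sem_wait blocks the calling thread instead of spinning, which is the suspension-based refinement introduced two slides later.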
PROCESS SYNCHRONIZATION: Semaphores

Semaphores can be used to force synchronization (precedence) if the
preceder does a signal at the end and the follower does a wait at the
beginning. For example, here we want P1 to execute before P2:

P1:                     P2:
    statement 1;            wait(synch);
    signal(synch);          statement 2;
PROCESS SYNCHRONIZATION: Semaphores

We don't want to loop on busy, so we will suspend instead:

• Block when the semaphore is not available.
• Wake up on a signal (the semaphore becomes available).
• There may be numerous processes waiting for the semaphore, so keep a
  list of blocked processes.
• Wake up one of the blocked processes upon getting a signal (the
  choice of which one depends on strategy).

To PREVENT looping, we redefine the semaphore structure as:

typedef struct {
    int value;
    struct process *list;   /* linked list of PTBL entries waiting on S */
} SEMAPHORE;
PROCESS SYNCHRONIZATION: Semaphores

typedef struct {
    int value;
    struct process *list;   /* linked list of PTBL entries waiting on S */
} SEMAPHORE;

SEMAPHORE s;

wait(s) {
    s.value = s.value - 1;
    if (s.value < 0) {
        add this process to s.list;
        block;
    }
}

signal(s) {
    s.value = s.value + 1;
    if (s.value <= 0) {
        remove a process P from s.list;
        wakeup(P);
    }
}

• It's critical that these be atomic: in uniprocessors we can disable
  interrupts, but in multiprocessors other mechanisms for atomicity
  are needed.
• Popular incarnations of semaphores are as "event counts" and "lock
  managers". (We'll talk about these in the next chapter.)
PROCESS SYNCHRONIZATION: Semaphores

DEADLOCKS:

• May occur when two or more processes try to get the same multiple
  resources at the same time.

P1:                 P2:
    wait(S);            wait(Q);
    wait(Q);            wait(S);
    .....               .....
    signal(S);          signal(Q);
    signal(Q);          signal(S);

• How can this be fixed?
PROCESS SYNCHRONIZATION: Railways in the Andes, a Practical Problem

High in the Andes mountains, there are two circular railway lines. One
line is in Peru, the other in Bolivia. They share a common section of
track where the lines cross a mountain pass that lies on the
international border (near Lake Titicaca?).

Unfortunately, the Peruvian and Bolivian trains occasionally collide
when simultaneously entering the common section of track (the mountain
pass). The trouble is, alas, that the drivers of the two trains are
both blind and deaf, so they can neither see nor hear each other.
The two drivers agreed on the following method of preventing
collisions. They set up a large bowl at the entrance to the pass.
Before entering the pass, a driver must stop his train, walk over to
the bowl, and reach into it to see if it contains a rock. If the bowl
is empty, the driver finds a rock and drops it in the bowl, indicating
that his train is entering the pass; once his train has cleared the
pass, he must walk back to the bowl and remove his rock, indicating
that the pass is no longer being used. Finally, he walks back to the
train and continues down the line.

If a driver arriving at the pass finds a rock in the bowl, he leaves
the rock there; he repeatedly takes a siesta and rechecks the bowl
until he finds it empty. Then he drops a rock in the bowl and drives
his train into the pass. A smart graduate from the University of La Paz
(Bolivia) claimed that subversive train schedules made up by Peruvian
officials could block the train forever.

Explain.

The Bolivian driver just laughed and said that could not be true
because it never happened.

Explain.

Unfortunately, one day the two trains crashed.

Explain.
Following the crash, the graduate was called in as a consultant to
ensure that no more crashes would occur. He explained that the bowl was
being used in the wrong way. The Bolivian driver must wait at the entry
to the pass until the bowl is empty, drive through the pass, and walk
back to put a rock in the bowl. The Peruvian driver must wait at the
entry until the bowl contains a rock, drive through the pass, and walk
back to remove the rock from the bowl. Sure enough, his method
prevented crashes.

Prior to this arrangement, the Peruvian train ran twice a day and the
Bolivian train ran once a day. The Peruvians were very unhappy with the
new arrangement.

Explain.

The graduate was called in again and was told to prevent crashes while
avoiding the problem of his previous method. He suggested that two
bowls be used, one for each driver. When a driver reaches the entry, he
first drops a rock in his bowl, then checks the other bowl to see if it
is empty. If so, he drives his train through the pass, stops, and walks
back to remove his rock. But if he finds a rock in the other bowl, he
goes back to his bowl and removes his rock. Then he takes a siesta,
again drops a rock in his bowl, re-checks the other bowl, and so on,
until he finds the other bowl empty. This method worked fine until late
in May, when the two trains were simultaneously blocked at the entry
for many siestas.

Explain.
PROCESS SYNCHRONIZATION: Some Interesting Problems

THE BOUNDED BUFFER (PRODUCER/CONSUMER) PROBLEM:

This is the same producer/consumer problem as before, but now we'll do
it with signals and waits. Remember: a wait decreases its argument and
a signal increases its argument.

BINARY_SEMAPHORE mutex = 1;              // can only be 0 or 1
COUNTING_SEMAPHORE empty = n, full = 0;  // can take on any integer value

producer:
do {
    /* produce an item in nextp */
    wait(empty);
    wait(mutex);      /* buffer guard */
    /* add nextp to buffer */
    signal(mutex);
    signal(full);
} while (TRUE);

consumer:
do {
    wait(full);
    wait(mutex);      /* buffer guard */
    /* remove an item from the buffer to nextc */
    signal(mutex);
    signal(empty);
    /* consume the item in nextc */
} while (TRUE);
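The semaphore pseudocode above runs essentially as-is with POSIX semaphores. A sketch with one producer and one consumer moving 1000 integers through a 10-slot buffer (Linux unnamed semaphores assumed; item count and buffer size are arbitrary):

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 10
#define N_ITEMS 1000

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;
static long consumed_sum = 0;
static sem_t mutex, empty, full;

static void *producer(void *arg) {
    (void)arg;
    for (int k = 0; k < N_ITEMS; k++) {
        sem_wait(&empty);                 /* wait for a free slot */
        sem_wait(&mutex);                 /* buffer guard */
        buffer[in] = k;                   /* add item to buffer */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&full);                  /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int k = 0; k < N_ITEMS; k++) {
        sem_wait(&full);                  /* wait for an item */
        sem_wait(&mutex);                 /* buffer guard */
        consumed_sum += buffer[out];      /* remove item from buffer */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);                 /* one more empty slot */
    }
    return NULL;
}

/* Returns the sum of all consumed items: 0 + 1 + ... + 999 = 499500. */
long run_bounded_buffer(void) {
    pthread_t p, c;
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;
}
```

Note the wait order in each loop: taking mutex before empty/full (the reverse order) could deadlock, which is exactly the hazard shown in the DEADLOCKS slide.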
PROCESS SYNCHRONIZATION: Some Interesting Problems

THE READERS/WRITERS PROBLEM:

This is like the producer/consumer problem except that we now can have
many concurrent readers and one exclusive writer.

Locks are shared (for the readers) and exclusive (for the writer).

Two possible (contradictory) guidelines can be used:

• No reader is kept waiting unless a writer holds the lock (the
  readers have precedence).

• If a writer is waiting for access, no new reader gains access (the
  writer has precedence).

(NOTE: starvation can occur on either of these rules if they are
followed rigorously.)
PROCESS SYNCHRONIZATION: Some Interesting Problems

THE READERS/WRITERS PROBLEM:

BINARY_SEMAPHORE wrt = 1;
BINARY_SEMAPHORE mutex = 1;
int readcount = 0;

Writer:
do {
    wait(wrt);
    /* writing is performed */
    signal(wrt);
} while (TRUE);

Reader:
do {
    wait(mutex);                      /* allow 1 reader in entry */
    readcount = readcount + 1;
    if (readcount == 1) wait(wrt);    /* 1st reader locks writer */
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    readcount = readcount - 1;
    if (readcount == 0) signal(wrt);  /* last reader frees writer */
    signal(mutex);
} while (TRUE);
PROCESS SYNCHRONIZATION: Some Interesting Problems

THE DINING PHILOSOPHERS PROBLEM:

Five philosophers with five chopsticks sit around a circular table.
They each want to eat at random times and must pick up the chopsticks
on their right and on their left.

Clearly deadlock is rampant (and starvation is possible).

Several solutions are possible:

• Allow only four philosophers to be hungry at a time.

• Allow pickup only if both chopsticks are available. (Done in a
  critical section.)

• Odd-numbered philosophers always pick up the left chopstick first;
  even-numbered philosophers always pick up the right chopstick first.
PROCESS SYNCHRONIZATION: Critical Regions

A high-level synchronization construct implemented in a programming
language.

A shared variable v of type T is declared as:
    var v: shared T;

Variable v is accessed only inside a statement:
    region v when B do S
where B is a Boolean expression.

While statement S is being executed, no other process can access
variable v. Regions referring to the same shared variable exclude each
other in time.

When a process tries to execute the region statement, the Boolean
expression B is evaluated. If B is true, statement S is executed. If it
is false, the process is delayed until B becomes true and no other
process is in the region associated with v.
PROCESS SYNCHRONIZATION: Critical Regions
EXAMPLE: Bounded Buffer

Shared variables declared as:

struct buffer {
    int pool[n];
    int count, in, out;
};

Producer process inserts nextp into the shared buffer:

region buffer when (count < n) {
    pool[in] = nextp;
    in = (in + 1) % n;
    count++;
}

Consumer process removes an item from the shared buffer and puts it in nextc:

region buffer when (count > 0) {
    nextc = pool[out];
    out = (out + 1) % n;
    count--;
}

110
PROCESS SYNCHRONIZATION: Monitors
High-level synchronization construct that allows the safe sharing of an abstract data type
among concurrent processes.

monitor monitor-name
{
shared variable declarations
procedure body P1 (…) {
...
}
procedure body P2 (…) {
...
}
procedure body Pn (…) {
...
}
{
initialization code
}
}
111
PROCESS SYNCHRONIZATION: Monitors
• To allow a process to wait within the monitor, a condition variable must
be declared, as
condition x, y;
• Condition variable can only be used with the operations wait and signal.
– The operation
x.wait();
means that the process invoking this operation is suspended until
another process invokes
x.signal();
– The x.signal operation resumes exactly one suspended process. If no
process is suspended, then the signal operation has no effect.

112
PROCESS SYNCHRONIZATION: Monitors

[Figure: Schematic view of a monitor]

113
PROCESS SYNCHRONIZATION: Monitors

[Figure: Monitor with condition variables]

114
PROCESS SYNCHRONIZATION: Monitors, Dining Philosophers Example
monitor dp {
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = thinking;
        // test left & right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != eating) &&
            (state[i] == hungry) &&
            (state[(i + 1) % 5] != eating)) {
            state[i] = eating;
            self[i].signal();
        }
    }

    initializationCode() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
115
PROCESS SYNCHRONIZATION: How Is This Really Used?

Windows XP Synchronization
• Uses interrupt masks to protect access to global resources on uniprocessor
systems.

• Uses spinlocks on multiprocessor systems.

• Also provides dispatcher objects, which may act as either mutexes or semaphores.

• Dispatcher objects may also provide events. An event acts much like a
condition variable.

116
PROCESS SYNCHRONIZATION: Wrap Up

In this chapter we have:

Looked at many incarnations of the producer consumer problem.

Understood critical sections and their role in implementing semaphores.

Synchronization IS used in real life. Programmers generally don't use the really primitive
hardware locks directly, but rely on higher-level mechanisms such as those we've demonstrated.

117
Priority Inversion
• A challenging scheduling problem arises when a high-priority
process gets blocked waiting for a resource that is currently held by
a low-priority process.
• If the low-priority process gets pre-empted by one or more
medium-priority processes, then the high-priority process is
essentially made to wait for the medium priority processes to finish
before the low-priority process can release the needed resource,
causing a priority inversion.
• If there are enough medium-priority processes, then the high-
priority process may be forced to wait for a very long time.
• One solution is a priority-inheritance protocol, in which a low-
priority process holding a resource for which a high-priority process
is waiting will temporarily inherit the high priority from the waiting
process.
• This prevents the medium-priority processes from preempting the
low-priority process until it releases the resource, avoiding the
priority inversion problem.
118
Priority inheritance protocol
Simple priority inheritance

119
• Task L receives control of the processor and begins executing.
– The task makes a request for Resource A.
• Task L is granted ownership of Resource A and enters its critical
region.
• Task L is preempted by Task H, a higher-priority task.
– Task H begins executing and requests ownership of Resource A, which is
owned by Task L.
• Task L is hoisted to a priority above Task H and resumes executing its
critical region.
• Task L releases Resource A and is lowered back to its original priority.
– Task H acquires ownership of Resource A and begins executing its critical
region.
• Task H releases Resource A and continues executing normally.
• Task H finishes executing and Task L continues executing normally.
• Task L finishes executing.

120
Three-task, one-resource priority inheritance

121
• Task 3 gets control of the processor and begins executing.
– The task requests ownership of Resource A.
• Task 3 acquires Resource A and begins executing its critical region.
• Task 3 is preempted by Task 2, a higher-priority task.
– Task 2 begins executing normally and requests Resource A, which is owned by
Task 3.
• Task 3 is hoisted to a priority above Task 2 and resumes executing its critical
region.
• Task 3 is preempted by Task 1, a higher-priority task.
– Task 1 begins executing and requests Resource A, which is owned by Task 3.
• Task 3 is hoisted to a priority above Task 1.
– Task 3 resumes executing its critical region.
• Task 3 releases Resource A and is lowered back to its original priority.
– Task 1 acquires ownership of Resource A and begins executing its critical region.
• Task 1 releases Resource A and continues executing normally.
• Task 1 finishes executing. Task 2 acquires Resource A and begins executing its
critical region.
• Task 2 releases Resource A and continues executing normally.
• Task 2 finishes executing. Task 3 resumes and continues executing normally.
• Task 3 finishes executing.
122
Three-task, two-resource priority inheritance

123
• Task 3 is given control of the processor and begins executing. The task requests
Resource A.
• Task 3 acquires ownership of Resource A and begins executing its critical region.
• Task 3 is preempted by Task 2, a higher-priority task. Task 2 requests ownership of
Resource B.
• Task 2 is granted ownership of Resource B and begins executing its critical region.
– The task requests ownership of Resource A, which is owned by Task 3.
• Task 3 is hoisted to a priority above Task 2 and resumes executing its critical region.
• Task 3 is preempted by Task 1, a higher-priority task.
– Task 1 requests Resource B, which is owned by Task 2.
• Task 2 is hoisted to a priority above Task 1. However, Task 2 still can't execute because it
must wait for Resource A, which is owned by Task 3.
– Task 3 is hoisted to a priority above Task 2 and continues executing its critical region.
• Task 3 releases Resource A and is lowered back to its original priority.
– Task 2 acquires ownership of Resource A and resumes executing its critical region.
• Task 2 releases Resource A and then releases Resource B. The task is lowered back to its
original priority.
– Task 1 acquires ownership of Resource B and begins executing its critical region.
• Task 1 releases Resource B and continues executing normally.
• Task 1 finishes executing. Task 2 resumes and continues executing normally.
• Task 2 finishes executing. Task 3 resumes and continues executing normally.
• Task 3 finishes executing.

124
