Module 3 DS
SYNCHRONIZATION
Every computer needs a timer mechanism (called a
computer clock) to keep track of the current time and also for
various accounting purposes, such as calculating the time
spent by a process on CPU utilization, disk I/O, and so on,
so that the corresponding user can be charged properly.
1. The value in the constant register is chosen so that 60 clock ticks occur in
a second.
2. The computer clock is synchronized with real time (an external clock). For
this, two more values are stored in the system: a fixed starting date and
time, and the number of ticks since then. For example, in UNIX, time begins
at 0000 on January 1, 1970.
• At the time of initial booting, the system asks the operator to
enter the current date and time. The system converts the entered
value to the number of ticks after the fixed starting date and time.
• TAI (International Atomic Time) is then just the mean number of ticks of
the cesium-133 clocks since midnight on Jan. 1, 1958 (the beginning of
time), divided by 9,192,631,770.
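The tick bookkeeping above can be sketched in Python. This is a minimal sketch: the 60-ticks-per-second constant register, the UNIX starting date, and the cesium-133 frequency come from the text, while the function names are illustrative.

```python
from datetime import datetime, timezone

TICKS_PER_SECOND = 60          # constant register chosen so 60 clock ticks occur per second
UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)  # fixed starting date and time
CESIUM_FREQ = 9_192_631_770    # cesium-133 transitions that define one SI second

def ticks_since_epoch(entered: str) -> int:
    """Convert an operator-entered date/time into the number of ticks
    after the fixed starting date (UNIX: 0000 on January 1, 1970)."""
    t = datetime.strptime(entered, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return int((t - UNIX_EPOCH).total_seconds()) * TICKS_PER_SECOND

def tai_seconds(cesium_ticks: int) -> float:
    """TAI seconds: mean number of cesium-133 ticks since midnight,
    Jan. 1, 1958, divided by 9,192,631,770."""
    return cesium_ticks / CESIUM_FREQ
```

For example, one minute after the UNIX epoch corresponds to 60 s × 60 ticks/s = 3600 ticks.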
Drifting of clocks
• Ideally a clock would run at a constant rate because its quartz crystal
oscillates at a well-defined frequency; in practice, crystals in different
machines oscillate at slightly different frequencies, so their clocks
gradually drift apart.
• Logical time allows the order in which the messages are presented
to be inferred without recourse to clocks.
Logical clock
• Lamport invented a simple mechanism by which the happened before
ordering can be captured numerically, called a logical clock.
• A Lamport logical clock is a monotonically increasing software
counter, whose value need bear no particular relationship to any
physical clock. Each process pi keeps its own logical clock, Li, which
it uses to apply so-called Lamport timestamps to events.
• We denote the timestamp of event e at pi by Li(e) , and by L(e) we
denote the timestamp of event e at whatever process it occurred
at.
Lamport’s Algorithm
To implement Lamport’s logical clocks, each process Pi maintains a
local counter Ci. These counters are updated according to the
following steps:
1. Before executing an event (an internal event or the sending of a
message), Pi increments Ci: Ci := Ci + 1.
2. When Pi sends a message m, it attaches the timestamp ts(m) = Ci to the
message.
3. Upon receiving a message m, process Pj adjusts its counter to
Cj := max(Cj, ts(m)) and then applies step 1 before delivering the
message to the application.
• Physical clocks, by contrast, can be kept together by distributed
averaging: periodically, each node exchanges its clock time with its
neighbors in the ring, grid, or other structure and then sets its clock
time to the average of its own clock time and the clock times of its
neighbors.
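Lamport’s logical clock can be sketched as a small Python class (a minimal sketch; the class and method names are illustrative): the counter is incremented before each local event, attached to outgoing messages, and on receipt set to the maximum of the local and received values plus one.

```python
class LamportClock:
    """Monotonically increasing software counter; its value bears no
    particular relationship to any physical clock."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Increment the counter before each local event
        self.time += 1
        return self.time

    def send(self):
        # A send is an event: increment, then attach the timestamp
        return self.tick()

    def receive(self, msg_timestamp):
        # Take the max of local and message timestamps, then increment
        self.time = max(self.time, msg_timestamp) + 1
        return self.time
```

With two processes p1 and p2, a message stamped 1 by p1 is received by p2, which then advances its clock to 2, so the send is ordered before the receive.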
Event Ordering
• Keeping the clocks in a distributed system synchronized to within 5
or 10 msec is an expensive and nontrivial task.
1. Mutual exclusion.
• Given a shared resource accessed by multiple concurrent
processes, at any time only one process should access the
resource. That is, a process that has been granted the resource
must release it before it can be granted to another process.
2. No starvation.
• If every process that is granted the resource eventually releases
it, every request must eventually be granted.
Mutual Exclusion
• Distributed processes often need to coordinate their activities.
If a collection of processes share a resource or collection of
resources, then often mutual exclusion is required to prevent
interference and ensure consistency when accessing the
resources.
A ring-based algorithm
• One of the simplest ways to arrange mutual exclusion between
the N processes without requiring an additional process is to
arrange them in a logical ring.
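The ring idea can be illustrated with a single-process simulation (a sketch under assumptions: the function name and data layout are illustrative, not a real message-passing implementation). A token circulates around the ring, and a process enters its critical section only while holding the token.

```python
def ring_mutex_simulation(n_processes, requests, max_rounds=100):
    """Simulate token passing around a logical ring of n_processes.
    requests maps process id -> number of critical-section entries wanted.
    Returns the order in which processes entered the critical section."""
    pending = dict(requests)
    order = []
    token_holder = 0
    for _ in range(max_rounds):
        if not any(pending.values()):
            break                      # no outstanding requests
        if pending.get(token_holder, 0) > 0:
            order.append(token_holder)  # enter critical section while holding the token
            pending[token_holder] -= 1
        token_holder = (token_holder + 1) % n_processes  # pass token to the neighbor
    return order
```

Because only one token exists, at most one process is ever in its critical section, which gives mutual exclusion without any additional coordinator process.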
Mutual Exclusion
Centralized Approach
Mutual Exclusion
Distributed Approach
Deadlock
• Deadlock is the state of permanent blocking of a set of
processes each of which is waiting for an event that only
another process in the set can cause.
For example, suppose a system has two tape drives, T1 and T2, shared by
processes P1 and P2:
1. P1 requests one tape drive, and the system allocates T1 to it.
2. P2 requests one tape drive, and the system allocates T2 to it.
3. P1 requests one more tape drive and enters a waiting state
because no tape drive is presently available.
4. P2 requests one more tape drive and also enters a waiting
state because no tape drive is presently available.
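The scenario above ends in a circular wait: P1 waits for the drive held by P2, and P2 waits for the drive held by P1. This can be found mechanically by checking a wait-for graph for a cycle; the sketch below (names illustrative) shows the idea behind detection-based deadlock handling.

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it is waiting for]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True               # back edge: a cycle, hence a deadlock
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# After steps 3 and 4 above, each process waits for the other:
tape_drive_scenario = {"P1": ["P2"], "P2": ["P1"]}
```

Running `has_cycle(tape_drive_scenario)` reports the deadlock; if P2 had released T2 instead of waiting, the graph would be acyclic and no deadlock would exist.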
Necessary Conditions for Deadlock
The following conditions are necessary for a deadlock situation to occur in a
system:
1. Mutual-exclusion condition. If a resource is held by a process, any
other process requesting for that resource must wait until the resource
has been released.
2. Hold-and-wait condition. Processes are allowed to request new
resources without releasing the resources that they are currently
holding.
3. No-preemption condition. A resource that has been allocated to a
process becomes available for allocation to another process only after it
has been voluntarily released by the process holding it.
4. Circular-wait condition. Two or more processes must form a circular
chain in which each process is waiting for a resource that is held by the
next member of the chain.
Deadlock Handling
The three commonly used strategies to handle
deadlocks are
1. Avoidance,
2. Prevention,
3. Detection and Recovery.
Deadlock Handling
• Deadlock avoidance methods use some advance knowledge of
the resource usage of processes to predict the future state of the
system for avoiding allocations that can eventually lead to a deadlock.
• For recovery from a detected deadlock, a system may use one of the
following methods: asking for operator intervention, termination of
process(es), or rollback of process(es).
Election Algorithms
• Several distributed algorithms require that there be a coordinator
process in the entire system.
• Election algorithms are meant for electing a coordinator process
from among the currently running processes.
• An algorithm for choosing a unique process to play a particular role is
called an election algorithm.
• Ex: In a variant of the central server algorithm for mutual exclusion, the
server is chosen from among the processes.
• The different algorithms are:
• Ring-based election algorithm
• Bully algorithm
Election Algorithms
• Election algorithms are based on the following assumptions:
1. Each process in the system has a unique priority number.
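Under the unique-priority assumption, the ring-based election can be sketched as a simulation (illustrative names; a real implementation would pass an election message between processes). The message travels once around the ring, each process adds its priority number, and the highest number seen is announced as the new coordinator.

```python
def ring_election(priorities, initiator_index):
    """Sketch of a ring-based election.
    priorities[i] is the unique priority number of the process at ring
    position i; initiator_index is the process that starts the election.
    Returns the priority number of the elected coordinator."""
    n = len(priorities)
    best = priorities[initiator_index]    # the initiator puts its own priority in the message
    i = (initiator_index + 1) % n
    while i != initiator_index:           # message circulates once around the ring
        best = max(best, priorities[i])   # each process records the highest priority so far
        i = (i + 1) % n
    return best                           # announced as the new coordinator
```

Because every priority number is unique, exactly one process recognizes its own number in the returned value and takes over as coordinator.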