WINSEM2022-23 CSE4001 ETH VL2022230503162 Reference Material I (Tue Feb 14 2023 IST): Module 4, Distributed Systems, Lecture 2

Distributed shared memory (DSM) is an abstraction that allows processes on different computers to share memory even though they do not share physical memory. DSM aims to overcome memory-size limitations and to support a better programming paradigm. It manages replicated data and caches data locally to reduce communication overhead. However, because DSM implementations rely on asynchronous message passing, they cannot be more efficient than message-passing implementations. Message passing, by contrast, makes communication between processes explicit through the exchange of messages. It requires decomposing computations into parallel tasks and minimizing access to non-local data, and care must be taken to avoid deadlocks with blocking message-passing operations.


DISTRIBUTED SYSTEMS

MODULE IV
Distributed Shared Memory (DSM)

- an indirect communication mechanism


Distributed Shared Memory

• an abstraction used for sharing data between computers that do not share physical memory
• main goals of DSM:
(1) to overcome the architectural limitations (memory size) and
(2) to support a better programming paradigm

• supports read and update/write operations

• a tool for parallel applications or for any distributed application


Distributed Shared Memory
• Message passing cannot be avoided altogether in a distributed system
• DSM systems manage replicated data
• Early example: the Apollo Domain file system (1983)
• DSM grew alongside the development of shared-memory multiprocessors
• Research focused on algorithms suitable for parallel computation on these multiprocessors
• Key hardware concerns: caching strategies and fast processor-memory interconnections
• Bus contention limits how many processors can share a single bus
• Processors sharing memory are commonly constructed in groups of four, sharing a memory module over a bus on a single circuit board
• Multiprocessors with up to 64 processors in total are constructed from such boards in a Non-Uniform Memory Access (NUMA) architecture
• Hierarchical architecture: groups of four boards are connected using a high-performance switch
Distributed Shared Memory

Advantages
• Message passing is hidden
• Programming is much easier
• Provides large virtual memory space
• Can handle complex and large databases
• Scales well with a large number of nodes
• Cheaper than using a multiprocessor system
• Shields programmers from send/receive primitives
• Programs for shared memory multiprocessors can easily be ported to DSMs

Disadvantages
• Generally slower to access than non-distributed shared memory
• Must provide additional protection against simultaneous accesses to shared data
• DSM implementations use asynchronous message-passing, and hence cannot be
more efficient than message passing implementations
Challenges in Distributed Shared Memory

1. How to keep track of the location of remote data?

2. How to overcome the communication delays and high overhead associated with the
references to remote data?

3. How to allow "controlled" concurrent accesses to shared data?


Algorithms for implementing Distributed Shared Memory

1. Central-server algorithm

• a central-server maintains all the shared data.


• for a read: the server simply returns the data
• for a write: the server updates the data and sends an acknowledgement to the client
• a simple working solution to provide shared memory for distributed applications
• low efficiency: bottleneck at the server, long memory access latency
• data can be distributed across several servers, but this requires a directory to store the location of each page
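The read/write protocol above can be sketched in a few lines of Python. This is a minimal single-process sketch with hypothetical names (`CentralServer`, `Client`), not a real DSM implementation; in a real system the `handle` call would be an RPC over the network.

```python
class CentralServer:
    """Holds the authoritative copy of every shared page."""
    def __init__(self):
        self.pages = {}              # page id -> value

    def handle(self, request):
        op, page, value = request
        if op == "read":
            # For a read the server simply returns the current data.
            return ("data", self.pages.get(page))
        elif op == "write":
            # For a write the server updates the data and acknowledges.
            self.pages[page] = value
            return ("ack", None)

class Client:
    """All memory accesses become requests to the central server."""
    def __init__(self, server):
        self.server = server

    def read(self, page):
        _, value = self.server.handle(("read", page, None))
        return value

    def write(self, page, value):
        self.server.handle(("write", page, value))

server = CentralServer()
a, b = Client(server), Client(server)
a.write("x", 42)
print(b.read("x"))                   # both clients see the single authoritative copy
```

Because every access goes through `server.handle`, the bottleneck and the long access latency mentioned above are visible directly in the structure of the code.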
Algorithms for implementing Distributed Shared Memory

2. Migration algorithm

• Data is shipped to the location of the access request; subsequent accesses are local
• For both read and write: bring the remote page to the local machine, then perform the operation
• Keeping track of memory location: a location service, a home machine for each page, or broadcast
• Problems: thrashing (pages move between nodes frequently) and false sharing
• Multiple reads can be costly, since even read-only access migrates the page
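The migration behaviour can be sketched as follows, using hypothetical names (`LocationService`, `Node`) and a trivial in-process "network". The single copy of each page moves to whichever node accesses it, and a location service records the current owner.

```python
class LocationService:
    def __init__(self):
        self.owner = {}              # page id -> node currently holding it

class Node:
    def __init__(self, name, locator):
        self.name, self.locator = name, locator
        self.local = {}              # pages currently resident on this node

    def _migrate(self, page):
        # If the page lives elsewhere, ship it here before the access;
        # subsequent accesses are then local until another node takes it.
        owner = self.locator.owner.get(page)
        if owner is not None and owner is not self:
            self.local[page] = owner.local.pop(page)
        self.locator.owner[page] = self

    def read(self, page):
        self._migrate(page)          # even a read migrates the page
        return self.local.get(page)

    def write(self, page, value):
        self._migrate(page)
        self.local[page] = value

loc = LocationService()
n1, n2 = Node("n1", loc), Node("n2", loc)
n1.write("p", "hello")               # page p now lives on n1
print(n2.read("p"))                  # p migrates to n2; the read is now local
```

Thrashing is visible in this sketch: if `n1` and `n2` alternate accesses to `"p"`, the page bounces between them on every call.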
Algorithms for implementing Distributed Shared Memory

3. The Read-Replication algorithm

• On a read, the page is replicated and marked "multiple reader"
• On a write, all copies except one must be updated or invalidated
• multiple readers, one writer
• Allows multiple readers of a page
• The locations of all copies must be tracked: location service / home machine
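A minimal sketch of the read-replication rule, with hypothetical names and the invalidation variant of the write step: reads record a new replica, and a write invalidates every copy except the writer's before proceeding.

```python
class ReadReplicationDSM:
    def __init__(self):
        self.value = {}              # page -> authoritative value
        self.copies = {}             # page -> set of nodes holding a replica

    def read(self, node, page):
        # Mark the page "multiple reader" by recording the new replica.
        self.copies.setdefault(page, set()).add(node)
        return self.value.get(page)

    def write(self, node, page, value):
        # Invalidate all copies except the writer's before the write,
        # so at most one writable copy exists.
        self.copies[page] = {node}
        self.value[page] = value

dsm = ReadReplicationDSM()
dsm.write("n1", "p", 1)
dsm.read("n2", "p")                  # n2 now holds a replica of p
dsm.write("n3", "p", 2)              # invalidates the replicas on n1 and n2
print(dsm.copies["p"])               # only n3 keeps a valid copy
```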
Algorithms for implementing Distributed Shared Memory

4. Full Replication algorithm

• Allows multiple readers and multiple writers
• Access to shared memory must be controlled
• One approach uses a gap-free sequencer: every node wishing to modify shared data sends the modification to the sequencer, which assigns a sequence number and multicasts the modification, with its sequence number, to all nodes holding a copy of the shared data item
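The sequencer scheme can be sketched as below (hypothetical names, in-process "multicast"). Replicas apply modifications strictly in sequence-number order, buffering any that arrive out of order, so every copy sees the same total order of writes.

```python
class Sequencer:
    def __init__(self, replicas):
        self.next_seq = 0
        self.replicas = replicas

    def submit(self, page, value):
        seq = self.next_seq
        self.next_seq += 1           # gap-free: 0, 1, 2, ...
        for r in self.replicas:      # multicast (seq, modification)
            r.deliver(seq, page, value)
        return seq

class Replica:
    def __init__(self):
        self.expected = 0            # next sequence number to apply
        self.pending = {}            # buffer for out-of-order deliveries
        self.store = {}

    def deliver(self, seq, page, value):
        self.pending[seq] = (page, value)
        # Apply in strict sequence order; a gap means a write is missing
        # and later writes must wait for it.
        while self.expected in self.pending:
            p, v = self.pending.pop(self.expected)
            self.store[p] = v
            self.expected += 1

r1, r2 = Replica(), Replica()
seq = Sequencer([r1, r2])
seq.submit("x", 1)
seq.submit("x", 2)
print(r1.store == r2.store)          # all replicas converge on the same data
```

The gap-free numbering is what lets a replica detect a missing modification: if it holds number 3 but not 2, it knows it must wait rather than apply 3 early.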
Methods for achieving Distributed Shared Memory

Hardware based
• cache coherence circuits and network interfaces

Software based
• page based approach using the system’s virtual memory;
• shared variable approach using some routines to access shared variables;
• object based approach ideally accessing shared data through object-oriented
discipline.
Memory Coherence
• The set of allowable memory access orderings forms the memory consistency model.
• A memory is coherent if the value returned by a read operation is always the value that the
programmer expected.
Two types of basic cache coherence protocols
• Write-Invalidate Protocol: a write to a shared data causes the invalidation of all copies except one
before the write.
• Write-Update Protocol: A write to a shared data causes all copies of that data to be updated.
Consistency Models
A DSM system must follow certain rules about how the order of reads and writes is viewed among nodes; these rules are called the system's consistency model
Memory Coherence
• The strict consistency model is typical of a uniprocessor: a read returns the most recently written value.
• It is very costly to enforce the strict consistency model in distributed systems: how do we determine the last write?
• To improve performance, we need to relax the memory consistency model.

Relaxed memory consistency models


• sequential consistency: the result of any execution is the same as if the operations of all the processors were executed in some sequential order.
• general consistency: all copies of a memory location eventually contain the same data once all the writes issued by every processor have completed.
• other consistency models: processor consistency, release consistency
Message Passing
Message Passing Example:
[Fig. 1: Distributed Memory System; Fig. 2: Message communication]
Programming using the message passing paradigm

Each process has its own address space

Supports explicit parallelism

Two implications with regard to the address space:
• each data element must belong to one of the partitions of the space
• all interactions require the cooperation of two processes: the process that has the data and the process that wants to access it

Challenges
The programmer has complete responsibility for decomposing the computations and extracting concurrency to achieve parallelism
Algorithms must be built to reduce access to non-local resources and so minimize costs
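The "own address space, explicit messages" model can be seen in a minimal Python example using the standard `multiprocessing` module: each process has a separate address space, so the only way to obtain the worker's partial sum is an explicit message over a pipe.

```python
from multiprocessing import Process, Pipe

def worker(conn, data):
    # This process owns its partition of the data; other processes must
    # request results via messages rather than reading memory directly.
    conn.send(sum(data))
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child, [1, 2, 3, 4]))
    p.start()
    total = parent.recv()            # explicit receive; blocks for the message
    p.join()
    print(total)
```

The `if __name__ == "__main__"` guard is required on platforms where `multiprocessing` spawns rather than forks child processes.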
Structure of Message Passing Programs

Asynchronous or loosely synchronous paradigms

Loosely Synchronous Paradigm
Tasks or subsets of tasks synchronize to perform interactions.
However, between these interactions, tasks execute completely asynchronously.

Asynchronous Paradigm
All concurrent tasks execute asynchronously - makes it possible to implement any parallel
algorithm.
However, such programs can be harder to reason about, and can have nondeterministic
behavior due to race conditions.

Single Program Multiple Data (SPMD) approach


Although the message-passing paradigm supports execution of a different program on each of the processes, most programs use the SPMD approach, in which all processes run the same program on different data
SPMD programs can be loosely synchronous or completely asynchronous.
Send and Receive Operations

Syntax:

Example:

The system must ensure the semantics of the send and receive operations

This may require additional hardware or software support (e.g., a buffer)
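In textbook notation these operations are often written as `send(sendbuf, nelems, dest)` and `receive(recvbuf, nelems, source)`. The semantics the buffer protects can be shown with a small sketch (hypothetical names, a `queue.Queue` standing in for the communication buffer): a buffered send copies the data out of the sender's variable at the moment of the call, so later changes by the sender cannot alter the message.

```python
import copy
import queue

channel = queue.Queue()              # stands in for the communication buffer

def send(data):
    # Copy into the buffer immediately; the sender may then safely
    # reuse or modify `data` without corrupting the message.
    channel.put(copy.deepcopy(data))

def receive():
    return channel.get()             # blocks until a message is available

msg = [1, 2, 3]
send(msg)
msg[0] = 99                          # mutation after the send must not leak
print(receive())                     # the receiver still sees [1, 2, 3]
```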
Blocking Message Passing Operations
1. Blocking Non-Buffered Send/Receive

Deadlocks in Blocking Non-Buffered Operations

Deadlocks arise easily in blocking protocols, and care must be taken to break cyclic waits
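The classic cyclic wait, and the standard fix of ordering the operations by rank, can be sketched as follows. The `SyncChannel` class is a hypothetical stand-in that approximates non-buffered semantics with threads: a send blocks until the matching receive has run. If both processes sent first, each would wait forever for the other's receive; here P0 (even rank) sends first while P1 (odd rank) receives first, which breaks the cycle.

```python
import threading
import queue

class SyncChannel:
    """Approximates a blocking non-buffered channel: send blocks
    until the matching receive has completed."""
    def __init__(self):
        self.data, self.ack = queue.Queue(), queue.Queue()

    def send(self, value):
        self.data.put(value)
        self.ack.get()               # wait for the matching receive

    def recv(self):
        value = self.data.get()
        self.ack.put(None)           # release the blocked sender
        return value

c01, c10 = SyncChannel(), SyncChannel()   # channels P0->P1 and P1->P0
out = {}

def p0():                            # even rank: send first, then receive
    c01.send("from-p0")
    out[0] = c10.recv()

def p1():                            # odd rank: receive first, then send
    out[1] = c01.recv()
    c10.send("from-p1")

t0, t1 = threading.Thread(target=p0), threading.Thread(target=p1)
t0.start(); t1.start(); t0.join(); t1.join()
print(out)                           # both exchanges complete, no deadlock
```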
Blocking Message Passing Operations
2. Blocking Buffered Send/Receive
Another situation that causes deadlock


Non-Blocking Message Passing Operations
Figure: Non-blocking non-buffered send and receive operations (a) in the absence of communication hardware; (b) in the presence of communication hardware.

Non-blocking operations are generally accompanied by a check-status operation
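A non-blocking send paired with a check-status operation can be sketched as below (hypothetical names, loosely modelled on MPI-style `Isend`/`Test` semantics, with a background thread standing in for the communication hardware). The send returns immediately and the transfer proceeds in the background; the program must not reuse the buffer until the status check reports completion.

```python
import threading
import time
import queue

channel = queue.Queue()

class Request:
    """Handle returned by the non-blocking send."""
    def __init__(self):
        self.done = threading.Event()

    def test(self):
        return self.done.is_set()    # check-status: has the transfer finished?

    def wait(self):
        self.done.wait()             # block until the transfer finishes

def isend(data):
    req = Request()
    def transfer():
        time.sleep(0.01)             # pretend the copy takes a while
        channel.put(list(data))      # the copy happens during the transfer
        req.done.set()
    threading.Thread(target=transfer).start()
    return req                       # returns immediately, before the copy

buf = [1, 2, 3]
req = isend(buf)
# ... overlap useful computation with the communication here ...
req.wait()                           # only now is it safe to reuse `buf`
print(channel.get())
```

Overwriting `buf` before `req.wait()` could corrupt the message in this sketch, which is exactly the hazard the check-status operation exists to prevent.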


DSM vs Message Passing
