Distributed Computing Imp Questions
PART-A
1. How do peer-to-peer systems differ from client-server systems?
The primary difference between the two is that in a peer-to-peer network every node can both request and provide services, whereas in a
client-server network the client nodes request services and the server node responds to those requests.
2. What do you mean by a snapshot recording algorithm?
A snapshot algorithm is used to record a consistent snapshot of the global state of a distributed system. Because there is no globally
shared memory and no global clock, this is not trivially possible.
3. Differentiate between snapshot recording and back-up.
The main difference between snapshots and backups is that a backup is an independent copy kept separately from the original system,
whereas snapshot copies are stored alongside the original data on the same storage. Snapshots work well for frequent protection measured
in minutes or hours, while backups are used for regular, longer-term protection.
4. State the principle of message ordering with example.
Message ordering is important to some asynchronous messaging applications; that is, it is important to consume messages in the same
order that the producer sends them. If this type of message ordering is important to your application, your design must take it into
account. For example, if send(M1) -> send(M2), then every recipient of both messages M1 and M2 must receive M1 before
receiving M2.
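A minimal Python sketch of this idea is given below (the FIFOReceiver class and deliver method are illustrative names, not part of any standard library): each sender stamps its messages with a sequence number, and the receiver holds back out-of-order arrivals until the earlier messages have been delivered.

```python
# Sketch: enforcing per-sender FIFO delivery with sequence numbers.
# Names (FIFOReceiver, deliver) are illustrative, not from any library.

class FIFOReceiver:
    def __init__(self):
        self.expected = {}   # sender id -> next sequence number expected
        self.buffer = {}     # sender id -> {seq: message} held-back messages

    def on_receive(self, sender, seq, msg):
        exp = self.expected.setdefault(sender, 0)
        self.buffer.setdefault(sender, {})[seq] = msg
        # Deliver every consecutive message that is now deliverable.
        while exp in self.buffer[sender]:
            self.deliver(sender, self.buffer[sender].pop(exp))
            exp += 1
        self.expected[sender] = exp

    def deliver(self, sender, msg):
        print(f"delivered {msg!r} from {sender}")

r = FIFOReceiver()
r.on_receive("P1", 1, "M2")   # arrives out of order, held back
r.on_receive("P1", 0, "M1")   # delivers M1, then the buffered M2
```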
5. Define causal ordering execution.
Causal ordering: if multicast(g, m) -> multicast(g, m'), then any correct process that delivers m' will have already delivered m.
Total ordering: if a correct process delivers message m before m' (independent of the senders), then any other correct process that
delivers m' will have already delivered m.
6. Point out the principle of logical time.
Logical time in distributed systems is used to maintain a consistent ordering of events. The concept of causality, i.e. the causal
precedence relation, is fundamental to distributed systems.
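As a rough illustration, the sketch below implements Lamport's logical clock rules in Python (the class and method names are assumptions made for the example): a process increments its clock on local and send events, and on receipt takes the maximum of its own clock and the message timestamp before incrementing.

```python
# Sketch of Lamport's logical clock rules (illustrative class names).
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send_event(self):
        # Timestamp attached to an outgoing message.
        self.time += 1
        return self.time

    def receive_event(self, msg_time):
        # On receipt, take the max of local and message time, then increment.
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t = p.send_event()    # P sends a message with timestamp 1
q.receive_event(t)    # Q's clock jumps to max(0, 1) + 1 = 2
```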
7. List out the various message ordering paradigms used in distributed systems
Several orderings on messages have been defined: (i) non-FIFO, (ii) FIFO, (iii) causal order, and (iv) synchronous order.
PART-B
8.a.i. How do you classify parallel systems? Briefly explain them.
Parallel applications are typically classified as fine-grained parallelism, in which subtasks communicate many times per
second; coarse-grained parallelism, in which subtasks do not communicate many times per second; or embarrassing parallelism, in
which subtasks rarely or never communicate. There are multiple types of parallel processing; two of the most commonly used types
are SIMD and MIMD. SIMD, or single instruction multiple data, is a form of parallel processing in which a computer has
two or more processors follow the same instruction set while each processor handles different data. MIMD, or multiple instruction
multiple data, allows each processor to execute its own instruction stream on its own data.
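A rough Python sketch of the two styles is shown below. NumPy's vectorised addition and the multiprocessing pool are only stand-ins for the underlying ideas (one operation applied across many data elements versus independent instruction streams); real SIMD and MIMD execution happen at the hardware level.

```python
# Sketch contrasting SIMD-style and MIMD-style parallelism
# (numpy and multiprocessing are convenient stand-ins, not hardware SIMD/MIMD).
import numpy as np
from multiprocessing import Pool

# SIMD-style data parallelism: one operation (add) applied to many data elements.
a = np.arange(8)
b = np.ones(8)
c = a + b                      # the same operation across all elements

# MIMD-style task parallelism: independent workers run different code on different data.
def square(x):
    return x * x

def cube(x):
    return x * x * x

if __name__ == "__main__":
    with Pool(2) as pool:
        r1 = pool.apply_async(square, (3,))
        r2 = pool.apply_async(cube, (3,))
        print(c, r1.get(), r2.get())
```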
8.a.ii.Compare synchronous versus asynchronous execution.
The differences between asynchronous and synchronous execution include: async is multi-threaded, which means operations or programs can run
in parallel, whereas sync is single-threaded, so only one operation or program runs at a time. Async is non-blocking, which means it can
send multiple requests to a server without waiting for replies. Synchronous execution means the first task in a program must finish
processing before moving on to the next task, whereas asynchronous execution means a second task can begin executing in parallel,
without waiting for an earlier task to finish.
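The difference can be illustrated with a small Python asyncio sketch (the task coroutine and the timings are assumptions made for the example): run synchronously, two one-second tasks take about two seconds; run asynchronously, their waits overlap and they finish in about one second.

```python
# Sketch: the same two tasks run synchronously (one after the other)
# and asynchronously (overlapping waits) using asyncio.
import asyncio
import time

async def task(name, delay):
    await asyncio.sleep(delay)   # stands in for I/O such as a network request
    return name

def run_sync():
    start = time.perf_counter()
    asyncio.run(task("A", 1))
    asyncio.run(task("B", 1))
    return time.perf_counter() - start   # ~2 s: second task waits for the first

async def run_async():
    start = time.perf_counter()
    await asyncio.gather(task("A", 1), task("B", 1))
    return time.perf_counter() - start   # ~1 s: the waits overlap

print(round(run_sync()), round(asyncio.run(run_async())))
```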
8.b. Explain briefly the functions that must be addressed while designing and building a distributed system.
The most important functions of distributed computing are: Resource sharing - whether it is hardware, software or data that can be
shared. Openness - how openly the software is designed to be developed and shared. Concurrency - multiple machines
can process the same function at the same time. Scalability - how the computing and processing capabilities grow when extended to
many machines. Fault tolerance - how easily and quickly failures in parts of the system can be detected and recovered from.
9.a. Illustrate the necessary and sufficient conditions for causal ordering with two examples.
Causal ordering of messages is one of the four semantics of multicast communication namely unordered, totally ordered, causal, and
sync-ordered communication. Multicast communication methods vary according to the message’s reliability guarantee and ordering
guarantee. The causal ordering of messages describes the causal relationship between a message send event and a message receive
event. For example, if send(M1) -> send(M2) then every recipient of both the messages M1 and M2 must receive the message M1
before receiving the message M2. In Distributed Systems the causal ordering of messages is not automatically guaranteed.
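One common way to enforce this condition at the receiver is a vector-clock delivery check in the style of the Birman-Schiper-Stephenson protocol; the Python sketch below (with illustrative names and a hand-built three-process example) shows the test a process applies before delivering a multicast message.

```python
# Sketch: causal delivery check with vector clocks
# (Birman-Schiper-Stephenson style; names are illustrative).

def can_deliver(msg_vc, sender, local_vc):
    """A message from `sender` stamped with vector clock `msg_vc` may be
    delivered at a process with vector clock `local_vc` iff
      msg_vc[sender] == local_vc[sender] + 1      (next message from that sender)
      msg_vc[k] <= local_vc[k] for all k != sender (all causal predecessors seen)."""
    if msg_vc[sender] != local_vc[sender] + 1:
        return False
    return all(msg_vc[k] <= local_vc[k] for k in range(len(msg_vc)) if k != sender)

# Example with 3 processes: P2 has not yet seen P0's first multicast,
# so a message from P1 that causally depends on it must wait.
local = [0, 0, 0]                 # P2's vector clock
m2 = [1, 1, 0]                    # sent by P1 after it delivered P0's message
m1 = [1, 0, 0]                    # P0's original multicast
print(can_deliver(m2, 1, local))  # False - M1 must be delivered first
print(can_deliver(m1, 0, local))  # True
```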
9.b.Summarise the snapshot algorithm for FIFO channels.
Chandy and Lamport were the first to propose an algorithm to capture a consistent global state of a distributed system. The main idea
behind the algorithm is that if we know that all messages that have been sent by one process have been received by another, then
we can record the global state of the system.
Algorithm:
Marker sending rule for a process P:
Process P records its own local state.
For each outgoing channel C from process P, P sends a marker along C before sending any other messages along C.
Marker receiving rule for a process Q (on receiving a marker along channel C1 from process P):
If process Q has not yet recorded its own local state, then
Q records the state of incoming channel C1 as an empty sequence, and
after recording the state of incoming channel C1, process Q follows the marker sending rule.
If process Q has already recorded its state, then
Q records the state of incoming channel C1 as the sequence of messages received along channel C1 after the
state of Q was recorded and before Q received the marker along C1 from process P.
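A compact Python sketch of these marker rules for a single process is given below; the Process class, the in-memory channels, and the single-threaded simulation are assumptions made for illustration rather than a full implementation.

```python
# Sketch of the Chandy-Lamport marker rules for one process
# (single-threaded simulation; channel and network details are assumptions).

MARKER = "MARKER"

class Process:
    def __init__(self, pid, out_channels):
        self.pid = pid
        self.out_channels = out_channels      # dict: destination -> list used as FIFO channel
        self.state = 0                        # some local application state
        self.recorded_state = None
        self.channel_state = {}               # incoming channel -> recorded messages
        self.recording = {}                   # incoming channel -> still recording?

    def start_snapshot(self):
        # Marker sending rule: record own state, then send a marker on every
        # outgoing channel before any further application messages.
        self.recorded_state = self.state
        for channel in self.out_channels.values():
            channel.append(MARKER)

    def on_message(self, in_channel, msg):
        if msg == MARKER:
            if self.recorded_state is None:
                # First marker seen: record local state, record this channel as
                # empty, and propagate markers (the marker sending rule).
                self.start_snapshot()
                self.channel_state[in_channel] = []
                self.recording[in_channel] = False
            else:
                # Marker on another channel: stop recording that channel.
                self.recording[in_channel] = False
        else:
            # Application message: record it only if it arrived after the local
            # snapshot was taken and before the marker on this channel.
            if self.recorded_state is not None and self.recording.get(in_channel, True):
                self.channel_state.setdefault(in_channel, []).append(msg)
```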
10.a. Explain briefly about group communication with example.
Group communication occurs when a single source process simultaneously attempts to communicate with numerous other processes. A
group is an abstraction over a collection of interrelated processes. This abstraction hides the message passing so that the communication
appears to be a standard procedure call.
Types of group communication
There are three types of group communication: broadcast, multicast, and unicast communication. Broadcast communication is described below:
Broadcast communication
This occurs when the host simultaneously attempts to communicate with all the processes in a distributed system. It is helpful when a
consistent stream of information must be supplied to all processes efficiently. Communication is very fast compared to other means of
communication, since it does not require per-recipient processing. However, it does not support many operations and cannot address
each process individually.
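A toy Python sketch of broadcast within a process group is shown below; the Group class and the in-memory queues stand in for real processes and the network, and are purely illustrative.

```python
# Sketch: a toy process group where one member broadcasts to all members
# (in-memory queues stand in for the network; names are illustrative).
from queue import Queue

class Group:
    def __init__(self):
        self.members = {}                      # pid -> incoming message queue

    def join(self, pid):
        self.members[pid] = Queue()

    def broadcast(self, sender, msg):
        # Deliver the same message to every member of the group.
        for pid, inbox in self.members.items():
            inbox.put((sender, msg))

g = Group()
for pid in ("P1", "P2", "P3"):
    g.join(pid)

g.broadcast("P1", "checkpoint now")
print(g.members["P3"].get())    # ('P1', 'checkpoint now')
```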
10.b. Discuss about a model of distributed computations with an example.
Models of distributed computing can be classified into five categories: the minicomputer model, the workstation model, the workstation-
server model, the processor-pool model, and the hybrid model. These models are briefly described as follows. In the minicomputer model,
for example, a few minicomputers are interconnected by a communication network.
Examples of distributed computing applications include networks, telecommunication networks, distributed real-time systems, parallel
processing, distributed artificial intelligence, and distributed database systems.