CC assignment 3

Q1. Write short notes: a) Parallel vs. Distributed Computing b) Elements of Parallel Computing c) Hardware Architectures for Parallel Processing d) Approaches to Parallel Programming e) Laws of Caution
ANS. a) Parallel vs. Distributed Computing
 Parallel Computing:
o Involves the simultaneous execution of multiple tasks on multiple processors
within a single system. The goal is to perform computations faster by dividing
tasks into smaller sub-tasks that can be processed concurrently.
o Key Feature: Processors are tightly coupled and typically share a common memory space.
o Example: Multi-core processors running parallel tasks to speed up complex
simulations.
 Distributed Computing:
o Involves a network of independent computers that work together to perform a
task. These systems are physically separate but communicate via a network.
o Key Feature: Each computer has its own memory and may not be aware of
the others' computations.
o Example: Cloud computing systems where tasks are distributed across
multiple servers to handle large-scale data processing.

b) Elements of Parallel Computing


1. Decomposition:
o Dividing a problem into smaller, manageable sub-problems that can be solved
concurrently.
2. Concurrency:
o The ability to execute multiple operations simultaneously, utilizing multiple
processors.
3. Synchronization:
o Coordinating the execution of tasks to ensure proper sequencing and data
sharing between processors.
4. Communication:
o Transferring data between processors or memory units to exchange
intermediate results or data.
5. Load Balancing:
o Distributing tasks efficiently across processors so that each processor is
utilized to its full potential (a combined code sketch of these elements follows this list).
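As a minimal illustration (added here, not part of the standard definitions), the following Python sketch ties the five elements together using only the standard-library concurrent.futures module; the function names, chunking scheme, and worker count are arbitrary choices:

# Decomposition: split a summation into sub-problems; Concurrency: run them in
# worker processes; Communication/Synchronization: collect the partial results;
# Load balancing: give each worker a roughly equal chunk.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # One sub-problem produced by decomposition.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Load balancing: roughly equal chunks, one per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Concurrency + communication: workers compute and return partial sums.
        partials = list(pool.map(partial_sum, chunks))
    # Synchronization: map() yields results only after all workers finish.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))  # 499999500000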
c) Hardware Architectures for Parallel Processing
1. Shared Memory Architecture:
o Multiple processors share a single, global memory space, making it easier to
exchange data.
o Example: Multi-core processors in modern computers.
2. Distributed Memory Architecture:
o Each processor has its own private memory, and processors communicate
via a network to exchange data.
o Example: Supercomputers and clusters used in large-scale computations.
3. Hybrid Architecture:
o Combines elements of both shared and distributed memory architectures to
optimize performance and scalability.
o Example: Systems using clusters of multi-core machines.
4. Dataflow Architecture:
o Computation is driven by the flow of data through the system, with tasks
being triggered when required data becomes available.
o Example: Special-purpose systems for real-time signal processing.

d) Approaches to Parallel Programming


1. Thread-Based Parallelism:
o Involves splitting tasks into threads, with each thread running on a different
processor or core.
o Example: OpenMP for shared memory parallelism.
2. Data Parallelism:
o Involves breaking down data into chunks that can be processed
independently in parallel.
o Example: SIMD (Single Instruction, Multiple Data) in vector processors.
3. Task Parallelism:
o Decomposes the program into independent tasks that can run concurrently.
o Example: MapReduce framework used in distributed computing.
4. Message Passing:
o Communicating between independent processors by sending and receiving
messages.
o Example: MPI (Message Passing Interface) for distributed memory systems (a small message-passing sketch follows this list).
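The sketch below illustrates the message-passing style only; it uses Python's standard multiprocessing module as a stand-in for a real MPI library (it is not MPI), and the worker function and data are made up for the example:

from multiprocessing import Process, Pipe

def worker(conn):
    # Receive input as an explicit message, compute, and send the result back.
    data = conn.recv()
    conn.send(sum(data))
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(list(range(10)))  # explicit communication, no shared memory
    print(parent_conn.recv())          # 45
    p.join()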
e) Laws of Caution
1. Amdahl’s Law:
o States that the speedup of a program using parallel computing is limited by
the sequential portion of the program.
o Formula: S = 1 / ((1 - P) + P/N), where P is the parallelizable fraction of the program and N is the number of processors.
2. Gustafson’s Law:
o Focuses on the scalability of parallel computing by considering that as the
problem size increases, the parallel portion also increases, leading to better
utilization of multiple processors.
o Formula: S = N - (1 - P) × N, with the same P and N (both speedup formulas are sketched in code after this list).
3. Brevity’s Law:
o Suggests that "the time spent on computation is inversely proportional to the
time spent on communication."
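A direct Python sketch of the two formulas above (added for illustration; P is the parallelizable fraction, N the number of processors, and the sample values are arbitrary):

def amdahl_speedup(p, n):
    # Amdahl's Law: S = 1 / ((1 - p) + p / n)
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    # Gustafson's Law: S = n - (1 - p) * n
    return n - (1.0 - p) * n

print(amdahl_speedup(0.9, 16))     # ~6.4  -> capped by the serial 10%
print(gustafson_speedup(0.9, 16))  # ~14.4 -> scales as the problem grows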

Q2. Compare and contrast parallel and distributed computing in a table. a) Define and differentiate between parallel and distributed computing. b) Provide examples where each approach is beneficial.
ANS.

a) Definition and Differentiation


 Parallel Computing:
o Involves breaking a large task into smaller sub-tasks that can be processed
concurrently, using multiple processors or cores in the same machine. These
processors share the same memory, and tasks are synchronized to exchange
data.
o Key Feature: All processing units are tightly coupled, either within a single
machine or within a closely integrated system.
 Distributed Computing:
o Involves a system of independent computers (nodes) that collaborate over a
network to complete a task. Each node has its own memory and processing
power. Communication happens over a network, and nodes can operate
independently.
o Key Feature: Systems are physically separated and work together via
network communication, often used for tasks that require large-scale resource
allocation.

b) Examples of Where Each Approach is Beneficial


 Parallel Computing:
o Example 1: Weather Forecasting: Simulating weather patterns often
requires complex mathematical models that benefit from parallel computing to
process large amounts of data concurrently across multiple processors.
o Example 2: Scientific Simulations: In physics simulations (like molecular
dynamics or fluid dynamics), large computational models are solved more
efficiently using parallelism, as multiple variables or equations can be
processed simultaneously.
 Distributed Computing:
o Example 1: Cloud Services: A cloud provider like AWS or Google Cloud
uses distributed computing to manage vast amounts of data and provide
services such as storage, computing, and applications across a global
network of data centers.
o Example 2: Big Data Processing (Hadoop): Distributed computing is
essential in big data frameworks like Hadoop or Apache Spark, where data is
spread across many machines, and each machine performs computations on
portions of the data.
Q3. Discuss hardware architectures for parallel processing. a)
Explain different types of hardware architecture (e.g., SIMD, MIMD).
b) Compare scalability and performance across these
architectures.
ANS. a) Different Types of Hardware Architectures for Parallel Processing
1. SIMD (Single Instruction, Multiple Data)
o Definition: SIMD architecture allows a single instruction to be applied to
multiple data elements simultaneously. It is used when the same operation is
performed on large sets of data (data parallelism).
o Example: Vector processors, GPU processing, and SIMD units in modern
CPUs.
o Characteristics:
 All processors execute the same instruction, but on different data
elements (a vectorised-code sketch follows this list).
2. MIMD (Multiple Instruction, Multiple Data)
o Definition: MIMD systems allow multiple processors to execute different
instructions on different sets of data concurrently. It provides flexibility for a
wide range of parallel applications (task parallelism).
o Example: Multi-core CPUs, distributed systems, and clusters.
o Characteristics:
 Allows a more general-purpose approach to parallel processing.
3. SISD (Single Instruction, Single Data)
o Definition: Traditional sequential computing, where a single processor
executes a single instruction on a single piece of data at a time.
o Example: Classical single-core processors.
o Characteristics:
 Not a parallel architecture; used for simple sequential tasks.
4. MISD (Multiple Instruction, Single Data)
o Definition: This architecture involves multiple processors executing different
instructions on the same data. It is rarely used in practice.
o Example: Fault tolerance systems, where multiple operations are applied to
the same data to ensure accuracy and reliability.
o Characteristics:
 Very limited use, mainly in specialized applications like real-time error
detection or fault tolerance.
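A tiny code sketch of the SIMD idea (illustrative only, and assuming the NumPy library is available): the vectorised statement applies the same operation to every data element at once, while the explicit loop is the SISD-style equivalent.

import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.ones_like(a)

# SISD-style: one instruction applied to one data element at a time.
c_loop = np.empty_like(a)
for i in range(len(a)):
    c_loop[i] = a[i] + b[i]

# SIMD-style: one vectorised add over all elements; NumPy typically maps this
# onto the CPU's vector units.
c_vec = a + b

assert np.array_equal(c_loop, c_vec)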
Q4. Analyze the laws of caution in parallel computing. a) Define the
laws of caution in parallel computing. b) Explain how these laws
impact performance and design of parallel systems.
ANS. a) Definition of the Laws of Caution in Parallel Computing
The laws of caution in parallel computing are key principles that guide the design and
optimization of parallel systems. These laws help in understanding the inherent limitations
and trade-offs when using parallel computing to solve problems, especially in terms of
performance, scalability, and efficiency. The laws are derived from mathematical models and
practical observations in parallel computing.
There are several key laws of caution:
1. Amdahl's Law
o Definition: Amdahl's Law states that the maximum speedup of a parallel
program is limited by the sequential portion of the program. In other words,
even if we add more processors, the performance improvement will be
constrained by the part of the task that cannot be parallelized.
o Formula: S = 1 / ((1 - P) + P/N)
Where:
 S is the speedup,
 P is the parallelizable portion of the task,
 N is the number of processors.
o Implication: As the number of processors increases, the speedup begins to
level off, especially if a significant portion of the task remains sequential. For
example, if 90% of the task can be parallelized, adding more processors
beyond a certain point yields diminishing returns.
2. Gustafson's Law
o Definition: Gustafson's Law suggests that the scaling of parallel computing
systems is better represented by the increase in problem size. As problem
size grows, more of the task can be parallelized, allowing better scalability.
o Formula: S = N - (1 - P) × N
Where:
 S is the speedup,
 N is the number of processors,
 P is the parallelizable portion of the task.
o Implication: This law emphasizes that with larger problem sizes, more work
can be parallelized, and thus, the performance improvement scales better
with more processors. Unlike Amdahl's Law, Gustafson’s Law suggests that
increasing the problem size leads to more parallelism, making it easier to
scale performance with additional resources.
3. Brevity’s Law
o Definition: Brevity's Law states that "the time spent on computation is
inversely proportional to the time spent on communication." It highlights the
importance of reducing communication overhead in parallel computing, as
excessive data exchange between processors can limit the overall
performance of the system.
o Implication: Communication overhead becomes a bottleneck in parallel
systems, especially in distributed computing. Efficient data management and
minimizing the need for inter-processor communication can significantly
improve system performance.
4. Law of Diminishing Returns
o Definition: This law suggests that as the number of processors increases,
the marginal benefit or performance gain from adding additional processors
decreases. Essentially, there is a point where adding more processors results
in little or no additional performance improvement.
o Implication: This law affects the efficiency of parallel systems, as overheads
like synchronization, communication, and resource contention begin to
outweigh the benefits of adding more processors. System design must
account for these diminishing returns when scaling up (a short numeric illustration follows this list).
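A short numeric sweep (added as an illustration, assuming a task that is 90% parallelizable and Amdahl-style speedup) makes the diminishing returns visible:

P = 0.9  # parallelizable fraction (assumed for illustration)
for n in (1, 2, 4, 8, 16, 64, 256, 1024):
    speedup = 1.0 / ((1.0 - P) + P / n)
    print(f"{n:5d} processors -> speedup {speedup:5.2f}")
# The output approaches the 1 / (1 - P) = 10x ceiling; each doubling of the
# processor count buys less improvement than the one before it.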

b) Impact of the Laws on Performance and Design of Parallel Systems


1. Amdahl’s Law and Its Impact
o Performance: Amdahl’s Law shows that no matter how many processors are
added, the speedup is fundamentally limited by the sequential portion of the
code. For tasks with a large sequential portion, scaling performance with
parallelization is limited.
o Design: When designing parallel systems, it is crucial to minimize the
sequential part of the program or algorithm to maximize scalability. Efforts
should focus on optimizing the parallelizable portions to increase the
effectiveness of parallel processing.
2. Gustafson’s Law and Its Impact
o Performance: Gustafson’s Law allows for a more optimistic view of scaling
performance. By increasing the problem size, parallelism can be exploited
more fully, and performance improves as the number of processors grows.
o Design: This law suggests that large-scale parallel systems should be
designed with large problem sizes in mind, especially for applications like
simulations or data analytics, where the problem size can be adjusted to
better utilize parallelism.
3. Brevity’s Law and Its Impact
o Performance: Communication overhead can significantly degrade the
performance of parallel systems. Minimizing the time spent on communication
between processors is essential for improving overall performance.
o Design: Efficient data-sharing mechanisms and minimizing the need for inter-
process communication are crucial for the success of parallel systems.
Technologies such as shared memory, direct communication protocols, and
minimizing synchronization overheads can help address this issue.
4. Law of Diminishing Returns and Its Impact
o Performance: As more processors are added, the increase in performance
becomes less pronounced due to synchronization and communication
overheads. After a certain point, adding more processors can even hurt
performance if overheads outweigh the benefits.
o Design: This law stresses the importance of balancing the number of
processors with the problem’s size and the system’s ability to handle
communication and synchronization. Over-scaling in a system that isn’t
designed to handle large numbers of processors may lead to inefficiencies.
