BCSE412L - Parallel Computing 01
5. Analyse the efficiency of a parallel processing system and evaluate the types of
applications for which parallel programming is useful
Motivation for Parallelism
• To address various challenges associated with increasing computational
demands and the limitations of traditional sequential processing.
Task Decomposition:
Breaking down a large problem into smaller, independent tasks that can be executed
concurrently.
Concurrency:
Multiple tasks making progress during overlapping time periods, either by
interleaving on one processor or by truly simultaneous execution on several.
Data Parallelism:
The same operation is applied to multiple data elements simultaneously.
Task Parallelism:
Independent tasks or processes execute concurrently, each potentially
performing different work on different data.
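The distinction can be sketched with a short OpenMP example (not from the slides; it assumes a compiler with OpenMP support, e.g. gcc -fopenmp): the loop spreads the same operation over the data, while the sections run two unrelated pieces of work at the same time.

```c
/* Sketch: data vs. task parallelism with OpenMP.
   Assumes an OpenMP-capable compiler, e.g.  gcc -fopenmp parallel_demo.c  */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N], c[N];
    double sum = 0.0;

    /* Data parallelism: the same operation is applied to different
       array elements by different threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = i * 2.0;
        c[i] = a[i] + b[i];
    }

    /* Task parallelism: two independent pieces of work run concurrently. */
    #pragma omp parallel sections
    {
        #pragma omp section
        {   /* task 1: sum the result array */
            double s = 0.0;
            for (int i = 0; i < N; i++) s += c[i];
            sum = s;
        }
        #pragma omp section
        {   /* task 2: report the size of the thread team */
            printf("threads in team: %d\n", omp_get_num_threads());
        }
    }

    printf("sum = %f\n", sum);
    return 0;
}
```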
Key Concepts of Parallelism:
Parallel Architectures:
Different configurations of hardware that support parallel processing,
including shared-memory and distributed-memory architectures.
Load Balancing:
Distributing the workload evenly among processors to maximize system
efficiency and prevent some processors from idling while others are
overloaded.
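As a hedged illustration (not part of the original material), OpenMP's loop-scheduling clauses show one way to balance an irregular workload: schedule(dynamic) hands out chunks of iterations on demand, so faster threads pick up more work instead of idling.

```c
/* Sketch: dynamic scheduling to balance an irregular workload (OpenMP).
   Compile with:  gcc -fopenmp load_balance.c -lm                        */
#include <stdio.h>
#include <math.h>
#include <omp.h>

#define N 100000

int main(void) {
    double total = 0.0;

    /* Iterations do very different amounts of work (the inner loop grows
       with i), so a static, even split would leave some threads idle.
       schedule(dynamic, 64) hands out chunks of 64 iterations on demand. */
    #pragma omp parallel for schedule(dynamic, 64) reduction(+:total)
    for (int i = 0; i < N; i++) {
        double x = 0.0;
        for (int j = 0; j < i % 1000; j++)   /* uneven per-iteration cost */
            x += sin((double)j);
        total += x;
    }

    printf("total = %f\n", total);
    return 0;
}
```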
Scalability:
The ability of a parallel system to efficiently handle an increasing number
of processors or workload.
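Scalability is commonly quantified with speedup S(p) = T1 / Tp and efficiency E(p) = S(p) / p, where T1 is the one-processor time and Tp the time on p processors. The sketch below computes both; the timing values are invented placeholders used only to show the arithmetic.

```c
/* Sketch: computing speedup and efficiency from measured run times.
   The timings below are placeholder values for illustration only.   */
#include <stdio.h>

int main(void) {
    double t_serial = 120.0;                      /* hypothetical 1-processor time (s) */
    int    procs[]  = { 2, 4, 8, 16 };
    double t_par[]  = { 62.0, 33.0, 19.0, 12.5 }; /* hypothetical parallel times (s)   */

    printf("%5s %10s %10s %12s\n", "p", "T_p (s)", "speedup", "efficiency");
    for (int i = 0; i < 4; i++) {
        double speedup    = t_serial / t_par[i];  /* S(p) = T1 / Tp  */
        double efficiency = speedup / procs[i];   /* E(p) = S(p) / p */
        printf("%5d %10.1f %10.2f %12.2f\n", procs[i], t_par[i], speedup, efficiency);
    }
    return 0;
}
```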
Challenges of Parallelism:
Synchronization Overhead:
Ensuring proper coordination among parallel tasks may introduce overhead,
as synchronization mechanisms like locks and barriers can impact performance.
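A small OpenMP sketch (an illustration, not course code) shows where the overhead comes from: incrementing a shared counter inside a critical section serializes the threads, while a reduction lets each thread update a private copy and combine the results once.

```c
/* Sketch: synchronization overhead with a shared counter (OpenMP).
   Compile with:  gcc -fopenmp sync_overhead.c                      */
#include <stdio.h>
#include <omp.h>

#define N 10000000

int main(void) {
    long slow = 0, fast = 0;
    double t0, t1, t2;

    t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++) {
        #pragma omp critical        /* every increment is serialized: high overhead */
        slow++;
    }

    t1 = omp_get_wtime();
    #pragma omp parallel for reduction(+:fast)
    for (long i = 0; i < N; i++) {
        fast++;                     /* private copies, combined once at the end */
    }
    t2 = omp_get_wtime();

    printf("critical: %ld in %.3f s, reduction: %ld in %.3f s\n",
           slow, t1 - t0, fast, t2 - t1);
    return 0;
}
```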
Load Imbalance:
Uneven distribution of workload among processors can lead to inefficiencies,
where some processors are underutilized while others are overloaded.
Communication Overhead:
In distributed-memory systems, the communication between processors
introduces overhead. Minimizing communication overhead is crucial for optimal
performance.
Dependency Management:
Managing dependencies between tasks and ensuring that they execute in
the correct order without data inconsistencies.
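One way to express such an ordering is with OpenMP task dependences (OpenMP 4.0 or later); the sketch below is illustrative only: the runtime guarantees that the producer task completes before the consumer task that depends on the same variable.

```c
/* Sketch: expressing a task dependency with OpenMP depend clauses
   (OpenMP 4.0+).  Compile with:  gcc -fopenmp dependency.c        */
#include <stdio.h>
#include <omp.h>

int main(void) {
    int x = 0;

    #pragma omp parallel
    #pragma omp single
    {
        /* The producer must finish before the consumer starts; the
           runtime enforces the order via the dependence on x.      */
        #pragma omp task depend(out: x)
        {
            x = 42;
            printf("producer wrote x\n");
        }

        #pragma omp task depend(in: x)
        {
            printf("consumer read x = %d\n", x);
        }
    }   /* all tasks complete at the implicit barrier */

    return 0;
}
```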
Granularity Issues:
Determining the appropriate size of tasks (granularity) is challenging.
Fine-grained tasks may lead to overhead, while coarse-grained tasks may limit
parallelism.
Scalability Limits:
Adding more processors does not yield proportional speedup indefinitely; the
serial fraction of the program and growing coordination overheads eventually
cap the benefit (Amdahl's law).
Programming Complexity:
Parallel programs are harder to design, debug, and test than sequential ones,
because correctness depends on timing, ordering, and shared state.
Heterogeneous Architectures:
The presence of diverse hardware architectures, such as CPUs, GPUs, and
accelerators, requires specialized programming approaches to harness their
parallel processing capabilities.
Overview of Parallel computing
Parallel computing is a type of computation in which many
calculations or processes are carried out simultaneously, with
the goal of solving a problem more quickly.
Types of parallel computing
There are several types of parallel computing architectures and models, each with its own
characteristics. Here are some common types:
Bit-level Parallelism:
• Involves processing multiple bits of data simultaneously.
• Primarily used in specialized processors and hardware.
Instruction-level Parallelism (ILP):
• Exploits parallelism at the instruction level.
• Pipelining and superscalar architectures are examples of ILP.
Data-level Parallelism:
• Focuses on dividing data into independent chunks for parallel processing.
• SIMD (Single Instruction, Multiple Data) and vector processing are examples.
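As a rough sketch (not from the slides), the loop below is written so a compiler can map it to SIMD instructions; the omp simd directive (OpenMP 4.0+) makes the intent explicit, and the program still compiles and runs if OpenMP support is absent.

```c
/* Sketch: data-level parallelism via a vectorizable loop.
   The omp simd directive (OpenMP 4.0+) asks the compiler to use SIMD
   instructions; compile with  gcc -fopenmp -O2 simd_demo.c           */
#include <stdio.h>

#define N 1024

int main(void) {
    float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

    /* One vector instruction can add several adjacent elements at once. */
    #pragma omp simd
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[10] = %f\n", c[10]);
    return 0;
}
```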
Task-level Parallelism:
• Divides a program into independent tasks that can be executed in parallel.
• Commonly used in parallel programming.
Process-level Parallelism:
• Involves the simultaneous execution of multiple processes.
• Often used in distributed computing and clusters.
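A minimal sketch of process-level parallelism, assuming a POSIX system: fork() creates a second, fully independent process with its own address space, so the two can work on different parts of a job at the same time.

```c
/* Sketch: process-level parallelism with fork() on a POSIX system. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* create a second, independent process */

    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    } else if (pid == 0) {
        printf("child  (pid %d): working on one half of the job\n", getpid());
        return EXIT_SUCCESS;
    }

    printf("parent (pid %d): working on the other half\n", getpid());
    wait(NULL);                      /* wait for the child to finish */
    return EXIT_SUCCESS;
}
```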
Memory-level Parallelism (MLP):
• Exploits parallelism in accessing memory.
• Techniques like out-of-order execution.
Pipeline Parallelism:
• Divides a task into stages, and each stage is processed concurrently.
• Common in modern CPU architectures.
Cluster Computing:
• Connects multiple computers (nodes) to work together on a task.
• Often used in scientific research and large-scale data processing.
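A hedged sketch of the cluster model using MPI (it assumes an MPI implementation such as Open MPI or MPICH is installed): each process computes a partial result and a collective operation combines them across nodes.

```c
/* Sketch: cluster-style parallelism with MPI (assumes an MPI installation).
   Build:  mpicc mpi_sum.c      Run:  mpirun -np 4 ./a.out                  */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    /* Each process computes a partial result; a collective combines them. */
    int local = rank + 1;
    int total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 1..%d computed across %d processes: %d\n", size, size, total);

    MPI_Finalize();
    return 0;
}
```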
SIMD (Single Instruction, Multiple Data):
• A single instruction operates on many data elements at the same time.
• Found in the vector units of modern CPUs and in GPUs.
Advantages of Parallel computing
• Compared with serial computing, parallel computing can solve larger problems in less
time.
• Many problems are so large that it is impractical or impossible to solve them on a single
computer; parallel computing removes this limitation.
• It allows several things to be done at the same time by using multiple computing
resources.
Limitations of Parallel computing
• Communication and synchronization between the multiple sub-tasks and processes are
difficult to achieve.
• Algorithms must be designed and restructured so that they can be executed in parallel.
• Writing correct and efficient parallel programs requires more technically skilled and
experienced programmers.