CS ELEC 2

PARALLEL AND DISTRIBUTED COMPUTING


COURSE CREDIT: 3 UNITS
COURSE OVERVIEW

This course responds to the vastly increased importance of parallel and distributed computing
by identifying the essential concepts in the area and promoting those topics to the core.
Both parallel and distributed computing entail the logically simultaneous execution of
multiple processes, whose operations have the potential to interleave in complex ways. Parallel
and distributed computing builds on foundations in many areas, including an understanding of
fundamental systems concepts such as concurrency and parallel execution, consistency in
state/memory manipulation, and latency. Communication and coordination among processes is
rooted in the message-passing and shared-memory models of computing and in such algorithmic
concepts as atomicity, consensus, and conditional waiting. Achieving speedup in practice
requires an understanding of parallel algorithms, strategies for problem decomposition, system
architecture, detailed implementation strategies, and performance analysis and tuning.
Distributed systems highlight the problems of security and fault tolerance, emphasize the
maintenance of replicated state, and introduce additional issues that bridge to computer
networking.
SUMMATIVE EVALUATION

Class Standing : 40%
Major Examination : 30%
Other Outputs : 30%
PARALLEL AND DISTRIBUTED COMPUTING

PARALLEL - extending in the same direction, equidistant at all points, and
never converging or diverging; very similar and often happening at the same
time.

DISTRIBUTED - divided among several or many; shared or spread out.

COMPUTING - the act of calculating something; performing mathematical and
logical operations.
PARALLEL AND DISTRIBUTED COMPUTING

Parallel computing and distributed computing are both methods of
solving computationally intensive problems by breaking them down into
smaller tasks that can be executed simultaneously. However, there are
some key differences between the two approaches.
PARALLEL COMPUTING

Parallel computing involves using multiple processors to
execute tasks simultaneously on a single computer.
DISTRIBUTED COMPUTING

Distributed computing involves dividing a single task into
smaller tasks that are executed on multiple computers.
Distributed computing is often used to solve problems that
require a lot of data, such as web search and machine learning.
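
To make the idea concrete, here is a minimal sketch in Python, assuming only the
standard multiprocessing.connection module: a coordinator splits a large sum into
chunks and ships each chunk to a worker over a network connection. The worker runs
here as a local process, but the same code can span machines if the address is
changed; the port and authkey are arbitrary choices for this example.

from multiprocessing import Process
from multiprocessing.connection import Client, Listener

ADDRESS = ("localhost", 6000)   # use a real hostname here to span machines
AUTHKEY = b"cs-elec-2"          # arbitrary shared secret for this example

def worker():
    # The worker receives chunks of work as messages and replies with
    # partial results; it never shares memory with the coordinator.
    with Client(ADDRESS, authkey=AUTHKEY) as conn:
        while True:
            chunk = conn.recv()
            if chunk is None:            # sentinel: no more work
                break
            conn.send(sum(chunk))

if __name__ == "__main__":
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        p = Process(target=worker)
        p.start()
        with listener.accept() as conn:
            data = list(range(1_000_000))
            total = 0
            for i in range(0, len(data), 250_000):
                conn.send(data[i:i + 250_000])   # hand a sub-task to the worker
                total += conn.recv()             # collect its partial sum
            conn.send(None)
        p.join()
    print("total =", total)                      # 499999500000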
COMPUTING POWER IN PARALLEL AND DISTRIBUTED COMPUTING

The computing power of parallel and distributed computing
systems is determined by the number of processors and the
speed of each processor.
In parallel computing, the processors are all located on the same
computer.
In distributed computing, the processors are located on different
computers.
DIFFERENCES BETWEEN PARALLEL AND DISTRIBUTED COMPUTING

The key difference is where the processors live: in parallel computing they
share one machine (and typically a shared memory), while in distributed
computing they sit on separate machines connected by a network and
communicate by passing messages.
Terms related to Parallel and Distributed Computing

● Concurrency is the ability of multiple tasks to be in progress during the same
period of time; their executions may interleave rather than literally run at the
same instant.
● Parallelism is the use of multiple processors to execute tasks simultaneously.
● Distributed system is a system of computers that are connected together and
work together to solve a problem.
● Message passing is the way that computers in a distributed system communicate
with each other.
● Shared memory is a memory space that is accessible to all processors in a
parallel system (both communication models are sketched in the example below).
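
A minimal sketch contrasting the two communication models above, using Python's
standard multiprocessing module; the worker functions and values are illustrative
examples, not part of the course material.

from multiprocessing import Process, Queue, Value

def increment(counter):
    # Shared memory: both processes see the same memory location, so the
    # update must be guarded by a lock to stay consistent.
    for _ in range(10_000):
        with counter.get_lock():
            counter.value += 1

def square_service(inbox, outbox):
    # Message passing: the worker only sees data explicitly sent to it.
    n = inbox.get()
    outbox.put(n * n)

if __name__ == "__main__":
    counter = Value("i", 0)                  # an int living in shared memory
    p = Process(target=increment, args=(counter,))
    p.start()
    p.join()
    print("shared counter:", counter.value)  # 10000

    inbox, outbox = Queue(), Queue()
    q = Process(target=square_service, args=(inbox, outbox))
    q.start()
    inbox.put(12)                            # send a message to the worker
    print("reply:", outbox.get())            # 144
    q.join()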
Introduction to Parallel Computing
What is Parallel Computing?

Parallel computing is a type of computation in which multiple
processors work together to solve a single problem.
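
As a minimal illustration of this definition, the Python sketch below (assuming
the standard multiprocessing module) splits one problem, a sum of squares, across
several worker processes; the chunk size and process count are arbitrary choices.

from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process solves one piece of the overall problem.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Decompose the problem into four chunks, one per worker process.
    chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # chunks run simultaneously
    print(total)   # sum of squares of 0..999999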
Forms of Parallel Computing

● Bit-level parallelism comes from increasing the processor word size, so that a
single instruction operates on more bits of data at once. This is realized in
digital circuits.
● Instruction-level parallelism is the simultaneous execution of multiple
instructions from one program within a single processor, for example by
pipelining. This is exploited inside modern CPUs.
● Data parallelism is the use of multiple processors to operate on different
parts of the same data set. This is often used in scientific computing.
● Task parallelism is the use of multiple processors to execute different tasks
(see the sketch after this list). This is often used in operating systems.
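
A minimal sketch of task parallelism in Python, assuming the standard
multiprocessing module: two unrelated tasks, a word count and a toy checksum
(both made up for illustration), run at the same time in separate processes.

from multiprocessing import Process

def count_words(text):
    # Task 1: a toy word count.
    print("words:", len(text.split()))

def checksum(text):
    # Task 2: a toy checksum, unrelated to task 1.
    print("checksum:", sum(text.encode()) % 256)

if __name__ == "__main__":
    text = "parallel and distributed computing " * 1000
    tasks = [Process(target=count_words, args=(text,)),
             Process(target=checksum, args=(text,))]
    for t in tasks:
        t.start()    # the two different tasks run at the same time
    for t in tasks:
        t.join()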
Examples of Parallel Computing

● A supercomputer is a computer with many processors that are used
to solve very large and complex problems.
● A graphics processing unit (GPU) is a specialized processor that is
used to render graphics. GPUs can also be used for general parallel
computing tasks.
● A cluster is a group of computers that are connected together and
work together to solve a problem.
Advantages of Parallel Computing

● Increased speed: sub-tasks run at the same time instead of one after another
● Increased scalability: larger problems can be tackled by adding processors
● Increased flexibility: processing resources can be assigned to tasks as needed
Disadvantages of Parallel Computing

● Increased complexity: parallel programs are harder to write and debug
● Increased communication overhead: processors must exchange data
● Increased synchronization overhead: processors must coordinate access to shared state
Why use Parallel Computing?

Parallel computing is used to solve computationally
intensive problems by dividing them into smaller tasks that
can be executed simultaneously on multiple processors.
Reasons why parallel computing is used:

● To speed up the execution of computationally intensive tasks (the limit on
this speedup is sketched after this list)
● To solve problems that are too large or complex for a single
processor
● To improve the scalability of applications
● To reduce energy consumption
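
The speedup from the first reason is limited by how much of the program can
actually run in parallel. Amdahl's law, a standard result not stated in these
slides, makes that limit concrete; the sketch below evaluates it for an assumed
95%-parallelizable program.

# Amdahl's law: if a fraction p of a program can run in parallel, the best
# possible speedup on n processors is S(n) = 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallelizable program can never exceed a 20x speedup
# (1 / (1 - 0.95)), no matter how many processors are added:
for n in (2, 4, 16, 1024):
    print(n, "processors:", round(amdahl_speedup(0.95, n), 2))
# 2 processors: 1.9 | 4: 3.48 | 16: 9.14 | 1024: 19.64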
Specific examples of how parallel computing is used today

● Supercomputers: Supercomputers are used to solve the world's most challenging
problems, such as climate modeling, nuclear fusion research, and protein folding.
● Graphics processing units (GPUs): GPUs are specialized processors that are used to
render graphics.
● Cloud computing: Cloud computing services use parallel computing to provide users
with access to a large pool of computing resources.
● Machine learning: Machine learning algorithms are often trained on large datasets.
● Virtual reality (VR) and augmented reality (AR): VR and AR applications require a lot of
processing power to render realistic images.
Who is using Parallel Computing?

● Research institutions: Research institutions use parallel computing to solve
complex scientific problems.
● Government agencies: Government agencies use parallel computing for
large-scale tasks such as weather forecasting and simulation.
● Commercial companies: Commercial companies use parallel computing to
improve the performance of their products and services.
● Educational institutions: Educational institutions use parallel computing to teach
students about concurrent and high-performance programming.
● Individuals: Individuals use parallel computing to run computationally intensive
applications on their personal computers.
Parallel computing is a rapidly growing field, and the number of
organizations using parallel computing is expected to continue to
grow in the future.
Specific examples of organizations using parallel computing

● NASA: NASA uses parallel computing to simulate the atmosphere, to design
spacecraft, and to control spacecraft.
● CERN: CERN uses parallel computing to simulate the Large Hadron Collider, to
analyze data from the Large Hadron Collider, and to develop new particle physics
theories.
● Pfizer: Pfizer uses parallel computing to design new drugs, to test the safety and
efficacy of new drugs, and to manufacture new drugs.
● Google: Google uses parallel computing to run its search engine, to analyze data from
its users, and to develop new machine learning algorithms.
● Amazon: Amazon uses parallel computing to run its e-commerce platform, to process
payments, and to deliver packages.
How does Parallel Computing save time in the execution of applications?

Parallel computing saves time in the execution of an application by
dividing the application into smaller tasks that can be executed
simultaneously on multiple processors. This can significantly reduce
the amount of time it takes to execute the application.
Parallel computing saves time in the execution of applications by:

● Reducing the amount of time spent waiting for I/O operations (see the
sketch below)
● Reducing the amount of time spent on synchronization
● Using specialized hardware
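
A minimal sketch of the first point in Python: time.sleep stands in for an I/O
wait such as a network request, and a thread pool lets the five waits overlap
instead of running back to back.

import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i):
    time.sleep(1.0)        # simulated 1-second I/O wait
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch, range(5)))
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.1f}s")   # five 1-second waits overlap: ~1s, not ~5s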
