Parallel Computing


Submitted by: Virendra Singh Yadav
Overview
 What is Parallel Computing
 Why Use Parallel Computing
 Concepts and Terminology
 Parallel Computer Memory Architectures
 Parallel Programming Models
 Examples
 Conclusion
What is Parallel Computing?
 Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem.
Why Use Parallel Computing?
 Saves time
 Solves larger problems
 Reduces costs
 Provides concurrency
Concepts and Terminology:
Types of Parallelism
 Data Parallelism
 Task Parallelism
Concepts and Terminology:
Flynn’s Classical Taxonomy
 Distinguishes multi-processor architectures by their instruction streams and data streams

 SISD – Single Instruction, Single Data

 SIMD – Single Instruction, Multiple Data

 MISD – Multiple Instruction, Single Data

 MIMD – Multiple Instruction, Multiple Data
Flynn’s Classical Taxonomy:
SISD
 Serial (non-parallel) computer
 Only one instruction stream and one data stream are acted on during any one clock cycle
Flynn’s Classical Taxonomy:
SIMD
 All processing units execute the same instruction at any given clock cycle.
 Each processing unit operates on a different data element.
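
To make SIMD concrete, here is a minimal sketch using x86 AVX intrinsics (an assumption not in the slides: an AVX-capable CPU and a compiler flag such as -mavx). A single vector instruction applies the same addition to eight data elements at once.

```c
#include <stdio.h>
#include <immintrin.h>   /* AVX intrinsics */

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];
    __m256 va = _mm256_loadu_ps(a);       /* load 8 floats */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);    /* one instruction, 8 additions */
    _mm256_storeu_ps(c, vc);
    for (int i = 0; i < 8; i++)
        printf("%g ", c[i]);              /* prints eight 9s */
    printf("\n");
    return 0;
}
```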
Flynn’s Classical Taxonomy:
MISD
 Different instructions operate on a single data element.
 Example: multiple cryptography algorithms attempting to crack a single coded message.
Flynn’s Classical Taxonomy:
MIMD
 Can execute different instructions on different data elements.
 Most common type of parallel computer.
Parallel Computer Memory Architectures:
Shared Memory Architecture
 All processors access all memory as a single global address space.
 Data sharing is fast.
 Lack of scalability between memory and CPUs.
Parallel Computer Memory Architectures:
Distributed Memory
 Each processor has its own memory.
 Scalable, with no overhead for cache coherency.
 The programmer is responsible for many details of communication between processors.
Parallel Programming Models
 Shared Memory Model
 Message Passing Model
 Data Parallel Model
Parallel Programming Models:
Shared Memory Model
 In the shared-memory programming model, tasks share a common address space, which they read and write asynchronously.
 Locks may be used to control shared memory access.
 Program development can be simplified since there is no need to explicitly specify communication between tasks.
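
A minimal pthreads sketch of this model (the shared counter and thread count are illustrative, not from the slides): all threads read and write one address space, and a lock serializes access to the shared variable.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *work(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* lock controls shared access */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, work, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);        /* 400000 with locking */
    return 0;
}
```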
Parallel Programming Models:
Message Passing Model
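
The original slide shows only a figure. In the message-passing model, each task uses its own local memory, and tasks exchange data by explicitly sending and receiving messages. A minimal MPI sketch, assuming an MPI installation (compile with mpicc, run with mpirun -np 2):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* explicit send */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                         /* matching receive */
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```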
Parallel Programming Models:
Data Parallel Model
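
This slide is also figure-only. In the data-parallel model, tasks perform the same operation, each on its own partition of a data structure. A minimal OpenMP sketch (assumes compilation with -fopenmp; the array and operation are illustrative):

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N];

int main(void) {
    /* Same operation on every element; OpenMP partitions the
       iteration space among the threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * i;
    printf("a[N-1] = %g\n", a[N - 1]);
    return 0;
}
```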
Designing Parallel Programs
 Partitioning
   • Domain Decomposition
   • Functional Decomposition
 Communication
 Synchronization
Partitioning:
Domain Decomposition
 Each task handles a portion of the data set.
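
As a hedged sketch of what "a portion of the data set" can mean in practice, the helper below (the name and block scheme are illustrative, not from the slides) gives task rank of ntasks a contiguous block of an n-element array:

```c
#include <stdio.h>

/* Block domain decomposition: task `rank` of `ntasks` owns
   elements [start, end) of an n-element data set. */
static void block_range(int n, int rank, int ntasks, int *start, int *end) {
    int base = n / ntasks, rem = n % ntasks;
    *start = rank * base + (rank < rem ? rank : rem);
    *end = *start + base + (rank < rem ? 1 : 0);
}

int main(void) {
    for (int rank = 0; rank < 4; rank++) {
        int s, e;
        block_range(10, rank, 4, &s, &e);      /* 10 elements, 4 tasks */
        printf("task %d handles [%d, %d)\n", rank, s, e);
    }
    return 0;
}
```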
Partitioning:
Functional Decomposition
 Each task performs one function of the overall work.
Designing Parallel Programs
Communication
 Synchronous communications are often referred to as blocking communications, since other work must wait until the communications have completed.
 Asynchronous communications allow tasks to transfer data independently of one another.
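
In MPI terms, the blocking/non-blocking distinction might look like the sketch below (an illustration, not from the slides): MPI_Send returns only when the buffer is safe to reuse, while MPI_Irecv returns immediately so the task can do independent work before waiting.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    double buf[4] = {1, 2, 3, 4};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* Synchronous/blocking: returns only when buf is safe to reuse. */
        MPI_Send(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        /* Asynchronous/non-blocking: returns immediately. */
        MPI_Irecv(buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
        /* ... independent work can overlap the transfer here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);     /* block until complete */
        printf("rank 1 received %g %g %g %g\n", buf[0], buf[1], buf[2], buf[3]);
    }
    MPI_Finalize();
    return 0;
}
```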
Designing Parallel Programs
Synchronization
Types of Synchronization:

 Barrier

 Lock / semaphore

 Synchronous communication operations
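
A minimal OpenMP sketch of a barrier (assumes -fopenmp): no thread starts phase 2 until every thread has finished phase 1.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel
    {
        printf("thread %d: phase 1\n", omp_get_thread_num());
        #pragma omp barrier   /* all threads wait here */
        printf("thread %d: phase 2\n", omp_get_thread_num());
    }
    return 0;
}
```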
Example:
 As a simple example, if we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the total runtime.
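
A hedged pthreads version of this two-task example (the task bodies are placeholders): the two tasks run concurrently, so the total runtime approaches that of the longer task rather than the sum of both.

```c
#include <pthread.h>

static void *task_A(void *arg) { /* ... work for task A ... */ return NULL; }
static void *task_B(void *arg) { /* ... work for task B ... */ return NULL; }

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, task_A, NULL);   /* may run on CPU "a" */
    pthread_create(&b, NULL, task_B, NULL);   /* may run on CPU "b" */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```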
Example:
Array Processing
 Serial solution
    Perform a function on a 2D array.
    A single processor iterates through each element in the array.
 Possible parallel solution
    Assign each processor a partition of the array.
    Each processor iterates through its own partition.
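
A possible OpenMP rendering of the parallel solution (dimensions and the function applied are illustrative): the loop iterations, and therefore the rows of the array, are divided among the available processors.

```c
#include <stdio.h>
#include <math.h>
#include <omp.h>

#define ROWS 1000
#define COLS 1000

static double a[ROWS][COLS];

int main(void) {
    /* Each thread iterates over its own partition of the rows. */
    #pragma omp parallel for
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            a[i][j] = sqrt((double)(i * COLS + j));
    printf("a[%d][%d] = %f\n", ROWS - 1, COLS - 1, a[ROWS - 1][COLS - 1]);
    return 0;
}
```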
Conclusion
 Parallel computing can greatly reduce the time needed to solve large computational problems.
 There are many different approaches and models of parallel computing.
References
 https://computing.llnl.gov/tutorials/parallel_comp
 Introduction to Parallel Computing, www.llnl.gov/computing/tutorials/parallel_comp/#Whatis
 www.cs.berkeley.edu/~yelick/cs267-sp04/lectures/01/lect01-intro
 http://www-users.cs.umn.edu/~karypis/parbook/
Thank You