
Parallel Programming

Process and Threads


Outline

 Concurrency
 Processes
 Threads
 MPI
 OpenMP
Concurrency
• A property of computing systems in which several tasks are executing simultaneously
• Tasks are in progress at the same time
• May be running on a single processor, or on more than one
• Typical examples: web server, multiple programs running on your desktop
Concurrency
Time-sharing or multitasking systems

• CPU executes multiple processes by switching among them


• Switching occurs often enough for users to interact with each program while it runs
• In multi-core/multi-computer systems, processes may indeed be running in parallel.
Process

• A process is a program in execution


• States of a process

• New: the process is being created


• Ready: waiting to be assigned to a processor
• Running: instructions are being executed
• Waiting: waiting for some event to occur
(e.g., I/O completion)
• Terminated: has finished execution
Process

• Associated address space


• Program itself (text section)
• Program’s data (data section)
• Stack, heap
Process

• Process control block

• Process ID
• Process status
• CPU registers (PC,...)
• Open files, memory management,...

• Stores the context of a process so that it can be restored after a context switch, allowing the process to continue execution properly.
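
A minimal sketch (not from the slides), assuming POSIX fork()/wait(): the child process passes through the states listed above while the parent waits for its termination.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a new process (New -> Ready) */
    if (pid == 0) {
        printf("child  PID=%d\n", getpid());   /* child runs in its own address space */
    } else if (pid > 0) {
        wait(NULL);                     /* parent waits until the child terminates */
        printf("parent PID=%d\n", getpid());
    }
    return 0;
}
```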
Thread
• Basic unit of CPU utilization
-Flow of control within a process
• A thread includes
-Thread ID
-Program counter
-Register set
-Stack
• Shares resources with other threads within the same process
-Text section
-Data section
-Other OS resources (open files,...)
Thread
Benefits of multi-threading

• Responsiveness
- An interactive application can keep running even if part of it is blocked or
performing a compute-intensive operation.
- A server can accept requests while processing existing ones
• Resource sharing: code and data shared among threads
• Speed: cheap creation and context switching
Cost of multi-threading
Performance overhead:
-Synchronization
-Access to shared resources
Hazards
Deadlocks:
-A thread enters a waiting state for a resource held by another thread, which in
turn is waiting for a resource held by another (possibly the first one).
Race conditions:
-Two or more threads read/write shared data and the result depends on the
actual interleaving of the threads (see the sketch below)
Non-determinism:
-Harder to debug (errors are hard to reproduce)
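
A minimal sketch of a race condition, assuming POSIX threads: two threads increment a shared counter without synchronization, so the final value is usually less than 2,000,000 and differs from run to run.

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;                      /* shared data */

void *work(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* read-modify-write, not atomic: the race */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

Compile with something like gcc race.c -pthread; protecting the increment with a mutex (synchronization overhead) removes the race.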
Single Program Multiple Data (SPMD)

• Most common programming model


• The same program is executed on multiple processors
• Different control flow based on the process/thread ID
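
A minimal SPMD sketch, assuming MPI: every process runs the same program and branches on its rank (process ID). The master/worker split is illustrative only.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    if (rank == 0)
        printf("I am the master of %d processes\n", size);
    else
        printf("I am worker %d\n", rank);
    MPI_Finalize();
    return 0;
}
```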
Message Passing

• Multiple processes (not necessarily running on different nodes)


• Each with its own private address space
• Access to (remote) data of other processes via sending/receiving messages
(explicit communication)

Well-suited for distributed memory


MPI (Message Passing Interface) is the de-facto standard
Message Passing Interface (MPI)

Specified and managed by the MPI Forum


-Library offers a collection of communication primitives
-Language binding for C/C++ and Fortran
-www.mpi-forum.org
Relatively low-level programming model
-Data distribution and communication must be done manually
-Primitives are easy to use, but designing parallel programs is hard
Communication modes
-Point-to-point (message between two processes)
-Collective (message among groups of processes)
1 → n (e.g., broadcast)
n → 1 (e.g., reduce)
n → n (e.g., allreduce)
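
A minimal sketch of both communication modes, assuming at least two MPI processes (e.g., mpirun -np 2): a point-to-point MPI_Send/MPI_Recv pair and a collective n → 1 MPI_Reduce. The value 42 and tag 0 are illustrative only.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value, sum;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* point-to-point: rank 0 sends one int to rank 1 */
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    /* collective: every rank contributes its rank; rank 0 receives the sum */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}
```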
Shared memory

IEEE POSIX Threads (PThreads)


- Standard UNIX threading API. Also used in Windows.
- Over 60 functions: pthread_create, pthread_join, pthread_exit, etc.
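
A minimal PThreads sketch using the functions named above; the count of four threads is illustrative only.

```c
#include <pthread.h>
#include <stdio.h>

void *hello(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    pthread_exit(NULL);                /* terminate this thread explicitly */
}

int main(void) {
    pthread_t threads[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, hello, (void *)i);  /* spawn a thread */
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);                       /* wait for it to finish */
    return 0;
}
```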
OpenMP
Higher level interface based on:
- compiler directives
- library routines
- runtime
Emphasis on high-performance computing
OpenMP
• Specified and managed by the OpenMP ARB
• Assumes shared memory
• Allows definition of shared/private variables
• Language extensions based on:
- Compiler directives
- Library of routines
- Runtime for the creation and management of threads

Currently available for C/C++ and Fortran


www.openmp.org
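
A minimal OpenMP sketch: a compiler directive parallelizes a loop over a shared array, and library routines report the thread ID. Compile with an OpenMP-enabled compiler (e.g., gcc -fopenmp); the array size is illustrative only.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    double a[1000];

    #pragma omp parallel for shared(a)     /* directive: split iterations among threads */
    for (int i = 0; i < 1000; i++)
        a[i] = 2.0 * i;                    /* loop index i is private to each thread */

    #pragma omp parallel                   /* library routines inside a parallel region */
    printf("hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());

    return 0;
}
```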
Hybrid programming

• Multiple processes, each spawning a number of threads
- Inter-process communication via message passing (MPI)
- Intra-process (thread) communication via shared memory
• Especially well-suited for hybrid architectures. For instance:
- one process per shared-memory node, and
- one thread per core
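
A minimal hybrid sketch, assuming MPI + OpenMP: each MPI process spawns OpenMP threads, and MPI_Init_thread requests the thread-support level; the printed message is illustrative only.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank;
    /* request that only the main thread makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel                 /* intra-process parallelism: shared memory */
    printf("process %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();                      /* inter-process communication would use MPI calls */
    return 0;
}
```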
Let us all be the Light of the world...!

THANK YOU!!
