Lecture 5

The document provides an introduction to OpenMP, an API for parallel programming that simplifies the execution of multiple computations simultaneously on shared-memory architectures. It highlights key features such as ease of use, flexibility, and scalability, and explains how OpenMP manages threads and directives to optimize performance. Additionally, it outlines the basic structure of OpenMP programs, including enabling OpenMP, creating parallel regions, and utilizing work-sharing constructs for efficient task distribution.


Parallel and Distributed Computing

Topic: Introduction to OpenMP in Parallel Programming

Muhammad Yousaf
Visiting Lecturer,
Department of Computer Science
[email protected]
Overview of Parallel Programming
• Parallel programming allows the simultaneous execution of multiple
computations, speeding up the processing time for large tasks by
leveraging multiple CPU cores.
What is OpenMP?
OpenMP (Open Multi-Processing) is an API for parallel programming that supports multi-platform shared-memory parallelism. It simplifies parallel programming in C, C++, and Fortran by providing compiler directives, library routines, and environment variables to create and manage parallel regions.
What is OpenMP?
Key Features
• Easy to use and integrate into existing codebases.
• Supports both task and data parallelism.
• Suitable for shared-memory architectures.
• Allows incremental parallelism.
How OpenMP Works
OpenMP works by dividing tasks among available threads. It
automatically manages thread creation, task assignment,
synchronization, and data handling based on the specified
directives.
• Threads: The basic unit of parallelism in OpenMP. The
programmer specifies how many threads to use, and OpenMP
creates and manages them.
• Directives: Pragmas that tell the compiler which parts of the code
to execute in parallel. For example, the #pragma omp parallel
directive marks a block of code to be run by multiple threads.
Advantages of OpenMP
Simplicity: Minimal changes are required to the existing
serial code.
Flexibility: Can parallelize different levels of a program,
from loop iterations to entire functions.
Portability: Works on multiple platforms without
significant code changes.
Scalability: By default the runtime sizes thread teams to the
cores available, so the same code can use more cores without modification.
Basic Structure of OpenMP Programs
1. Enabling OpenMP
Include the OpenMP header and compile with the compiler's OpenMP flag (e.g. -fopenmp for GCC and Clang, /openmp for MSVC); the header declares runtime routines such as omp_get_thread_num().
#include <omp.h>
2. Parallel Regions
The parallel region is the core construct that defines
which section of the code should be executed by
multiple threads.
Basic Structure of OpenMP Programs

This code creates a parallel region where multiple threads execute the printf
statement. Each thread gets its unique identifier via
omp_get_thread_num().
Basic Structure of OpenMP Programs
3. Parallelizing Loops
One of the most common use cases for OpenMP is parallelizing loops. OpenMP can distribute the
iterations of a loop across multiple threads.
Here, each iteration of the loop can be processed by a different thread, speeding up loop
execution.
Basic Structure of OpenMP Programs
4. Work-Sharing Constructs
OpenMP includes work-sharing constructs such as for, sections, and single, allowing
different types of task distribution:
• for: Distributes loop iterations across threads.
• sections: Executes different blocks of code concurrently.
• single: Ensures a block of code is executed by only one thread.
Any Questions?
