Parallelization Techniques: Heat Conduction
Code Overview: The serial version of the program simulates the heat distribution along a
rod over time using the finite difference method. In this approach:
1. Initialization: We initialize the temperature distribution along the rod, with the
initial profile T(x, 0) given in the problem specification. The boundaries are held at a
fixed temperature T0 = 0, as per the problem specification.
2. Temperature Update: The temperature at each interior point is updated in time by
discretizing the heat equation ∂T/∂t = α·∂²T/∂x² with an explicit finite-difference
scheme:
T_new[i] = T[i] + (α·Δt/Δx²)·(T[i+1] − 2·T[i] + T[i−1])
where α is the thermal diffusivity constant, and Δx and Δt are the spatial and time
steps, respectively. For this explicit scheme to be stable, Δt must satisfy
Δt ≤ Δx²/(2α). A worked instance of the update rule appears right after this list.
3. Main Simulation Loop: In the main function, the program iterates through the time
steps, calling the temperature update function at each step and swapping the old
and new temperature arrays.
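As a quick worked instance of the update rule (all numbers here are illustrative, not
taken from the problem specification):

#include <stdio.h>

int main(void) {
    /* dt <= dx*dx / (2*alpha) = 0.5, so dt = 0.4 is stable */
    const double alpha = 0.01, dx = 0.1, dt = 0.4;
    /* three neighboring grid values at the current time step */
    double T_left = 10.0, T_mid = 20.0, T_right = 40.0;
    /* coefficient alpha*dt/(dx*dx) = 0.4; stencil = 40 - 2*20 + 10 = 10 */
    double T_mid_new = T_mid + alpha * dt / (dx * dx) * (T_right - 2.0 * T_mid + T_left);
    printf("%.1f\n", T_mid_new);   /* prints 24.0 */
    return 0;
}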
CODE
#include <iostream>
#include <cmath>
#include <vector>
#include <iomanip>
using namespace std;

const double alpha = 0.01, T0 = 0.0;  // diffusivity and boundary temperature (assumed values)
const int N = 100;                    // number of spatial intervals (assumed)

void calculateTemperature() {
    double rodLength, queryTime, queryPoint;
    cout << "Rod length (m): ";  cin >> rodLength;
    cout << "Query time (s): ";  cin >> queryTime;
    cout << "Query point (m): "; cin >> queryPoint;
    if (queryPoint < 0.0 || queryPoint > rodLength) {
        cout << "Invalid point. Please enter a value between 0 and " << rodLength << " meters.\n";
        return;
    }
    double dx = rodLength / N, dt = 0.5 * dx * dx / alpha;  // stable explicit time step
    int steps = (int)ceil(queryTime / dt);
    vector<double> T(N + 1), T_new(N + 1, T0);
    for (int i = 0; i <= N; ++i)
        T[i] = sin(acos(-1.0) * (i * dx) / rodLength);  // assumed initial profile
    for (int s = 0; s < steps; ++s) {
        for (int i = 1; i < N; ++i)  // explicit finite-difference update
            T_new[i] = T[i] + alpha * dt / (dx * dx) * (T[i + 1] - 2.0 * T[i] + T[i - 1]);
        T_new[0] = T_new[N] = T0;    // boundaries held at T0
        T.swap(T_new);               // swap old and new temperature arrays
    }
    int pointIndex = (int)round(queryPoint / dx);
    cout << fixed << setprecision(2) << T[pointIndex] << " °C\n";
}

int main() {
    int choice;
    do {
        cout << "1. Calculate temperature\n2. Exit\nChoice: ";
        cin >> choice;
        switch (choice) {
            case 1: calculateTemperature(); break;
            case 2: break;
            default: cout << "Invalid choice.\n";
        }
    } while (choice != 2);
    return 0;
}
Code Overview: The second approach parallelizes the same simulation on a shared-memory
system using OpenMP:
1. Initialization: The grid setup and initial temperature profile are the same as in the
serial version.
2. Parallel Temperature Update: In this function, the temperature updates are done
in parallel using the #pragma omp parallel for directive. The dynamic scheduling
policy assigns loop iterations to threads at run time, helping balance the load when
the computational cost varies across iterations (see the scheduling sketch after this list).
3. Timing and Performance Measurement: The code measures the execution time
using clock_t and outputs the time taken to complete the simulation. This
comparison helps demonstrate the performance improvement of the parallel
approach over the serial one (a wall-clock timing sketch also follows this list).
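The scheduling clause in isolation, as a minimal sketch (the function and its arguments
are illustrative, not taken from the original listing):

#include <omp.h>

/* schedule(dynamic) hands out iterations to idle threads on demand,
   balancing the load when iteration costs vary; schedule(static)
   would split the range into fixed contiguous chunks up front. */
void update(const double* T, double* T_new, int N, double c, double T0) {
    #pragma omp parallel for schedule(dynamic)
    for (int i = 1; i < N; i++)
        T_new[i] = T[i] + c * (T[i + 1] - 2.0 * T[i] + T[i - 1]);
    T_new[0] = T0;
    T_new[N] = T0;
}

For a uniform stencil like this one, every iteration costs the same, so schedule(static)
usually has lower overhead; dynamic scheduling pays off mainly when per-iteration work
is uneven.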
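One caveat worth noting: in a multithreaded program, clock() reports CPU time summed
across all threads, so it can overstate the elapsed time. A minimal sketch of wall-clock
timing with omp_get_wtime() (the loop below is a hypothetical stand-in for the
simulation):

#include <stdio.h>
#include <omp.h>

int main(void) {
    double start = omp_get_wtime();          /* wall-clock seconds, unlike clock() */
    double s = 0.0;
    #pragma omp parallel for reduction(+:s)  /* stand-in workload */
    for (int i = 0; i < 100000000; i++)
        s += i * 1e-9;
    printf("checksum %.3f, elapsed %.3f s\n", s, omp_get_wtime() - start);
    return 0;
}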
CODE
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>
#include <time.h>

#define N 100                         /* number of spatial intervals (assumed) */
const double alpha = 0.01, T0 = 0.0;  /* diffusivity and boundary temperature (assumed) */

void calculateTemperature(void) {
    double rodLength, queryTime, queryPoint;
    printf("Rod length (m): ");  scanf("%lf", &rodLength);
    printf("Query time (s): ");  scanf("%lf", &queryTime);
    printf("Query point (m): "); scanf("%lf", &queryPoint);
    if (queryPoint < 0.0 || queryPoint > rodLength) {
        printf("Invalid point. Please enter a value between 0 and %.2f meters.\n", rodLength);
        return;
    }
    double dx = rodLength / N, dt = 0.5 * dx * dx / alpha;  /* stable explicit time step */
    int steps = (int)ceil(queryTime / dt);
    double* T     = (double*)malloc((N + 1) * sizeof(double));
    double* T_new = (double*)malloc((N + 1) * sizeof(double));
    for (int i = 0; i <= N; i++)
        T[i] = sin(acos(-1.0) * (i * dx) / rodLength);  /* assumed initial profile */
    omp_set_num_threads(omp_get_num_procs());           /* one thread per core */
    clock_t start = clock();
    for (int s = 0; s < steps; s++) {
        #pragma omp parallel for schedule(dynamic)
        for (int i = 1; i < N; i++)                     /* parallel stencil update */
            T_new[i] = T[i] + alpha * dt / (dx * dx) * (T[i + 1] - 2.0 * T[i] + T[i - 1]);
        T_new[0] = T_new[N] = T0;                       /* fixed boundaries */
        double* temp = T;                               /* swap old and new arrays */
        T = T_new;
        T_new = temp;
    }
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
    int pointIndex = (int)round(queryPoint / dx);
    printf("%.2f °C (%.3f s)\n", T[pointIndex], elapsed);
    free(T);
    free(T_new);
}

int main(void) {
    int choice;
    do {
        printf("1. Calculate temperature\n2. Exit\nChoice: ");
        scanf("%d", &choice);
        switch (choice) {
            case 1: calculateTemperature(); break;
            case 2: break;
            default: printf("Invalid choice.\n");
        }
    } while (choice != 2);
    return 0;
}
Code Overview: The third approach involves using MPI (Message Passing Interface), which
allows for distributed computing across multiple nodes (machines or processors). In this
version:
The rod is divided into segments, and each process handles a portion of the rod.
Each process updates the temperature for its assigned section of the rod, and after
each step, the results are synchronized across processes.
1. Initialization and Data Distribution: The rod is divided into segments, with each
process handling one segment. The initial temperature is distributed across all
processes using MPI broadcasting.
2. Boundary Communication: Each process sends and receives data to and from
adjacent processes to update the temperature at its segment boundaries. This keeps
the simulation consistent across all processes (see the exchange sketch after this list).
3. Parallel Update: Each process calculates the temperature for its segment
independently and updates it using the finite difference method. After each time
step, the temperature array is updated, and the boundary values are exchanged.
4. Synchronization and Timing: The code synchronizes processes at each time step
using MPI barriers to ensure all processes are up to date before proceeding to the
next step.
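As a focused sketch of the boundary communication in item 2 (the array layout and
function name are illustrative; each rank owns cells T[1..local_N], with T[0] and
T[local_N + 1] as ghost cells):

#include <mpi.h>

/* Exchange ghost cells with the neighboring ranks. */
void exchangeGhostCells(double* T, int local_N, int rank, int size) {
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
    /* Send the first owned cell left; receive the left ghost cell. */
    MPI_Sendrecv(&T[1], 1, MPI_DOUBLE, left, 0,
                 &T[0], 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* Send the last owned cell right; receive the right ghost cell. */
    MPI_Sendrecv(&T[local_N], 1, MPI_DOUBLE, right, 0,
                 &T[local_N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

Using MPI_PROC_NULL for the missing neighbors turns the edge sends and receives into
no-ops, so the first and last ranks need no special-case branches, and MPI_Sendrecv
avoids the deadlocks that paired blocking sends and receives can cause.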
CODE
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>
#include <time.h>

#define N 100                         /* global number of spatial intervals (assumed) */
const double alpha = 0.01, T0 = 0.0;  /* diffusivity and boundary temperature (assumed) */

/* Explicit finite-difference update of the local_N owned cells;
   T[0] and T[local_N + 1] are ghost cells filled by neighbor exchange. */
void updateTemperature(double* T, double* T_new, int local_N, double dt, double dx)
{
    for (int i = 1; i <= local_N; i++)
        T_new[i] = T[i] + alpha * dt / (dx * dx) * (T[i + 1] - 2.0 * T[i] + T[i - 1]);
}

void calculateTemperature(int rank, int size) {
    double rodLength, queryTime, queryPoint;
    if (rank == 0) {
        printf("Rod length (m): ");  scanf("%lf", &rodLength);
        printf("Query time (s): ");  scanf("%lf", &queryTime);
        printf("Query point (m): "); scanf("%lf", &queryPoint);
    }
    /* Distribute the inputs so every rank runs the same simulation. */
    MPI_Bcast(&rodLength, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Bcast(&queryTime, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Bcast(&queryPoint, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if (queryPoint < 0.0 || queryPoint > rodLength) {
        if (rank == 0)
            printf("Invalid point. Please enter a value between 0 and %.2f meters.\n", rodLength);
        return;
    }
    double dx = rodLength / N, dt = 0.5 * dx * dx / alpha;  /* stable explicit time step */
    int steps = (int)ceil(queryTime / dt);
    int chunk = N / size;                     /* owned cells per rank */
    int local_N = chunk;
    if (rank == size - 1) {
        local_N += N % size;                  /* last rank takes the remainder */
    }
    int offset = rank * chunk;                /* global index before the first owned cell */
    double* T     = (double*)calloc(local_N + 2, sizeof(double));  /* +2 ghost cells */
    double* T_new = (double*)calloc(local_N + 2, sizeof(double));
    for (int i = 1; i <= local_N; i++) {
        double x = (offset + i) * dx;
        T[i] = sin(acos(-1.0) * x / rodLength);  /* assumed initial profile */
    }
    if (rank == 0) { T[0] = T_new[0] = T0; }     /* left physical boundary lives in the ghost cell */
    MPI_Barrier(MPI_COMM_WORLD);                 /* start timing together */
    double start = MPI_Wtime();
    for (int s = 0; s < steps; s++) {
        /* Exchange ghost cells with adjacent ranks. */
        if (rank > 0)
            MPI_Sendrecv(&T[1], 1, MPI_DOUBLE, rank - 1, 0,
                         &T[0], 1, MPI_DOUBLE, rank - 1, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if (rank < size - 1)
            MPI_Sendrecv(&T[local_N], 1, MPI_DOUBLE, rank + 1, 0,
                         &T[local_N + 1], 1, MPI_DOUBLE, rank + 1, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        updateTemperature(T, T_new, local_N, dt, dx);
        if (rank == size - 1) T_new[local_N] = T0;  /* right physical boundary */
        double* temp = T;                           /* swap old and new arrays */
        T = T_new;
        T_new = temp;
        MPI_Barrier(MPI_COMM_WORLD);   /* keep all ranks in step, as described above */
    }
    /* The rank that owns the query point reports it to rank 0. */
    int pointIndex = (int)round(queryPoint / dx);
    int owner = (pointIndex - 1) / chunk;
    if (owner >= size) owner = size - 1;
    double tempAtPoint = T0;
    if (rank == owner) tempAtPoint = T[pointIndex - owner * chunk];
    if (rank == owner && owner != 0)
        MPI_Send(&tempAtPoint, 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
    if (rank == 0) {
        if (owner != 0)
            MPI_Recv(&tempAtPoint, 1, MPI_DOUBLE, owner, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("%.2f °C (%.3f s)\n", tempAtPoint, MPI_Wtime() - start);
    }
    free(T);
    free(T_new);
}

int main(int argc, char* argv[]) {
    int rank, size, choice;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    do {
        if (rank == 0) {
            printf("1. Calculate temperature\n2. Exit\nChoice: ");
            scanf("%d", &choice);
        }
        /* Every rank must see the same choice, or the collective calls deadlock. */
        MPI_Bcast(&choice, 1, MPI_INT, 0, MPI_COMM_WORLD);
        switch (choice) {
            case 1: calculateTemperature(rank, size); break;
            case 2: break;
            default: if (rank == 0) printf("Invalid choice.\n");
        }
    } while (choice != 2);
    MPI_Finalize();
    return 0;
}
Of the three approaches, the MPI parallel simulation is the most scalable and the best
suited to large-scale problems distributed across multiple machines or nodes.
Conclusion
The simulations were implemented using three different approaches: serial, OpenMP
parallel, and MPI parallel. The OpenMP and MPI versions of the simulation showed
significant improvements in performance, particularly for larger problems or those that
require high computational resources. OpenMP provides a quick solution for parallelization
on shared-memory systems, while MPI offers scalability across distributed systems, making
it ideal for large-scale simulations.