
Rajarambapu Institute of Technology, Sakharale

(An Autonomous Institute)


Final Year B. Tech. (Department of Information Technology)
Semester VII

Course Code: IT4032 Course Title: Parallel Computing (Theory)


Teaching Scheme: L-3 T-0 P-0 Credits: 3
Evaluation Scheme: ISE-20%, Unit Test I-15%, Unit Test II-15%, ESE-50%
(Minimum Passing Marks: 40%)
Prerequisite Courses: Data Structures, Algorithms
Course Learning Outcomes:
On completion of this course the student will be able to:
1. Summarize parallel programming techniques and compare them with sequential
programming.
2. Write parallel programs using MPI, OpenMP, CUDA, etc.
3. Describe the CUDA architecture and its memory structure.
4. Compare parallel matrix algorithms.
5. Explain parallel sorting algorithms.
Unit 1: Introduction to Parallel Computing (6)
Implicit Parallelism, Limitations of Memory, Dichotomy of Parallel
Computing Platforms, Physical Organization of Parallel Platforms,
Communication Costs in Parallel Machines, Routing Mechanisms for
Interconnection Networks, Impact of Process-Processor Mapping and
Mapping Techniques.
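The communication-cost topic of this unit is usually summarised by two closed-form models. The symbols below follow the common textbook notation (startup time t_s, per-hop time t_h, per-word transfer time t_w, message length m words, path length l links); the function names are illustrative. A minimal sketch:

```python
def cut_through_cost(t_s, t_h, t_w, m, l):
    """Cut-through routing: one startup, a header delay at each of the
    l hops, then the m words pipelined at t_w each."""
    return t_s + l * t_h + m * t_w

def store_and_forward_cost(t_s, t_h, t_w, m, l):
    """Store-and-forward routing: the entire m-word message is received
    and retransmitted in full at each of the l links."""
    return t_s + (m * t_w + t_h) * l
```

For long paths the store-and-forward term grows with the product m*l, which is why cut-through routing dominates in modern interconnects.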
Unit 2: Message-Passing Paradigm (7)
Principles of Message-Passing Programming, Send and Receive Operations,
MPI: Topologies and Embedding, Overlapping Communication with
Computation.
Shared Memory Model - OpenMP: OpenMP Programming Model, Concurrent
Tasks in OpenMP, Constructs in OpenMP, Data Handling, OpenMP Library
Functions, Environment Variables.
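Running real MPI programs requires an installed MPI runtime, so as a library-free preview the blocking send/receive semantics of this unit can be imitated with one thread per "rank" and a mailbox queue per rank. The send/recv helpers here are illustrative stand-ins, not the MPI API:

```python
import threading
import queue

NUM_RANKS = 2
inboxes = [queue.Queue() for _ in range(NUM_RANKS)]  # one mailbox per rank
results = []

def send(dest, data):
    inboxes[dest].put(data)       # plays the role of a blocking MPI_Send

def recv(rank):
    return inboxes[rank].get()    # plays the role of a blocking MPI_Recv

def worker(rank):
    if rank == 0:
        send(1, "hello from rank 0")
    else:
        results.append(recv(rank))

threads = [threading.Thread(target=worker, args=(r,)) for r in range(NUM_RANKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The key semantic point the sketch preserves is that recv blocks until a matching message arrives, which is exactly the behaviour that makes deadlock analysis part of this unit.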
Unit 3: Introduction to GPU Computing and CUDA (6)
Introduction to GPU Computing, CUDA Data Parallelism Model, CUDA
Program Structure, Device Memories and Data Transfer, Kernel Functions
and Threading. CUDA Threads: CUDA Thread Organization, Using blockIdx
and threadIdx, Thread Assignment, Thread Scheduling and Latency
Tolerance.
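The blockIdx/threadIdx material can be previewed without a GPU: the sketch below sequentially emulates a 1-D kernel launch, computing each thread's global index exactly as a CUDA kernel would (i = blockIdx.x * blockDim.x + threadIdx.x). The launcher is an illustrative stand-in, not CUDA API:

```python
def launch_1d(grid_dim, block_dim, kernel, *args):
    """Sequentially emulate a 1-D CUDA kernel launch: call the kernel
    once for every (blockIdx, threadIdx) pair in the grid."""
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(block_idx, block_dim, thread_idx, *args)

def vec_add_kernel(block_idx, block_dim, thread_idx, a, b, c, n):
    i = block_idx * block_dim + thread_idx  # global thread index
    if i < n:                               # guard: the grid may overshoot n
        c[i] = a[i] + b[i]

n = 10
a = list(range(n))
b = [2 * x for x in range(n)]
c = [0] * n
launch_1d(3, 4, vec_add_kernel, a, b, c, n)  # 3 blocks of 4 threads cover n=10
```

The bounds check inside the kernel mirrors standard CUDA practice: the grid is rounded up to whole blocks, so the last block may contain threads with no element to process.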
Unit 4: Analytical Modeling (6)
Sources of Overhead, Performance Metrics, The Effect of Granularity on
Performance, Scalability of Parallel System, Minimum Execution Time and
Minimum Cost-Optimal Execution Time
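The performance metrics of this unit reduce to three one-line formulas, using the usual notation (serial time T_s, parallel time T_p on p processors). A minimal sketch:

```python
def speedup(t_s, t_p):
    """S = T_s / T_p."""
    return t_s / t_p

def efficiency(t_s, t_p, p):
    """E = S / p, the fraction of time processors do useful work."""
    return speedup(t_s, t_p) / p

def total_overhead(t_s, t_p, p):
    """T_o = p * T_p - T_s, all work performed beyond the serial algorithm
    (communication, idling, excess computation)."""
    return p * t_p - t_s
```

Scalability analysis in this unit then asks how fast the problem size must grow with p to hold efficiency constant, which is governed by the growth of T_o.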
Unit 5: Dense Matrix Algorithms (5)
Matrix-Vector Multiplication, Matrix-Matrix Multiplication, Solving a System
of Linear Equations
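Matrix-vector multiplication is typically introduced here via 1-D row-wise partitioning. The sequential sketch below marks where each of p processes would work; in a real message-passing version each rank would hold only its own band of rows plus the vector x:

```python
def matvec_rowwise(A, x, p):
    """y = A x with the rows of A split into p contiguous bands. The loop
    over ranks runs sequentially here, but the bands touch disjoint rows
    of y and would execute concurrently in a parallel implementation."""
    n = len(A)
    band = (n + p - 1) // p               # rows per process, rounded up
    y = [0] * n
    for rank in range(p):
        for i in range(rank * band, min((rank + 1) * band, n)):
            y[i] = sum(A[i][j] * x[j] for j in range(len(x)))
    return y
```

Because every rank needs all of x but only a band of A, the communication cost of the true parallel version is dominated by distributing (or all-gathering) the vector.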
Unit 6: Sorting (6)
Issues, Sorting Networks, Bubble Sort and its Variants, Quicksort, Bucket and
Sample Sort
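Among the bubble-sort variants listed, odd-even transposition sort is the one usually singled out for parallelisation, because every compare-exchange within a phase touches a disjoint pair and the phases could each run in parallel. A sequential sketch:

```python
def odd_even_transposition_sort(a):
    """Bubble-sort variant with n phases that alternate compare-exchange
    on (even, even+1) and (odd, odd+1) index pairs; pairs within one
    phase are independent, which is what makes the algorithm parallel."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2            # 0: even phase, 1: odd phase
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

With one element per processor on a linear array, the n phases give an O(n) parallel time, the standard comparison point against the sorting networks covered earlier in the unit.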

Text Books:
1 Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar, “Introduction to
Parallel Computing”, Pearson Education, Second Edition
2 David B. Kirk, Wen-mei W. Hwu, “Programming Massively Parallel Processors:
A Hands-on Approach”, Morgan Kaufmann Publishers

Reference Books:
1 Michael J. Quinn, “Parallel Programming in C with MPI and OpenMP”, Tata
McGraw-Hill Publication
2 Rohit Chandra, Leonardo Dagum, Dave Kohr, Dror Maydan, Jeff McDonald,
Ramesh Menon, “Parallel Programming in OpenMP”, Morgan Kaufmann
Publishers, 2001, ISBN 1-55860-671-8
3 Barbara Chapman, Gabriele Jost, Ruud van der Pas, “Using OpenMP: Portable
Shared Memory Parallel Programming”, The MIT Press, 2008.
4 Link: http://developer.nvidia.com/cuda
5 http://www-users.cs.umn.edu/~karypis/parbook/
