CUDA – An Introduction
               Raymond Tay
CUDA - What and Why
    CUDA™ is a C/C++ SDK developed by Nvidia, released worldwide in 2006 for
     the GeForce™ 8800 graphics card. The CUDA 4.0 SDK was released in 2011.
    CUDA allows HPC developers and researchers to model complex problems and
     achieve up to 100x speedups.
Nvidia GPUs FLOPS
    FLOPS – floating-point operations per second. A measure of how many
     floating-point operations a GPU can perform. More is better 


                                                    GPUs beat CPUs
Nvidia GPUs Memory Bandwidth
    With the massively parallel processors in Nvidia’s GPUs, high memory
     bandwidth plays a big role in high-performance computing.


                                                  GPUs beat CPUs
GPU vs CPU

CPU
    Optimised for low-latency access to cached data sets
    Control logic for out-of-order and speculative execution

GPU
    Optimised for data-parallel, throughput computation
    Architecture tolerant of memory latency
    More transistors dedicated to computation
I don’t know C/C++, should I leave?

    Relax, no need to fret.


                Your Brain Asks:
                Wait a minute, why should I learn
                the C/C++ SDK?

                CUDA Answers:
                Efficiency!!!
I’ve heard about OpenCL. What is it?

    CUDA C: entry point for developers
     who prefer high-level C

    OpenCL: entry point for developers
     who want a low-level API

    Both share back-end compiler and
     optimization technology
What do I need to begin with CUDA?

    An Nvidia CUDA-enabled graphics card, e.g. a Fermi-based card
How does CUDA work

[Diagram: CPU and GPU memories connected over the PCI Bus]

1.  Copy input data from CPU memory to GPU memory
2.  Load the GPU program and execute it,
    caching data on chip for performance
3.  Copy results from GPU memory back to CPU memory
CUDA Kernels: Subdivide into Blocks




    Threads are grouped into blocks
    Blocks are grouped into a grid
    A kernel is executed as a grid of blocks of threads
Transparent Scalability – G80

[Diagram: a grid of blocks 1–12 scheduled onto a G80; blocks 1–8 execute
first, followed by blocks 9–12]

                As the maximum number of blocks is already executing on the
                GPU, blocks 9–12 will wait
Transparent Scalability – GT200

[Diagram: the same grid of blocks 1–12 on a GT200; all 12 blocks execute
concurrently, with the remaining cores idle]
Arrays of Parallel Threads
   ALL threads run the same kernel code
   Each thread has an ID that’s used to compute
    addresses & make control decisions

Block 0 through Block (N-1), each with threads 0–7, all run the same code:

 …
 unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
 int shifted = input_array[tid] + shift_amount;
 if ( shifted > alphabet_max )
   shifted = shifted % (alphabet_max + 1);
 output_array[tid] = shifted;
 …
                                     Parallel code
Compiling a CUDA program

    A C/C++ CUDA application, e.g.:
        float4 me = gx[gtid];
        me.x += me.y * me.z;

    NVCC compiles the application into CPU code plus virtual PTX code.

    Parallel Thread eXecution (PTX)
      – Virtual machine and ISA
      – Programming model
      – Execution resources and state

    The PTX-to-target compiler then generates target code for the actual
    GPU (G80, …), e.g.:
        ld.global.v4.f32   {$f1,$f3,$f5,$f7}, [$r9+0];
        mad.f32            $f1, $f5, $f3, $f1;
Example: Block Cypher

CPU Program:

void host_shift_cypher(unsigned int *input_array,
    unsigned int *output_array, unsigned int shift_amount,
    unsigned int alphabet_max, unsigned int array_length)
{
  for(unsigned int i = 0; i < array_length; i++)
  {
    int element = input_array[i];
    int shifted = element + shift_amount;
    if(shifted > alphabet_max)
    {
      shifted = shifted % (alphabet_max + 1);
    }
    output_array[i] = shifted;
  }
}

int main() {
  host_shift_cypher(input_array, output_array,
      shift_amount, alphabet_max, array_length);
}

GPU Program:

__global__ void shift_cypher(unsigned int *input_array,
    unsigned int *output_array, unsigned int shift_amount,
    unsigned int alphabet_max, unsigned int array_length)
{
  unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
  int shifted = input_array[tid] + shift_amount;
  if ( shifted > alphabet_max )
    shifted = shifted % (alphabet_max + 1);
  output_array[tid] = shifted;
}

int main() {
  dim3 dimGrid(ceil((float)array_length / block_size));
  dim3 dimBlock(block_size);
  shift_cypher<<<dimGrid,dimBlock>>>(input_array, output_array,
      shift_amount, alphabet_max, array_length);
}
I see some WEIRD syntax… is it still C?

  CUDA C is an extension of C
  <<< Dg, Db, Ns, S >>> is the execution
   configuration for the call to a __global__ function; it defines
   the dimensions of the grid and blocks that’ll be used
     (dynamically allocated shared memory & stream are optional)
  __global__ declares that a function is a kernel, which is
   executed on the GPU and callable from the host
   only. This call is asynchronous.
  See the CUDA C Programming Guide.
How does the CUDA Kernel get Data?

  Allocate CPU memory for n integers e.g. malloc(…)
  Allocate GPU memory for n integers e.g. cudaMalloc(…)
  Copy the CPU memory to GPU memory for n
   integers e.g. cudaMemcpy(…, cudaMemcpyHostToDevice)
  Copy the GPU memory back to CPU memory once computation
   is done e.g. cudaMemcpy(…, cudaMemcpyDeviceToHost)
  Free the GPU & CPU memory e.g. cudaFree(…)
Example: Block Cypher (Host Code)
#include <stdio.h>

int main() {
 unsigned int num_bytes = sizeof(int) * (1 << 22);
 unsigned int * input_array = 0;
 unsigned int * output_array = 0;
…
 cudaMalloc((void**)&input_array, num_bytes);
 cudaMalloc((void**)&output_array, num_bytes);
 cudaMemcpy(input_array, host_input_array, num_bytes, cudaMemcpyHostToDevice);
…
 // the gpu computes the kernel and the results are transferred out of the gpu to the host.
 cudaMemcpy(host_output_array, output_array, num_bytes, cudaMemcpyDeviceToHost);
…
 // free the memory
 cudaFree(input_array);
 cudaFree(output_array);
}
Compiling the Block Cypher GPU Code

    nvcc is the compiler and should be accessible from
     your PATH variable. Also set the dynamic library load path:
       UNIX: $PATH, Win: %PATH%
       UNIX: $LD_LIBRARY_PATH / $DYLD_LIBRARY_PATH

    nvcc block-cypher.cu –arch=sm_12
       Compiles the GPU code for the GPU architecture sm_12
    nvcc –g –G block-cypher.cu –arch=sm_12
       Compiles the program such that both the CPU and GPU code are
       built in debug mode
Debugger

    CUDA-GDB
      • Based on GDB
      • Linux
      • Mac OS X

    Parallel Nsight
      • Plugin inside Visual Studio
Visual Profiler & Memcheck

    Visual Profiler
      • Microsoft Windows
      • Linux
      • Mac OS X
      • Analyze performance

    CUDA-MEMCHECK
      • Microsoft Windows
      • Linux
      • Mac OS X
      • Detect memory access errors
Hints
    Think about producing a serial algorithm that
     executes correctly on a CPU
    Think about producing a parallel (CUDA/OpenCL)
     algorithm from that serial algorithm
    Obtain an initial run time (call it the gold standard?)
       Use the profiler to profile this initial run (typically it’s quite
        bad )
    Fine-tune your code to take advantage of shared
     memory, improve memory coalescing, reduce shared
     memory conflicts etc (consult the best practices guide &
     SDK)
       Use the profiler to conduct cross comparisons
Hints (Not exhaustive!)
    Be aware of the trade-offs when your kernel becomes
     too complicated:
       If you notice the kernel has a lot of local (thread) variables
        e.g. int i, float j : register spilling
       If you notice the run time is still slow EVEN AFTER you’ve
        used shared memory, re-assess the memory access patterns :
        shared memory (bank) conflicts
       TRY to reduce the number of conditionals e.g. if statements : thread
        divergence
       TRY to unroll ANY loops in the kernel code e.g. #pragma
        unroll n
       Don’t use thread blocks whose size is not a multiple of warpSize.
Other cool things in the CUDA SDK 4.0
    GPUDirect
    Unified Virtual Address Space
    Multi-GPU
         P2P Memory Access/Copy (gels with the UVA)
    Concurrent Execution
         Kernel + Data
         Streams, Events
    GPU Memories
         Shared, Texture, Surface, Constant, Registers, Portable, Write-combining, Page-locked/
          Pinned
    OpenGL, Direct3D interoperability
    Atomic functions, Fast Math Functions
    Dynamic Global Memory Allocation (in-kernel)
         Determine how much the device supports e.g. cudaDeviceGetLimit
         Set it before you launch the kernel e.g. cudaDeviceSetLimit
         Free it!
Additional Resources
    CUDA FAQ (http://tegradeveloper.nvidia.com/cuda-faq)
    CUDA Tools & Ecosystem (http://tegradeveloper.nvidia.com/cuda-tools-ecosystem)
    CUDA Downloads (http://tegradeveloper.nvidia.com/cuda-downloads)
    NVIDIA Forums (http://forums.nvidia.com/index.php?showforum=62)
    GPGPU (http://gpgpu.org)
    CUDA By Example
     (http://tegradeveloper.nvidia.com/content/cuda-example-introduction-general-purpose-gpu-programming-0)
         Jason Sanders & Edward Kandrot
    GPU Computing Gems Emerald Edition
     (http://www.amazon.com/GPU-Computing-Gems-Emerald-Applications/dp/0123849888/)
         Editor in Chief: Prof Hwu Wen-Mei
CUDA Libraries
  Visit this site:
   http://developer.nvidia.com/cuda-tools-ecosystem#Libraries
  Thrust, CUFFT, CUBLAS, CUSP, NPP, OpenCV, GPU
   AI-Tree Search, GPU AI-Path Finding
  A lot of the libraries are hosted on Google Code.
   Many more gems in there too!
Questions?
THANK YOU
GPU memories: Shared

             More than 1 Tbyte/sec
              aggregate memory bandwidth
             Use it
                    As a cache
                    To reorganize global memory accesses into
                     coalesced pattern
                    To share data between threads

             16 kbytes per SM (Before Fermi)
             64 kbytes per SM (Fermi; split between shared memory and L1 cache)
GPU memories: Texture

                  Texture is an object for reading data
                  Data is cached
                  Host actions
                         Allocate memory on GPU
                         Create a texture memory reference object
                         Bind the texture object to memory
                         Clean up after use
                  GPU actions
                         Fetch using texture references
                           tex1Dfetch(), tex1D(), tex2D(), tex3D()
GPU memories: Constant

             Write by host, read by GPU
             Data is cached

             Useful for tables of constants

             64 kbytes
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights
Andrew Marnell
 
Big Data Analytics Quick Research Guide by Arthur Morgan
Big Data Analytics Quick Research Guide by Arthur MorganBig Data Analytics Quick Research Guide by Arthur Morgan
Big Data Analytics Quick Research Guide by Arthur Morgan
Arthur Morgan
 
Build Your Own Copilot & Agents For Devs
Build Your Own Copilot & Agents For DevsBuild Your Own Copilot & Agents For Devs
Build Your Own Copilot & Agents For Devs
Brian McKeiver
 
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx
Special Meetup Edition - TDX Bengaluru Meetup #52.pptxSpecial Meetup Edition - TDX Bengaluru Meetup #52.pptx
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx
shyamraj55
 
Technology Trends in 2025: AI and Big Data Analytics
Technology Trends in 2025: AI and Big Data AnalyticsTechnology Trends in 2025: AI and Big Data Analytics
Technology Trends in 2025: AI and Big Data Analytics
InData Labs
 

Introduction to CUDA

  • 2. CUDA - What and Why   CUDA™ is a C/C++ SDK developed by Nvidia, released world-wide in 2006 for the GeForce™ 8800 graphics card. The CUDA 4.0 SDK was released in 2011.   CUDA allows HPC developers and researchers to model complex problems and achieve up to 100x speedups.
  • 3. Nvidia GPUs FLOPS   FLOPS – floating-point operations per second. A measure of how many floating-point operations a GPU can perform. More is better  GPUs beat CPUs
  • 4. Nvidia GPUs Memory Bandwidth   With the massively parallel processors in Nvidia’s GPUs, high memory bandwidth plays a big role in high-performance computing. GPUs beat CPUs
  • 5. GPU vs CPU   CPU: optimised for low-latency access to cached data sets; control logic for out-of-order and speculative execution.   GPU: optimised for data-parallel, throughput computation; architecture tolerant of memory latency; more transistors dedicated to computation.
  • 6. I don’t know C/C++, should I leave?   Relax, no need to fret. Your Brain Asks: Wait a minute, why should I learn the C/C++ SDK? CUDA Answers: Efficiency!!!
  • 7. I’ve heard about OpenCL. What is it?   CUDA C is the entry point for developers who prefer high-level C; OpenCL is the entry point for developers who want a low-level API. Both share back-end compiler and optimization technology.
  • 8. What do I need to begin with CUDA?   An Nvidia CUDA-enabled graphics card, e.g. Fermi
  • 9. How does CUDA work?   (Over the PCI bus:) 1. Copy input data from CPU memory to GPU memory. 2. Load the GPU program and execute, caching data on chip for performance. 3. Copy results from GPU memory to CPU memory.
  • 10. CUDA Kernels: Subdivide into Blocks   Threads are grouped into blocks   Blocks are grouped into a grid   A kernel is executed as a grid of blocks of threads
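The hierarchy above can be sketched as a minimal launch (the kernel name and sizes here are illustrative, not from the slides): each thread derives a unique global ID from its block and thread coordinates.

```cuda
// Illustrative sketch (names and sizes assumed): a kernel launched as a
// grid of blocks of threads; each thread computes its own global ID.
__global__ void write_ids(int *out)
{
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
    out[tid] = tid;
}

int main()
{
    const int threads_per_block = 8;
    const int num_blocks = 4;            // grid of 4 blocks of 8 threads
    int *d_out;
    cudaMalloc((void**)&d_out, num_blocks * threads_per_block * sizeof(int));

    // The kernel executes as a grid of blocks of threads.
    write_ids<<<num_blocks, threads_per_block>>>(d_out);
    cudaDeviceSynchronize();

    cudaFree(d_out);
    return 0;
}
```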
  • 11. Transparent Scalability – G80   (Diagram: blocks 1–8 execute first; blocks 9–12 queue.) As the maximum number of blocks is already executing on the GPU, blocks 9 – 12 will wait
  • 12. Transparent Scalability – GT200   (Diagram: all 12 blocks execute concurrently, leaving some processors idle.)
  • 13. Arrays of Parallel Threads   ALL threads run the same kernel code   Each thread has an ID that’s used to compute addresses & make control decisions. Every block (Block 0 … Block N-1, threads 0–7 each) runs the same parallel code:
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
    int shifted = input_array[tid] + shift_amount;
    if ( shifted > alphabet_max )
        shifted = shifted % (alphabet_max + 1);
    output_array[tid] = shifted;
  • 14. Compiling a CUDA program   NVCC splits a C/C++ CUDA application into CPU code and virtual PTX code; a PTX-to-target compiler then generates target code for a specific GPU (G80, …).   Parallel Thread eXecution (PTX) is a virtual machine and ISA: it defines the programming model and the execution resources and state. For example, the CUDA source
    float4 me = gx[gtid];
    me.x += me.y * me.z;
compiles to the PTX
    ld.global.v4.f32 {$f1,$f3,$f5,$f7}, [$r9+0];
    mad.f32 $f1, $f5, $f3, $f1;
  • 15. Example: Block Cypher
CPU Program:
    void host_shift_cypher(unsigned int *input_array, unsigned int *output_array,
                           unsigned int shift_amount, unsigned int alphabet_max,
                           unsigned int array_length)
    {
        for (unsigned int i = 0; i < array_length; i++) {
            int element = input_array[i];
            int shifted = element + shift_amount;
            if (shifted > alphabet_max)
                shifted = shifted % (alphabet_max + 1);
            output_array[i] = shifted;
        }
    }
    int main() {
        host_shift_cypher(input_array, output_array, shift_amount,
                          alphabet_max, array_length);
    }
GPU Program:
    __global__ void shift_cypher(unsigned int *input_array, unsigned int *output_array,
                                 unsigned int shift_amount, unsigned int alphabet_max,
                                 unsigned int array_length)
    {
        unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
        int shifted = input_array[tid] + shift_amount;
        if (shifted > alphabet_max)
            shifted = shifted % (alphabet_max + 1);
        output_array[tid] = shifted;
    }
    int main() {
        dim3 dimGrid(ceil(array_length / (float)block_size));
        dim3 dimBlock(block_size);
        shift_cypher<<<dimGrid, dimBlock>>>(input_array, output_array, shift_amount,
                                            alphabet_max, array_length);
    }
  • 16. I see some WEIRD syntax... is it still C?   CUDA C is an extension of C.   <<< Dg, Db, Ns, S >>> is the execution configuration for the call to a __global__ function; it defines the dimensions of the grid and blocks that’ll be used (dynamically allocated shared memory & stream are optional).   __global__ declares that a function is a kernel, which is executed on the GPU and callable from the host only. This call is asynchronous.   See the CUDA C Programming Guide.
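A launch using the full execution configuration might look like the following sketch (kernel name and sizes are assumed, not from the slides); it includes the optional dynamic shared memory size (Ns) and stream (S):

```cuda
// Illustrative sketch: <<<Dg, Db, Ns, S>>> with all four arguments.
__global__ void kernel(float *data)
{
    extern __shared__ float buf[];          // the Ns bytes show up here
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
    buf[threadIdx.x] = data[tid];
}

int main()
{
    dim3 grid(64), block(256);
    size_t shared_bytes = 256 * sizeof(float);  // Ns: dynamic shared memory

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    float *d_data;
    cudaMalloc((void**)&d_data, 64 * 256 * sizeof(float));

    // The call returns immediately (asynchronous); synchronize to wait.
    kernel<<<grid, block, shared_bytes, stream>>>(d_data);
    cudaStreamSynchronize(stream);

    cudaFree(d_data);
    cudaStreamDestroy(stream);
    return 0;
}
```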
  • 17. How does the CUDA Kernel get Data?   Allocate CPU memory for n integers e.g. malloc(…)   Allocate GPU memory for n integers e.g. cudaMalloc(…)   Copy the CPU memory to GPU memory for n integers e.g. cudaMemcpy(…, cudaMemcpyHostToDevice)   Copy the GPU memory back to CPU memory once computation is done e.g. cudaMemcpy(…, cudaMemcpyDeviceToHost)   Free the GPU memory e.g. cudaFree(…) and the CPU memory e.g. free(…)
  • 18. Example: Block Cypher (Host Code)
    #include <stdio.h>
    int main() {
        unsigned int num_bytes = sizeof(int) * (1 << 22);
        unsigned int *input_array = 0;
        unsigned int *output_array = 0;
        …
        cudaMalloc((void**)&input_array, num_bytes);
        cudaMalloc((void**)&output_array, num_bytes);
        cudaMemcpy(input_array, host_input_array, num_bytes, cudaMemcpyHostToDevice);
        …
        // gpu will compute the kernel and transfer the results out of the gpu to host
        cudaMemcpy(host_output_array, output_array, num_bytes, cudaMemcpyDeviceToHost);
        …
        // free the memory
        cudaFree(input_array);
        cudaFree(output_array);
    }
  • 19. Compiling the Block Cypher GPU Code   nvcc is the compiler and should be accessible from your PATH variable (UNIX: $PATH, Win: %PATH%); also set the dynamic library load path (UNIX: $LD_LIBRARY_PATH / $DYLD_LIBRARY_PATH).   nvcc block-cypher.cu -arch=sm_12 compiles the GPU code for the GPU architecture sm_12.   nvcc -g -G block-cypher.cu -arch=sm_12 compiles the program so that both the CPU and GPU code carry debug information.
  • 20. Debugger CUDA-GDB • Based on GDB • Linux • Mac OS X Parallel Nsight • Plugin inside Visual Studio
  • 21. Visual Profiler & Memcheck Profiler •  Microsoft Windows •  Linux •  Mac OS X •  Analyze Performance CUDA-MEMCHECK •  Microsoft Windows •  Linux •  Mac OS X •  Detect memory access errors
  • 22. Hints   Think about producing a serial algorithm that executes correctly on a CPU   Think about producing a parallel (CUDA/OpenCL) algorithm from that serial algorithm   Obtain an initial run time (call it the gold standard?)   Use the profiler to profile this initial run (typically it’s quite bad )   Fine-tune your code to take advantage of shared memory, improve memory coalescing, reduce shared memory conflicts etc. (consult the best practices guide & SDK)   Use the profiler to conduct cross comparisons
  • 23. Hints (Not exhaustive!)   Be aware of the trade-offs when your kernel becomes too complicated:   If you notice the kernel has a lot of local (thread) variables e.g. int i, float j : register spilling   If you notice the run time is still slow EVEN AFTER you’ve used shared memory, re-assess the memory access patterns : shared memory conflicts   TRY to reduce the number of conditionals e.g. ifs : thread divergence   TRY to unroll ANY loops in the kernel code e.g. #pragma unroll n   Don’t use thread blocks that are not a multiple of warpSize.
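Two of these hints can be shown in one small sketch (kernel name and sizes are assumed, not from the slides): a fixed-trip-count loop unrolled with #pragma unroll, launched with a block size that is a multiple of warpSize so no warp is partly empty.

```cuda
#define WARP_SIZE 32

// Illustrative kernel: each thread sums 4 consecutive inputs; the loop
// has a fixed trip count, so #pragma unroll can flatten it entirely.
__global__ void sum4(const float *in, float *out)
{
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
    float acc = 0.0f;
    #pragma unroll 4
    for (int i = 0; i < 4; i++)
        acc += in[tid * 4 + i];
    out[tid] = acc;
}

int main()
{
    const int threads = 4 * WARP_SIZE;   // 128: a multiple of warpSize
    const int blocks  = 16;
    float *d_in, *d_out;
    cudaMalloc((void**)&d_in,  blocks * threads * 4 * sizeof(float));
    cudaMalloc((void**)&d_out, blocks * threads * sizeof(float));

    sum4<<<blocks, threads>>>(d_in, d_out);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```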
  • 24. Other cool things in the CUDA SDK 4.0   GPUDirect   Unified Virtual Address Space   Multi-GPU   P2P Memory Access/Copy (gels with the UVA)   Concurrent Execution   Kernel + Data   Streams, Events   GPU Memories   Shared, Texture, Surface, Constant, Registers, Portable, Write-combining, Page-locked/ Pinned   OpenGL, Direct3D interoperability   Atomic functions, Fast Math Functions   Dynamic Global Memory Allocation (in-kernel)   Determine how much the device supports e.g. cudaDeviceGetLimit   Set it before you launch the kernel e.g. cudaDeviceSetLimit   Free it!
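The last bullets (in-kernel dynamic global memory allocation plus cudaDeviceGetLimit / cudaDeviceSetLimit) could be combined as in this sketch; the kernel and heap size are illustrative assumptions, not from the slides.

```cuda
#include <cstdio>

// Illustrative kernel: allocates global memory from inside the kernel
// and frees it before returning.
__global__ void inkernel_alloc(void)
{
    int *p = (int*)malloc(16 * sizeof(int));
    if (p) {
        p[0] = threadIdx.x;
        free(p);                      // free it!
    }
}

int main()
{
    size_t heap_size = 0;
    // Determine how much the device currently supports...
    cudaDeviceGetLimit(&heap_size, cudaLimitMallocHeapSize);
    printf("default malloc heap: %zu bytes\n", heap_size);

    // ...and set it before you launch the kernel.
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 8 * 1024 * 1024);

    inkernel_alloc<<<1, 32>>>();
    cudaDeviceSynchronize();
    return 0;
}
```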
  • 25. Additional Resources   CUDA FAQ (http://tegradeveloper.nvidia.com/cuda-faq)   CUDA Tools & Ecosystem (http://tegradeveloper.nvidia.com/cuda-tools-ecosystem)   CUDA Downloads (http://tegradeveloper.nvidia.com/cuda-downloads)   NVIDIA Forums (http://forums.nvidia.com/index.php?showforum=62)   GPGPU (http://gpgpu.org)   CUDA By Example by Jason Sanders & Edward Kandrot (http://tegradeveloper.nvidia.com/content/cuda-example-introduction-general-purpose-gpu-programming-0)   GPU Computing Gems Emerald Edition, Editor in Chief: Prof Hwu Wen-Mei (http://www.amazon.com/GPU-Computing-Gems-Emerald-Applications/dp/0123849888/)
  • 26. CUDA Libraries   Visit http://developer.nvidia.com/cuda-tools-ecosystem#Libraries   Thrust, CUFFT, CUBLAS, CUSP, NPP, OpenCV, GPU AI-Tree Search, GPU AI-Path Finding   A lot of the libraries are hosted in Google Code. Many more gems in there too!
  • 29. GPU memories: Shared   More than 1 Tbyte/sec aggregate memory bandwidth   Use it   As a cache   To reorganize global memory accesses into coalesced pattern   To share data between threads   16 kbytes per SM (Before Fermi)   64 kbytes per SM (Fermi)
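A minimal sketch of the "share data between threads" use (kernel name and sizes assumed, not from the slides): each block stages its elements in shared memory, then writes them back reversed within the block, so threads exchange data without a second round trip through global memory.

```cuda
#define BLOCK 256

// Illustrative kernel: stage a block's worth of data in shared memory,
// synchronize, then read it back in reversed order within the block.
__global__ void block_reverse(const int *in, int *out)
{
    __shared__ int tile[BLOCK];              // per-block shared memory
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;

    tile[threadIdx.x] = in[tid];
    __syncthreads();                         // all writes visible before reads

    out[tid] = tile[BLOCK - 1 - threadIdx.x];
}

int main()
{
    const int n = 4 * BLOCK;
    int *d_in, *d_out;
    cudaMalloc((void**)&d_in,  n * sizeof(int));
    cudaMalloc((void**)&d_out, n * sizeof(int));

    block_reverse<<<n / BLOCK, BLOCK>>>(d_in, d_out);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```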
  • 30. GPU memories: Texture   Texture is an object for reading data   Data is cached   Host actions: allocate memory on the GPU; create a texture memory reference object; bind the texture object to the memory; clean up after use   GPU actions: fetch using texture references tex1Dfetch(), tex1D(), tex2D(), tex3D()
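The host/GPU split above might look like this sketch using the texture reference API of this SDK era (names and sizes are assumptions; later CUDA versions replaced this API with texture objects):

```cuda
// Texture reference object: bound on the host, fetched through on the GPU.
texture<float, 1, cudaReadModeElementType> tex_ref;

// Illustrative kernel: reads go through the cached texture path.
__global__ void read_through_texture(float *out, int n)
{
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    if (tid < n)
        out[tid] = tex1Dfetch(tex_ref, tid);
}

int main()
{
    const int n = 1024;
    float *d_in, *d_out;
    cudaMalloc((void**)&d_in,  n * sizeof(float));   // allocate memory on GPU
    cudaMalloc((void**)&d_out, n * sizeof(float));

    // Bind the texture object to the allocated memory.
    cudaBindTexture(0, tex_ref, d_in, n * sizeof(float));

    read_through_texture<<<n / 256, 256>>>(d_out, n);
    cudaDeviceSynchronize();

    // Clean up after use.
    cudaUnbindTexture(tex_ref);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```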
  • 31. GPU memories: Constant   Write by host, read by GPU   Data is cached   Useful for tables of constants   64 kbytes
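A "table of constants" use might be sketched as follows (the polynomial kernel and coefficient values are illustrative assumptions): the host writes the table once with cudaMemcpyToSymbol, and every thread reads it through the constant cache.

```cuda
// Small lookup table in constant memory: written by host, read by GPU.
__constant__ float coeffs[4];

// Illustrative kernel: evaluates a cubic polynomial; when all threads in
// a warp read the same constant address, the read is broadcast.
__global__ void apply_coeffs(const float *in, float *out)
{
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
    float x = in[tid];
    out[tid] = coeffs[0] + x * (coeffs[1] + x * (coeffs[2] + x * coeffs[3]));
}

int main()
{
    float host_coeffs[4] = {1.0f, 0.5f, 0.25f, 0.125f};
    // Write by host; the cached copy is then read by the GPU.
    cudaMemcpyToSymbol(coeffs, host_coeffs, sizeof(host_coeffs));

    const int n = 1024;
    float *d_in, *d_out;
    cudaMalloc((void**)&d_in,  n * sizeof(float));
    cudaMalloc((void**)&d_out, n * sizeof(float));

    apply_coeffs<<<n / 256, 256>>>(d_in, d_out);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```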