Lecture 17
Massimiliano Fatica
NVIDIA Corporation
1
NVIDIA CUDA Libraries
CUDA Toolkit includes several libraries:
— CUFFT: Fourier transforms
— CUBLAS: Dense Linear Algebra
— CUSPARSE: Sparse Linear Algebra
— LIBM: Standard C Math library
— CURAND: Pseudo-random and Quasi-random numbers
— NPP: Image and Signal Processing
— Thrust: Template Library
2
NVIDIA CUDA Libraries
[Software stack diagram: Applications on top; the NVIDIA libraries (CUFFT, CUBLAS, CUSPARSE, Libm (math.h), CURAND, NPP, Thrust) and 3rd-party libraries (e.g., CUSP) in the middle; CUDA C/Fortran at the base.]
3
CUFFT Library
4
CUFFT Library Features
— FFT algorithms based on the Cooley-Tukey framework
5
CUFFT in 4 easy steps
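As an illustration, a minimal sketch of the typical CUFFT sequence (allocate device memory, create a plan, execute it, destroy it), here for a 1D complex-to-complex transform; the helper function name is hypothetical:

/* Hypothetical sketch of the usual CUFFT sequence: allocate, plan, execute, destroy. */
#include <cufft.h>
#include <cuda_runtime.h>

void fft_1d(int nx)
{
    cufftComplex *data;
    cudaMalloc((void**)&data, sizeof(cufftComplex)*nx);   /* 1. allocate device memory */

    cufftHandle plan;
    cufftPlan1d(&plan, nx, CUFFT_C2C, 1);                 /* 2. create a 1D C2C plan   */
    cufftExecC2C(plan, data, data, CUFFT_FORWARD);        /* 3. execute (in place)     */
    cufftDestroy(plan);                                   /* 4. destroy the plan       */

    cudaFree(data);
}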
6
Code example:
#define NX 256
#define NY 128
cufftHandle plan;
cufftComplex *idata, *odata;
cudaMalloc((void**)&idata, sizeof(cufftComplex)*NX*NY);
cudaMalloc((void**)&odata, sizeof(cufftComplex)*NX*NY);
…
/* Create a 2D FFT plan. */
cufftPlan2d(&plan, NX, NY, CUFFT_C2C);
/* Execute the transform out of place. */
cufftExecC2C(plan, idata, odata, CUFFT_FORWARD);
/* Destroy the plan and free device memory. */
cufftDestroy(plan);
cudaFree(idata); cudaFree(odata);
7
CUBLAS Library
Implementation of BLAS (Basic Linear Algebra Subprograms)
Self-contained at the API level
Supports all the BLAS functions
— Level 1 (vector-vector): O(N)
  AXPY: y = alpha*x + y
  DOT: dot = x.y
— Level 3 (matrix-matrix): O(N^3)
  General Matrix Multiplication: GEMM
  Triangular Solver: TRSM
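For instance, a Level-1 call through the legacy C API looks like this (a minimal sketch; the wrapper function is hypothetical and error checking is omitted):

/* Sketch: single-precision AXPY (y = alpha*x + y) via the legacy CUBLAS API. */
#include <cublas.h>

void axpy_on_gpu(int n, float alpha, const float *x_d, float *y_d)
{
    /* x_d and y_d are device pointers previously filled with data */
    cublasSaxpy(n, alpha, x_d, 1, y_d, 1);
}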
8
Using CUBLAS
Interface to the CUBLAS library is in cublas.h
Function naming convention:
cublas + BLAS name
E.g., cublasSgemm
Error handling:
CUBLAS core functions do not return an error status directly
CUBLAS provides a function to retrieve the last error recorded
CUBLAS helper functions do return an error status
Helper functions:
Memory allocation, data transfer
9
Calling CUBLAS from C
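A representative sketch of the full call sequence using the legacy API and its helper functions (illustrative, not the slide's original code; error handling abbreviated):

/* Sketch: C = alpha*A*B + beta*C with the legacy CUBLAS API and helper functions. */
#include <stdio.h>
#include <cublas.h>
#define N 16

int main(void)
{
    float A[N*N], B[N*N], C[N*N], *devA, *devB, *devC;
    for (int i = 0; i < N*N; i++) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 3.0f; }

    cublasInit();                                        /* initialize CUBLAS      */
    cublasAlloc(N*N, sizeof(float), (void**)&devA);      /* helper: device malloc  */
    cublasAlloc(N*N, sizeof(float), (void**)&devB);
    cublasAlloc(N*N, sizeof(float), (void**)&devC);
    cublasSetMatrix(N, N, sizeof(float), A, N, devA, N); /* helper: copy to device */
    cublasSetMatrix(N, N, sizeof(float), B, N, devB, N);
    cublasSetMatrix(N, N, sizeof(float), C, N, devC, N);

    cublasSgemm('n', 'n', N, N, N, 1.0f, devA, N, devB, N, 1.0f, devC, N);
    if (cublasGetError() != CUBLAS_STATUS_SUCCESS)       /* retrieve last error    */
        printf("SGEMM failed\n");

    cublasGetMatrix(N, N, sizeof(float), devC, N, C, N); /* helper: copy back      */
    cublasFree(devA); cublasFree(devB); cublasFree(devC);
    cublasShutdown();
    printf("C(1,1) = %f\n", C[0]);
    return 0;
}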
10
Calling CUBLAS from FORTRAN
Two interfaces:
Thunking
Allows interfacing to existing applications without any changes
During each call, the wrappers allocate GPU memory, copy source data from CPU memory space to GPU
memory space, call CUBLAS, and finally copy back the results to CPU memory space and deallocate the
GPGPU memory
Intended for light testing due to call overhead
Non-Thunking (default)
Intended for production code
Substitute device pointers for vector and matrix arguments in all BLAS functions
Existing applications need to be modified slightly to allocate and deallocate data structures in GPGPU
memory space (using CUBLAS_ALLOC and CUBLAS_FREE) and to copy data between GPU and CPU
memory spaces (using CUBLAS_SET_VECTOR, CUBLAS_GET_VECTOR, CUBLAS_SET_MATRIX, and
CUBLAS_GET_MATRIX)
11
SGEMM example (THUNKING)
program example_sgemm
! Define 3 single precision matrices A, B, C
real, allocatable :: A(:,:), B(:,:), C(:,:)
integer:: n=16
allocate (A(n,n),B(n,n),C(n,n))
! Initialize A, B and C
…
#ifdef CUBLAS
! Call SGEMM in CUBLAS library using THUNKING interface (library takes care of
! memory allocation on device and data movement)
call cublas_SGEMM('n','n', n,n,n,1.,A,n,B,n,1.,C,n)
#else
! Call SGEMM in host BLAS library
call SGEMM ('n','n', n,n,n,1.,A,n,B,n,1.,C,n)
#endif
print *,c(n,n)
end program example_sgemm
12
SGEMM example (NON-THUNKING)
program example_sgemm
real, allocatable :: A(:,:), B(:,:), C(:,:)
integer*8 :: devPtrA, devPtrB, devPtrC
integer :: n=16, size_of_real=4
allocate (A(n,n),B(n,n),C(n,n))
call cublas_Alloc(n*n,size_of_real, devPtrA)
call cublas_Alloc(n*n,size_of_real, devPtrB)
call cublas_Alloc(n*n,size_of_real, devPtrC)
! Initialize A, B and C
…
! Copy data to GPU
call cublas_Set_Matrix(n,n,size_of_real,A,n,devPtrA,n)
call cublas_Set_Matrix(n,n,size_of_real,B,n,devPtrB,n)
call cublas_Set_Matrix(n,n,size_of_real,C,n,devPtrC,n)
! Call SGEMM in CUBLAS library
call cublas_SGEMM('n','n', n,n,n,1.,devPtrA,n,devPtrB,n,1.,devPtrC,n)
! Copy data from GPU
call cublas_Get_Matrix(n,n,size_of_real,devPtrC,n,C,n)
print *,c(n,n)
call cublas_Free(devPtrA)
call cublas_Free(devPtrB)
call cublas_Free(devPtrC)
end program example_sgemm
13
Using CPU and GPU concurrently
Find the optimal split, knowing the relative performances of the GPU and CPU cores on DGEMM
14
Overlap DGEMM on CPU and GPU
// Copy A from CPU memory to GPU memory devA
status = cublasSetMatrix(m, k, sizeof(A[0]), A, lda, devA, m_gpu);
// Copy B1 (the GPU portion of B) from CPU memory to GPU memory devB
status = cublasSetMatrix(k, n_gpu, sizeof(B[0]), B, ldb, devB, k_gpu);
// Copy C1 (the GPU portion of C) from CPU memory to GPU memory devC
status = cublasSetMatrix(m, n_gpu, sizeof(C[0]), C, ldc, devC, m_gpu);
15
CUDA Libm features
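For illustration, device code simply calls the usual math.h names, and CUDA's Libm supplies the single- and double-precision implementations on the GPU (a minimal sketch; the kernel is hypothetical):

/* Sketch: standard libm calls inside a CUDA kernel. */
#include <stdio.h>
#include <math.h>
#include <cuda_runtime.h>

__global__ void mathKernel(double *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] = erf(sin(x[i])) + exp(-x[i] * x[i]);  /* standard math.h names */
}

int main(void)
{
    const int n = 256;
    double *d_x;
    cudaMalloc((void**)&d_x, n * sizeof(double));
    cudaMemset(d_x, 0, n * sizeof(double));
    mathKernel<<<1, n>>>(d_x, n);
    cudaDeviceSynchronize();
    cudaFree(d_x);
    return 0;
}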
16
CURAND Library
Features:
XORWOW pseudo-random generator
Sobol’ quasi-random number generators
Host API for generating random numbers in bulk
Inline implementation allows use inside GPU functions/kernels
Single- and double-precision, uniform, normal and log-normal distributions
17
CURAND use
1. Create a generator:
curandCreateGenerator()
2. Set a seed:
curandSetPseudoRandomGeneratorSeed()
3. Generate the data from a distribution:
curandGenerateUniform() / curandGenerateUniformDouble(): Uniform
curandGenerateNormal() / curandGenerateNormalDouble(): Gaussian
curandGenerateLogNormal() / curandGenerateLogNormalDouble(): Log-Normal
4. Destroy the generator:
curandDestroyGenerator()
18
Example CURAND Program: Host API
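A minimal sketch of the four-step Host API sequence, generating uniform floats into device memory:

/* Sketch: CURAND Host API generating random numbers in bulk on the device. */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <curand.h>

int main(void)
{
    const size_t n = 1024;
    float *devData, *hostData = (float*)malloc(n * sizeof(float));
    cudaMalloc((void**)&devData, n * sizeof(float));

    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);  /* 1. create    */
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);        /* 2. set seed  */
    curandGenerateUniform(gen, devData, n);                  /* 3. generate  */
    curandDestroyGenerator(gen);                             /* 4. destroy   */

    cudaMemcpy(hostData, devData, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first value: %f\n", hostData[0]);
    cudaFree(devData); free(hostData);
    return 0;
}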
19
Example CURAND Program: Run on CPU
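The same API can run the generator on the CPU by creating a host generator, so the output pointer is ordinary CPU memory (a sketch under that assumption):

/* Sketch: CURAND Host API with a CPU-side generator. */
#include <stdio.h>
#include <stdlib.h>
#include <curand.h>

int main(void)
{
    const size_t n = 1024;
    float *hostData = (float*)malloc(n * sizeof(float));

    curandGenerator_t gen;
    curandCreateGeneratorHost(&gen, CURAND_RNG_PSEUDO_DEFAULT); /* CPU generator */
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
    curandGenerateUniform(gen, hostData, n);  /* output pointer is host memory */
    curandDestroyGenerator(gen);

    printf("first value: %f\n", hostData[0]);
    free(hostData);
    return 0;
}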
20
NVIDIA Performance Primitives (NPP)
21
Image Processing Primitives
22
Thrust
A template library for CUDA
— Mimics the C++ STL
Containers
— Manage memory on host and device: thrust::host_vector<T>, thrust::device_vector<T>
— Help avoid common errors
Iterators
— Know where data lives
— Define ranges: d_vec.begin()
Algorithms
— Sorting, reduction, scan, etc: thrust::sort()
— Algorithms act on ranges and support general types and operators
23
Thrust Example
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/sort.h>
#include <cstdlib>
int main(void)
{
  // generate 32M random numbers on the host
  thrust::host_vector<int> h_vec(32 << 20);
  thrust::generate(h_vec.begin(), h_vec.end(), rand);
  // transfer data to the device
  thrust::device_vector<int> d_vec = h_vec;
  // sort data on the device (846M keys per sec on GeForce GTX 480)
  thrust::sort(d_vec.begin(), d_vec.end());
  return 0;
}
24
Algorithms
Elementwise operations
— for_each, transform, gather, scatter …
Reductions
— reduce, inner_product, reduce_by_key …
Prefix-Sums
— inclusive_scan, inclusive_scan_by_key …
Sorting
— sort, stable_sort, sort_by_key …
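For example, an elementwise transform followed by a reduction (an illustrative sketch, not from the slides):

// Sketch: thrust::transform (elementwise) followed by thrust::reduce (reduction).
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <iostream>

int main(void)
{
    thrust::device_vector<float> x(1000, 1.0f), y(1000, 2.0f);
    // elementwise: y <- x + y
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(), thrust::plus<float>());
    // reduction: sum of all elements of y
    float sum = thrust::reduce(y.begin(), y.end(), 0.0f, thrust::plus<float>());
    std::cout << sum << std::endl;  // prints 3000
    return 0;
}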
25
Interoperability (from Thrust to C/CUDA)
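The usual pattern extracts a raw device pointer with thrust::raw_pointer_cast and hands it to plain CUDA C (a sketch; myKernel is hypothetical):

// Sketch: pass memory owned by a thrust::device_vector to a plain CUDA kernel.
#include <thrust/device_vector.h>

__global__ void myKernel(int *data, int n)   // hypothetical kernel
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = i;
}

int main(void)
{
    thrust::device_vector<int> d_vec(1024);
    // extract the raw device pointer from the container
    int *raw_ptr = thrust::raw_pointer_cast(&d_vec[0]);
    myKernel<<<(1024 + 255)/256, 256>>>(raw_ptr, 1024);
    cudaDeviceSynchronize();
    return 0;
}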
26
Interoperability (from C/CUDA to Thrust)
Wrap raw pointers with device_ptr so Thrust algorithms can operate on memory allocated with cudaMalloc; the memory is still released with cudaFree(raw_ptr), as in the sketch below.
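A minimal self-contained version of this pattern (thrust::fill chosen here as the example algorithm):

// Sketch: wrap a cudaMalloc'd pointer so Thrust algorithms can use it.
#include <thrust/device_ptr.h>
#include <thrust/fill.h>
#include <cuda_runtime.h>

int main(void)
{
    const int N = 1024;
    int *raw_ptr;
    cudaMalloc((void**)&raw_ptr, N * sizeof(int));

    // wrap the raw pointer with a device_ptr
    thrust::device_ptr<int> dev_ptr(raw_ptr);
    // use the wrapped pointer in a Thrust algorithm
    thrust::fill(dev_ptr, dev_ptr + N, (int)0);

    // free memory
    cudaFree(raw_ptr);
    return 0;
}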
27
Introduction to CUDA Fortran
28
Introduction
• CUDA is a scalable model for parallel computing
29
CUDA Programming
• Heterogeneous programming model
– CPU and GPU are separate devices with separate memory spaces
– Host code runs on the CPU
• Handles data management for both host and device
• Launches kernels which are subroutines executed on the GPU
– Device code runs on the GPU
• Executed by many GPU threads in parallel
– Allows for incremental development
30
Heterogeneous Programming
• Host = CPU and its memory
• Device = GPU and its memory
[diagram: host (CPU + DRAM) connected to device (GPU + DRAM) via PCIe]
• Typical code progression
– Allocate memory on host and device
– Transfer data from host to device
– Execute kernel (device computation)
– Transfer result from device to host
– Deallocate memory
31
Data Transfers
program copyData
  use cudafor
  implicit none
  integer, parameter :: n = 256
  real :: a(n), b(n)
  real, device :: a_d(n), b_d(n)

  a = 1.0
  a_d = a    ! host-to-device transfer (over PCIe)
  b_d = a_d  ! device-to-device copy (within GPU DRAM)
  b = b_d    ! device-to-host transfer (over PCIe)
  if (all(a == b)) &
    write(*,*) 'Test Passed'
end program copyData
32
F90 Array Increment
module simpleOps_m
contains
  subroutine inc(a, b)
    implicit none
    integer :: a(:)
    integer :: b
    integer :: i, n

    n = size(a)
    do i = 1, n
      a(i) = a(i)+b
    enddo
  end subroutine inc
end module simpleOps_m

program incTest
  use simpleOps_m
  implicit none
  integer :: b, n = 256
  integer, allocatable :: a(:)

  allocate (a(n))
  a = 1  ! array assignment
  b = 3
  call inc(a, b)
  if (all(a == 4)) &
    write(*,*) 'Test Passed'
  deallocate(a)
end program incTest
37
CUDA Fortran - Host Code

F90:
program incTest
  use simpleOps_m
  implicit none
  integer :: b, n = 256
  integer, allocatable :: a(:)

  allocate (a(n))
  a = 1  ! array assignment
  b = 3
  call inc(a, b)
  if (all(a == 4)) &
    write(*,*) 'Test Passed'
  deallocate(a)
end program incTest

CUDA Fortran:
program incTest
  use cudafor
  use simpleOps_m
  implicit none
  integer :: b, n = 256
  integer, allocatable :: a(:)
  integer, allocatable, device :: a_d(:)

  allocate (a(n), a_d(n))
  a = 1
  b = 3
  a_d = a
  call inc<<<1,n>>>(a_d, b)
  a = a_d
  if (all(a == 4)) &
    write(*,*) 'Test Passed'
  deallocate (a, a_d)
end program incTest
38
CUDA Fortran - Device Code

F90:
module simpleOps_m
contains
  subroutine inc(a, b)
    implicit none
    integer :: a(:)
    integer :: b
    integer :: i, n

    n = size(a)
    do i = 1, n
      a(i) = a(i)+b
    enddo
  end subroutine inc
end module simpleOps_m

CUDA Fortran:
module simpleOps_m
contains
  attributes(global) subroutine inc(a, b)
    implicit none
    integer :: a(:)
    integer, value :: b
    integer :: i

    i = threadIdx%x
    a(i) = a(i)+b
  end subroutine inc
end module simpleOps_m
39
Extending to Larger Arrays
• The previous example works only for small arrays, since it launches a single block of n threads:
call inc<<<1,n>>>(a_d,b)
• A single thread block is limited in size (e.g., 1024 threads on Fermi), so larger arrays require launching multiple blocks
40
Execution Model
Software        Hardware
Thread          Thread Processor (threads are executed by thread processors)
Thread Block    Multiprocessor   (thread blocks are executed on multiprocessors)
Grid            Device           (a kernel is launched as a grid of thread blocks)
41
Execution Configuration
• Execution configuration is specified in host code
• One block of n threads:
call inc<<<1,n>>>(a_d,b)
• Enough blocks of tPB threads each to cover n elements:
tPB = 256
call inc<<<ceiling(real(n)/tPB),tPB>>>(a_d,b)
42
Mapping Arrays to Thread Blocks
• call inc<<<3,4>>>(a_d, b)   ! 3 blocks of 4 threads, so blockDim%x = 4

blockIdx%x:                  1          2          3
threadIdx%x:              1 2 3 4    1 2 3 4    1 2 3 4
global index
(blockIdx%x-1)*blockDim%x
+ threadIdx%x:            1 2 3 4    5 6 7 8    9 10 11 12
43
Large Array - Host Code
program incTest
use cudafor
use simpleOps_m
implicit none
integer, parameter :: n = 1024*1024
integer, parameter :: tPB = 256
integer :: a(n), b
integer, device :: a_d(n)
a = 1
b = 3
a_d = a
call inc<<<ceiling(real(n)/tPB),tPB>>>(a_d, b)
a = a_d
if (all(a == 4)) then
write(*,*) 'Success'
endif
end program incTest
44
Built-in Variables for Device Code
• Predefined variables in device subroutines
– Grid and block dimensions - gridDim, blockDim
– Block and thread indices - blockIdx, threadIdx
– Of type dim3
type (dim3)
integer (kind=4) :: x, y, z
end type
45
Large Array - Device Code
module simpleOps_m
contains
attributes(global) subroutine inc(a, b)
implicit none
integer :: a(:)
integer, value :: b
integer :: i, n
i = (blockIdx%x-1)*blockDim%x + threadIdx%x
n = size(a)
if (i <= n) a(i) = a(i)+ b
end subroutine inc
end module simpleOps_m
46
Multidimensional Example - Host
• Execution configuration can be specified with dim3 variables for the grid and block:
type (dim3)
integer (kind=4) :: x, y, z
end type
47
2D Example - Host Code
program incTest
use cudafor
use simpleOps_m
implicit none
integer, parameter :: nx=1024, ny=512
real :: a(nx,ny), b
real, device :: a_d(nx,ny)
type(dim3) :: grid, tBlock
a = 1; b = 3
tBlock = dim3(32,8,1)
grid = dim3(ceiling(real(nx)/tBlock%x), ceiling(real(ny)/tBlock%y), 1)
a_d = a
call inc<<<grid,tBlock>>>(a_d, b)
a = a_d
48
2D Example - Device Code
module simpleOps_m
contains
attributes(global) subroutine inc(a, b)
implicit none
real :: a(:,:)
real, value :: b
integer :: i, j
i = (blockIdx%x-1)*blockDim%x + threadIdx%x
j = (blockIdx%y-1)*blockDim%y + threadIdx%y
if (i <= size(a,1) .and. j <= size(a,2)) &
  a(i,j) = a(i,j) + b
end subroutine inc
end module simpleOps_m
49
Kernel Loop Directives (CUF Kernels)
• Automatic kernel generation and invocation for a region of host code
(arrays used in the loops must reside on the GPU)
program incTest
use cudafor
implicit none
integer, parameter :: n = 256
integer :: a(n), b, i
integer, device :: a_d(n)
a = 1; b = 3; a_d = a
!$cuf kernel do <<<*,*>>>
do i = 1, n
  a_d(i) = a_d(i) + b
enddo
a = a_d
if (all(a == 4)) write(*,*) 'Test Passed'
end program incTest
50
Kernel Loop Directives (CUF Kernels)
• Multidimensional arrays
!$cuf kernel do(2) <<< *,* >>>
do j = 1, ny
do i = 1, nx
a_d(i,j) = b_d(i,j) + c_d(i,j)
enddo
enddo
51
Reduction using CUF Kernels
• The compiler recognizes the use of a scalar reduction and generates a single
result
rsum = 0.0
!$cuf kernel do <<<*,*>>>
do i = 1, nx
rsum = rsum + a_d(i)
enddo
52
Compute Capabilities
[Table: feature support by architecture (Tesla, Fermi, Kepler), including double precision and 3D grids.]
54
Host-Device Transfers
• Host-device bandwidth is much lower than bandwidth within
device
– 8 GB/s peak (PCIe x16 Gen 2) vs. 177 GB/s peak (Tesla M2090)
55
Page-Locked Data Transfers
[diagram: a pageable transfer is staged through a pinned buffer on the host before crossing to device DRAM; a pinned transfer moves directly from pinned host memory to device DRAM.]
56
Page-Locked Data Transfers
• Page-locked (pinned) host memory is requested at declaration
– Designated by the pinned variable attribute
– Must be allocatable
• Measured transfer rates (Tesla M2050 / Nehalem):
– Pageable: ~3.5 GB/s
– Pinned: ~6 GB/s
57
Overlapping Transfers and Computation
• Kernel launches are asynchronous, normal memory copies are
blocking
a_d = a ! blocks on host until transfer completes
call inc<<<g,b>>>(a_d, b) ! Control returns immediately to CPU
a = a_d ! starts only after kernel completes
58
Asynchronous Data Transfers
• Asynchronous host-device transfers return control immediately
to CPU
– cudaMemcpyAsync(dst, src, nElements, stream)
– Requires pinned host memory
59
Overlapping Transfers and Kernels
• Requires:
– Pinned host memory
– Kernel and transfer to use different non-zero streams
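A C sketch of the pattern (illustrative; assumes a device with copy/compute overlap, and the kernel scale is a stand-in for real work):

/* Sketch: overlap a host-to-device copy in stream s1 with a kernel in stream s2. */
#include <cuda_runtime.h>

__global__ void scale(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 20;
    float *h_a, *d_a, *d_b;
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    cudaMallocHost((void**)&h_a, n * sizeof(float));  /* pinned host memory */
    cudaMalloc((void**)&d_a, n * sizeof(float));
    cudaMalloc((void**)&d_b, n * sizeof(float));
    cudaMemset(d_b, 0, n * sizeof(float));

    /* copy in stream s1 while the kernel works on different data in stream s2 */
    cudaMemcpyAsync(d_a, h_a, n * sizeof(float), cudaMemcpyHostToDevice, s1);
    scale<<<(n + 255)/256, 256, 0, s2>>>(d_b, n);
    cudaDeviceSynchronize();

    cudaFree(d_a); cudaFree(d_b); cudaFreeHost(h_a);
    cudaStreamDestroy(s1); cudaStreamDestroy(s2);
    return 0;
}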
60
GPU/CPU Synchronization
• cudaDeviceSynchronize()
– Blocks until all previously issued operations on the GPU complete
• cudaStreamSynchronize(stream)
– Blocks until all previously issued operations to stream complete
• cudaStreamQuery(stream)
– Indicates whether stream is idle
– Does not block CPU code
61
GPU/CPU Synchronization
• Stream-based using CUDA events
– Events can be inserted into streams
– cudaEventSynchronize(event)
• Blocks CPU until event is recorded
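In C, the event-based timing pattern looks like this (a sketch; the kernel busy is a hypothetical stand-in workload):

/* Sketch: timing a kernel with CUDA events inserted into the default stream. */
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void busy(float *x) { x[0] += 1.0f; }  /* stand-in workload */

int main(void)
{
    float *d_x, ms;
    cudaEvent_t start, stop;
    cudaMalloc((void**)&d_x, sizeof(float));
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);     /* insert event into the default stream */
    busy<<<1, 1>>>(d_x);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);    /* block CPU until 'stop' is recorded   */
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel took %f ms\n", ms);

    cudaEventDestroy(start); cudaEventDestroy(stop); cudaFree(d_x);
    return 0;
}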
62
Shared Memory
• On-chip
• All threads in a block have access to same shared memory
• Used to reduce multiple loads of device data
• Used to accommodate coalescing
63
Matrix Transpose
attributes(global) subroutine transposeNaive(odata, idata)
  real, intent(out) :: odata(ny,nx)
  real, intent(in) :: idata(nx,ny)
  integer :: x, y
  x = (blockIdx%x-1)*blockDim%x + threadIdx%x
  y = (blockIdx%y-1)*blockDim%y + threadIdx%y
  odata(y,x) = idata(x,y)
end subroutine transposeNaive
64
Matrix Transpose - Shared Memory
attributes(global) subroutine transposeCoalesced(odata, idata)
real, intent(out) :: odata(ny,nx)
real, intent(in) :: idata(nx,ny)
real, shared :: tile(TILE_DIM, TILE_DIM)
integer :: x, y
x = (blockIdx%x-1)*blockDim%x + threadIdx%x
y = (blockIdx%y-1)*blockDim%y + threadIdx%y
tile(threadIdx%x, threadIdx%y) = idata(x,y)
call syncthreads()
x = (blockIdx%y-1)*blockDim%y + threadIdx%x
y = (blockIdx%x-1)*blockDim%x + threadIdx%y
odata(x,y) = tile(threadIdx%y, threadIdx%x)
end subroutine transposeCoalesced
65
Calling CUBLAS from CUDA Fortran
• Module which defines interfaces to CUBLAS from CUDA Fortran
– use cublas
• Interfaces in three forms
– Overloaded BLAS interfaces that take device array arguments
• call saxpy(n, a_d, x_d, incx, y_d, incy)
– Legacy CUBLAS interfaces
• call cublasSaxpy(n, a_d, x_d, incx, y_d, incy)
– Multi-GPU version (CUDA 4.0) that utilizes a handle h
• istat = cublasSaxpy_v2(h, n, a_d, x_d, incx, y_d, incy)
• Mixing the three forms is allowed
66
Calling CUBLAS from CUDA Fortran
program cublasTest
use cublas
implicit none
a = 1; a_d = a
b = 2; b_d = b
c = 3; c_d = c
c=c_d
write(*,*) 'Maximum error: ', maxval(abs(c-14.0))
deallocate (a,b,c,a_d,b_d,c_d)
68
Calling Thrust from CUDA Fortran
Fortran interface to C wrapper using ISO C Bindings
module thrust

interface thrustsort

subroutine sort_int(input, N) &
    bind(C, name="sort_int_wrapper")
  use iso_c_binding
  integer(c_int), device :: input(*)
  integer(c_int), value :: N
end subroutine

subroutine sort_float(input, N) &
    bind(C, name="sort_float_wrapper")
  use iso_c_binding
  real(c_float), device :: input(*)
  integer(c_int), value :: N
end subroutine

subroutine sort_double(input, N) &
    bind(C, name="sort_double_wrapper")
  use iso_c_binding
  real(c_double), device :: input(*)
  integer(c_int), value :: N
end subroutine

end interface

end module
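A sketch of what the C side of these bindings presumably looks like (wrapper names taken from the bind(C) attributes above; the implementation is an assumption):

// Hypothetical C wrappers around thrust::sort, matching the bind(C) names above.
#include <thrust/device_ptr.h>
#include <thrust/sort.h>

extern "C" void sort_int_wrapper(int *data, int N)
{
    thrust::device_ptr<int> dev_ptr(data);   // wrap the raw device pointer
    thrust::sort(dev_ptr, dev_ptr + N);
}

extern "C" void sort_float_wrapper(float *data, int N)
{
    thrust::device_ptr<float> dev_ptr(data);
    thrust::sort(dev_ptr, dev_ptr + N);
}

extern "C" void sort_double_wrapper(double *data, int N)
{
    thrust::device_ptr<double> dev_ptr(data);
    thrust::sort(dev_ptr, dev_ptr + N);
}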
69
CUDA Fortran Sorting with Thrust
program testsort
  use thrust
  real, allocatable :: cpuData(:)
  real, allocatable, device :: gpuData(:)
  integer :: N = 10

  ! Allocate CPU and GPU arrays
  allocate(cpuData(N), gpuData(N))

  ! Fill the host array with random data
  do i = 1, N
    cpuData(i) = random(i)
  end do

  ! Print unsorted data
  print *, cpuData

  ! Send data to GPU
  gpuData = cpuData

  ! Sort the data
  call thrustsort(gpuData, N)

  ! Copy the result back
  cpuData = gpuData

  ! Print sorted data
  print *, cpuData

  ! Deallocate arrays
  deallocate(cpuData, gpuData)
end program testsort

$ ./testsort
Before sorting: 4.1630346E-02 0.9124327 0.7832350 0.6540373 100.0000 0.3956419 0.2664442 0.1372465 8.0488138E-03 0.8788511
After sorting:  8.0488138E-03 4.1630346E-02 0.1372465 0.2664442 0.3956419 0.6540373 0.7832350 0.8788511 0.9124327 100.0000
70
CUDA Fortran Sorting with Thrust
program timesort
  use cudafor
  use thrust
  real, allocatable :: cpuData(:)
  real, allocatable, device :: gpuData(:)
  integer :: N = 100000000, istat
  ! CUDA events for timing
  type (cudaEvent) :: startEvent, stopEvent
  real :: time, random

  ! Create events
  istat = cudaEventCreate(startEvent)
  istat = cudaEventCreate(stopEvent)

  ! Allocate CPU and GPU arrays
  allocate(cpuData(N), gpuData(N))

  ! Fill the host array with random data
  do i = 1, N
    cpuData(i) = random(i)
  end do

  ! Send data to GPU
  gpuData = cpuData

  ! Sort the data, timing the operation with events
  istat = cudaEventRecord(startEvent, 0)
  call thrustsort(gpuData, N)
  istat = cudaEventRecord(stopEvent, 0)
  istat = cudaEventSynchronize(stopEvent)
  istat = cudaEventElapsedTime(time, startEvent, stopEvent)

  ! Copy the result back
  cpuData = gpuData
  print *, " Sorted array in:", time, " (ms)"
  print *, "After sorting", cpuData(1:5), cpuData(N-4:N)

  ! Deallocate arrays
  deallocate(cpuData, gpuData)
end program timesort

$ ./timesort
Sorting array of 100000000 single precision
Sorted array in: 194.6642 (ms)
After sorting 7.0585919E-09 1.0318221E-08 1.9398616E-08 3.1738640E-08 4.4078664E-08 0.9999999 0.9999999 1.000000 1.000000 1.000000
71
Convolution Example
[figure: input data sets A and B]
72
Naive approach
  Transfer A | Transfer B | FFT(A) | FFT(B) | conv | IFFT(C) | Transfer C    (0.3s total)

Smarter approach
  Overlap the transfers with computation using streams.

Optimal approach
  Pipeline the frames across streams so transfers and computation overlap:
  H2D copy: A(1) B(1) | A(2) B(2) | ... | A(N) B(N)
  Compute:            | FFT(A(1)) FFT(B(1)) conv IFFT(C(1)) | ... | FFT(A(N)) FFT(B(N)) conv IFFT(C(N))
  D2H copy:                                          | C(1) | C(2) | ... | C(N)
73
program driver
  use cudafor
  use cufft
  implicit none
  complex, allocatable, dimension(:,:,:), pinned :: A, B
  complex, allocatable, dimension(:,:,:), device :: A_d, B_d
  ...
  do ifr = 1, nomega
    ind = mod(ifr, num_streams) + 1

    ! Send data to GPU
    istat = cudaMemcpyAsync(A_d(1,1,ind), A(1,1,ifr), nxx*nyy, stream(ind))
    istat = cudaMemcpyAsync(B_d(1,1,ind), B(1,1,ifr), nxx*nyy, stream(ind))
74
Computing π with CUDA Fortran
Simple example:
– Generate random numbers (CURAND)
– Compute the sum using a kernel loop directive
– Compute the sum using a two-stage reduction with CUDA Fortran kernels
– Compute the sum using a single-stage reduction with a CUDA Fortran kernel
– Accuracy
75
CUDA Libraries from CUDA Fortran
• All the toolkit libraries have C interfaces
• Use F90 interfaces and ISO C Binding to use from CUDA Fortran
interface curandGenerateUniform
!curandGenerateUniform(curandGenerator_t generator, float *outputPtr, size_t num);
subroutine curandGenerateUniform(generator, odata, numele) &
bind(C,name='curandGenerateUniform')
use iso_c_binding
integer(c_size_t),value:: generator
!pgi$ ignore_tkr odata
real(c_float), device:: odata(*)
integer(c_size_t),value:: numele
end subroutine curandGenerateUniform
76
Computing π with CUF Kernel
! Compute pi using a Monte Carlo method
program compute_pi
  use precision
  use cudafor    ! CUDA Fortran runtime
  use curand     ! CURAND interface
  implicit none
  real(fp_kind), allocatable, pinned :: hostData(:)
  real(fp_kind), allocatable, device :: deviceData(:)
  real(fp_kind) :: pival
  integer :: inside_cpu, inside, i, iter, Nhalf
  integer(kind=8) :: gen, N, seed=1234

  ! Define how many numbers we want to generate
  N = 2000
  Nhalf = N/2

  ! Allocate arrays on CPU and GPU
  allocate(hostData(N), deviceData(N))

  ! Create pseudonumber generator
  call curandCreateGenerator(gen, CURAND_RNG_PSEUDO_DEFAULT)

  ! Set seed
  call curandSetPseudoRandomGeneratorSeed(gen, seed)

  ! Generate N floats or doubles on device
  call curandGenerateUniform(gen, deviceData, N)

  ! Copy the data back to CPU to check result later
  hostData = deviceData

  ! Perform the test on GPU using CUF kernel
  inside = 0
  !$cuf kernel do <<<*,*>>>
  do i = 1, Nhalf
    if ((deviceData(i)**2 + deviceData(i+Nhalf)**2) <= 1._fp_kind) &
      inside = inside + 1
  end do

  ! Perform the test on CPU
  inside_cpu = 0
  do i = 1, Nhalf
    if ((hostData(i)**2 + hostData(i+Nhalf)**2) <= 1._fp_kind) &
      inside_cpu = inside_cpu + 1
  end do

  ! Check the results
  if (inside_cpu .ne. inside) &
    print *, "Mismatch between CPU/GPU", inside_cpu, inside

  ! Print the value of pi and the error
  pival = 4._fp_kind*real(inside, fp_kind)/real(Nhalf, fp_kind)
  print "(t3,a,i10,a,f10.8,a,e11.4)", "Samples=", Nhalf, &
    " Pi=", pival, " Error=", abs(pival-2.0_fp_kind*asin(1.0_fp_kind))

  ! Deallocate data on CPU and GPU
  deallocate(hostData, deviceData)

  ! Destroy the generator
  call curandDestroyGenerator(gen)
end program compute_pi
77
Computing π
pgf90 -O3 -Mpreprocess -o pi_gpu precision_module.cuf curand_module.cuf pi.cuf -lcurand
Compute pi in single precision (seed 1234):
  Samples=     10000  Pi=3.11120009  Error= 0.3039E-01
  Samples=    100000  Pi=3.13632011  Error= 0.5273E-02
  Samples=   1000000  Pi=3.14056396  Error= 0.1029E-02
  Samples=  10000000  Pi=3.14092445  Error= 0.6683E-03
  Samples= 100000000  Pi=3.14158082  Error= 0.1192E-04

Compute pi in single precision (seed 1234567):
  Samples=     10000  Pi=3.16720009  Error= 0.2561E-01
  Samples=    100000  Pi=3.13919997  Error= 0.2393E-02
  Samples=   1000000  Pi=3.14109206  Error= 0.5007E-03
  Samples=  10000000  Pi=3.14106607  Error= 0.5267E-03
  Mismatch between CPU/GPU 78534862 78534859
  Samples= 100000000  Pi=3.14139414  Error= 0.1986E-03

- The sum of the points inside the circle is done with integers (no issues due to floating-point arithmetic)
- The computation of the distance from the origin (x*x+y*y) uses no special functions, just + and *
78
GPU Accuracy
• FERMI GPUs are IEEE-754 compliant, both for single and double precision
• Support for Fused Multiply-Add instruction (IEEE 754-2008)
• Results with FMA could be different* from results without FMA
• In CUDA Fortran it is possible to toggle FMA on/off with a compiler switch:
-Mcuda=nofma
• Extremely useful to compare results to “golden” CPU output
• FMA will also be supported by future CPUs
Compute pi in single precision (seed=1234567 FMA disabled)
Samples= 10000 Pi=3.16720009 Error= 0.2561E-01
Samples= 100000 Pi=3.13919997 Error= 0.2393E-02
Samples= 1000000 Pi=3.14109206 Error= 0.5007E-03
Samples= 10000000 Pi=3.14106607 Error= 0.5267E-03
Samples= 100000000 Pi=3.14139462 Error= 0.1981E-03
*GPU results with FMA are identical to CPU if operations are done in double precision
79
Reductions on GPU
• Parallelism across blocks
• Parallelism within a block
• No global synchronization
– two-stage approach (two kernel launches), same code for both stages

Tree reduction of 3 1 7 0 4 1 6 3:
  3 1 7 0 4 1 6 3
   4   7   5   9
    11      14
        25

Level 0: 8 blocks, each reducing its portion of the data to a partial sum
Level 1: 1 block, reducing the 8 partial sums to the final result
80
Parallel Reduction: Sequential Addressing
At each step, thread i adds the value one stride away: Values(i) = Values(i) + Values(i+stride)

Initial values:                 10  1  8 -1  0 -2  3  5 -2 -3  2  7  0 11  0  2
Step 1 (stride 8, threads 1-8):  8 -2 10  6  0  9  3  7 -2 -3  2  7  0 11  0  2
Step 2 (stride 4, threads 1-4):  8  7 13 13  0  9  3  7 -2 -3  2  7  0 11  0  2
Step 3 (stride 2, threads 1-2): 21 20 13 13  0  9  3  7 -2 -3  2  7  0 11  0  2
Step 4 (stride 1, thread 1):    41 20 13 13  0  9  3  7 -2 -3  2  7  0 11  0  2
81
Computing π with CUDA Fortran Kernels
attributes(global) subroutine partial_sum(input, partial, N)
  real(fp_kind) :: input(N)
  integer :: partial(256)
  integer, shared, dimension(512) :: psum
  integer(kind=8), value :: N
  integer :: i, index, inext, interior

  index = threadIdx%x + (BlockIdx%x-1)*BlockDim%x

  ! Check if the point is inside the circle
  ! and increment local counter
  interior = 0
  do i = index, N/2, BlockDim%x*GridDim%x
    if ((input(i)**2 + input(i+N/2)**2) <= 1._fp_kind) &
      interior = interior + 1
  end do

  ! Local reduction per block
  index = threadIdx%x
  psum(index) = interior
  call syncthreads()

  inext = blockDim%x/2
  do while (inext >= 1)
    if (index <= inext) psum(index) = &
      psum(index) + psum(index+inext)
    inext = inext/2
    call syncthreads()
  end do

  ! Each block writes back its partial sum
  if (index == 1) partial(BlockIdx%x) = psum(1)
end subroutine

! Compute the partial sums with 256 blocks of 512 threads
call partial_sum<<<256,512,512*4>>>(deviceData, partial, N)
! Compute the final sum with 1 block of 256 threads
call final_sum<<<1,256,256*4>>>(partial, inside_gpu)
82
Computing π with CUDA Fortran Kernels
attributes(global) subroutine final_sum(partial, total)
  integer, intent(in) :: partial(256)
  integer, intent(out) :: total
  integer, shared :: psum(*)
  integer :: index, inext

  index = threadIdx%x

  ! Load the partial sums into shared memory
  psum(index) = partial(index)
  call syncthreads()

  inext = blockDim%x/2
  do while (inext >= 1)
    if (index <= inext) psum(index) = psum(index) + psum(index+inext)
    inext = inext/2
    call syncthreads()
  end do

  ! First thread has the total sum, writes it back to global memory
  if (index == 1) total = psum(1)
end subroutine
83
Computing π with an Atomic Lock
Instead of storing back the partial sum:
! Each block writes back its partial sum
if (index == 1) partial(BlockIdx%x)=psum(1)
use an atomic lock to ensure that one block at a time updates the final sum:
if (index == 1) then
do while ( atomiccas(lock,0,1) == 1) !set lock
end do
partial(1)=partial(1)+psum(1) ! atomic update of partial(1)
call threadfence() ! Wait for memory transaction to be visible to all the other threads
lock =0 ! release lock
end if
! Host code:
partial(1) = 0
call sum<<<64,256,256*4>>>(deviceData, partial, N)
inside = partial(1)
84