
The Open Computing Language (OpenCL)

Mike Bailey
[email protected]

Oregon State University



Computer Graphics
opencl.pptx

mjb April 24, 2014

OpenCL

Consists of two parts: a C/C++-callable API and an almost-C programming language. It was originally proposed by Apple, but now is a multi-vendor standard.

The programming language can run on NVIDIA GPUs, AMD GPUs, Intel CPUs, and (supposedly) FPGAs. But, OpenCL is at its best on compute devices with large amounts of data parallelism, which usually implies using GPUs.

You break your computational problem up into small pieces. Each piece gets farmed out to threads on the GPU in a SPMD way.

OpenCL can share data, and interoperate with, OpenGL.

There is a JavaScript implementation of OpenCL, called WebCL.
There is a JavaScript implementation of OpenGL, called WebGL.
WebCL can share data, and interoperate with, WebGL.

The OpenCL programming language can do neither recursion nor function pointers.


Who Is Behind OpenCL?


Members of the Khronos OpenCL Working Group


Qualcomm Node: Full Linux and OpenCL


OpenCL Vendor-independent GPU Programming

[Diagram: Your OpenCL Code is fed to the AMD Compiler and Linker (OpenCL for AMD/ATI GPU Systems), or the NVIDIA Compiler and Linker (OpenCL for NVIDIA GPU Systems), or the Intel Compiler and Linker (OpenCL for Intel Systems).]

This happens in the vendor-specific driver.



The OpenCL Programming Environment

[Diagram: a C/C++ program plus OpenCL code. The C/C++ code goes through a compiler and linker to become a CPU binary on the host. The OpenCL code goes through a compiler and linker to become an OpenCL binary on the GPU.]


OpenCL wants you to break the problem up into Pieces

If you were writing in C/C++, you would say:

void
ArrayMult( int n, float *a, float *b, float *c )
{
    for( int i = 0; i < n; i++ )
        c[i] = a[i] * b[i];
}

If you were writing in OpenCL, you would say:

kernel
void
ArrayMult( global float *dA, global float *dB, global float *dC )
{
    int gid = get_global_id( 0 );
    dC[gid] = dA[gid] * dB[gid];
}

This is basically PCAM with lots of P and little A.



The OpenCL Language also supports Vector Parallelism

OpenCL code can be vector-oriented, meaning that it can perform a single instruction on
multiple data values at the same time (SIMD).
Vector data types are: charn, intn, floatn, where n = 2, 4, 8, or 16.
float4 f, g;
f = (float4)( 1.f, 2.f, 3.f, 4.f );

float16 a16, x16, y16, z16;

f.x = 0.;
f.xy = g.zw;
x16.s89ab = f;

a16 = x16 * y16 + z16;

(Note: just because the language supports it, doesn't mean the hardware does.)
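For example, the ArrayMult kernel shown later in these notes could be written in a vectorized form, with each work-item handling four elements at once. This sketch is illustrative, not from the slides; it assumes the array length is a multiple of 4 and that the buffers are viewed as float4:

kernel
void
ArrayMult4( global const float4 *dA, global const float4 *dB, global float4 *dC )
{
    int gid = get_global_id( 0 );          // one work-item per group of 4 elements
    dC[ gid ] = dA[ gid ] * dB[ gid ];     // four multiplies in one vector operation
}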

From the GPU101 Notes:


Compute Units and Processing Elements are Arranged in Grids

A GPU Device is organized as a grid of Compute Units.
Each Compute Unit is organized as a grid of Processing Elements.

So in NVIDIA terms, a Fermi Device has 16 Compute Units and each Compute Unit has 32 Processing Elements.

[Diagram: a Device drawn as a grid of Compute Units (CUs); one Compute Unit expanded into a grid of Processing Elements (PEs).]

Work-Groups and Work-Items are Arranged in Grids


An OpenCL program is organized as a grid of Work-Groups.
Each Work-Group is organized as a grid of Work-Items.

In terms of hardware, a Work-Group runs on a Compute Unit and a Work-Item runs on a Processing Element.
In terms of software, threads are swapped on and off the PEs.

[Diagram: a Grid of Work-Groups 0-5; Work-Group 4 expanded into a grid of Work-Items 0-14.]

OpenCL Memory Model


[Diagram:]

Kernel level:      Global Memory and Constant Memory, visible to every Work-Group.
Work-Group level:  each Work-Group has its own Local Memory.
Work-Item level:   each Work-Item has its own Private Memory.

Rules
Threads can share memory with the other Threads in the same Work-Group.
Threads can synchronize with other Threads in the same Work-Group.
Global and Constant memory is accessible by all Threads in all Work-Groups.
Global and Constant memory is often cached inside a Work-Group.
Each Thread has registers and private memory.
Each Work-Group has a maximum number of registers it can use. These are divided equally among all its Threads.
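A minimal kernel sketch (not from the slides) of what these rules look like in practice: the work-items in a work-group share a local array and synchronize on a barrier. The kernel name, the dSums buffer, and the reduction itself are made up for illustration, and the work-group size is assumed to be a power of two:

// hypothetical kernel: each work-group sums its own chunk of dA into dSums[ group ]
kernel
void
GroupSums( global const float *dA, global float *dSums, local float *prods )
{
    int lid = get_local_id( 0 );
    int gid = get_global_id( 0 );

    prods[ lid ] = dA[ gid ];                  // local memory is shared by the whole work-group
    barrier( CLK_LOCAL_MEM_FENCE );            // synchronize with the other work-items in this work-group

    for( int offset = get_local_size(0)/2; offset > 0; offset /= 2 )
    {
        if( lid < offset )
            prods[ lid ] += prods[ lid + offset ];
        barrier( CLK_LOCAL_MEM_FENCE );
    }

    if( lid == 0 )
        dSums[ get_group_id(0) ] = prods[ 0 ];
}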


Querying the Number of Platforms (usually one)


cl_uint numPlatforms;
status = clGetPlatformIDs( 0, NULL, &numPlatforms );
if( status != CL_SUCCESS )
    fprintf( stderr, "clGetPlatformIDs failed (1)\n" );

fprintf( stderr, "Number of Platforms = %d\n", numPlatforms );

cl_platform_id * platforms = new cl_platform_id[ numPlatforms ];
status = clGetPlatformIDs( numPlatforms, platforms, NULL );
if( status != CL_SUCCESS )
    fprintf( stderr, "clGetPlatformIDs failed (2)\n" );

This way of querying information is a recurring OpenCL pattern:

status = clGetPlatformIDs( 0,                   // how many to get
                           NULL,                // where to put them
                           &numPlatforms );     // how many total there are

status = clGetPlatformIDs( numPlatforms,        // how many to get
                           platforms,           // where to put them
                           NULL );


OpenCL Error Codes

CL_SUCCESS is #defined as zero.  All the others are negative.

CL_SUCCESS
CL_DEVICE_NOT_FOUND
CL_DEVICE_NOT_AVAILABLE
CL_COMPILER_NOT_AVAILABLE
CL_MEM_OBJECT_ALLOCATION_FAILURE
CL_OUT_OF_RESOURCES
CL_OUT_OF_HOST_MEMORY
CL_PROFILING_INFO_NOT_AVAILABLE
CL_MEM_COPY_OVERLAP
CL_IMAGE_FORMAT_MISMATCH
CL_IMAGE_FORMAT_NOT_SUPPORTED
CL_BUILD_PROGRAM_FAILURE
CL_MAP_FAILURE
CL_INVALID_VALUE
CL_INVALID_DEVICE_TYPE
CL_INVALID_PLATFORM
CL_INVALID_DEVICE
CL_INVALID_CONTEXT
CL_INVALID_QUEUE_PROPERTIES
CL_INVALID_COMMAND_QUEUE
CL_INVALID_HOST_PTR
CL_INVALID_MEM_OBJECT
CL_INVALID_IMAGE_FORMAT_DESCRIPTOR
CL_INVALID_IMAGE_SIZE
CL_INVALID_SAMPLER
CL_INVALID_BINARY
CL_INVALID_BUILD_OPTIONS
CL_INVALID_PROGRAM
CL_INVALID_PROGRAM_EXECUTABLE
CL_INVALID_KERNEL_NAME
CL_INVALID_KERNEL_DEFINITION
CL_INVALID_KERNEL
CL_INVALID_ARG_INDEX
CL_INVALID_ARG_VALUE
CL_INVALID_ARG_SIZE
CL_INVALID_KERNEL_ARGS
CL_INVALID_WORK_DIMENSION


A Way to Print OpenCL Error Codes (get it from the Class Web Site)

struct errorcode
{
    cl_int      statusCode;
    char *      meaning;
}
ErrorCodes[ ] =
{
    { CL_SUCCESS,                   ""                          },
    { CL_DEVICE_NOT_FOUND,          "Device Not Found"          },
    { CL_DEVICE_NOT_AVAILABLE,      "Device Not Available"      },
    . . .
    { CL_INVALID_MIP_LEVEL,         "Invalid MIP Level"         },
    { CL_INVALID_GLOBAL_WORK_SIZE,  "Invalid Global Work Size"  },
};

void
PrintCLError( cl_int errorCode, char * prefix, FILE *fp )
{
    if( errorCode == CL_SUCCESS )
        return;

    const int numErrorCodes = sizeof( ErrorCodes ) / sizeof( struct errorcode );
    char * meaning = "";
    for( int i = 0; i < numErrorCodes; i++ )
    {
        if( errorCode == ErrorCodes[i].statusCode )
        {
            meaning = ErrorCodes[i].meaning;
            break;
        }
    }

    fprintf( fp, "%s %s\n", prefix, meaning );
}
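Typical usage (the call sites here are illustrative, not from the slides) is to call it right after any OpenCL function that returns or fills in a status:

status = clGetPlatformIDs( 0, NULL, &numPlatforms );
PrintCLError( status, "clGetPlatformIDs: ", stderr );       // prints nothing if status == CL_SUCCESS

cl_mem dA = clCreateBuffer( context, CL_MEM_READ_ONLY, dataSize, NULL, &status );
PrintCLError( status, "clCreateBuffer (dA): ", stderr );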


Querying the Number of Devices on a Platform

// find out how many devices are attached to each platform and get their ids:

status = clGetDeviceIDs( platform, CL_DEVICE_TYPE_ALL, 0, NULL, &numDevices );

devices = new cl_device_id[ numDevices ];
status = clGetDeviceIDs( platform, CL_DEVICE_TYPE_ALL, numDevices, devices, NULL );

Getting Just the GPU Device

cl_device_id device;
status = clGetDeviceIDs( platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL );


Querying the Device (this is really useful!)


// find out how many platforms are attached here and get their ids:

cl_uint numPlatforms;
status = clGetPlatformIDs( 0, NULL, &numPlatforms );
if( status != CL_SUCCESS )
    fprintf( stderr, "clGetPlatformIDs failed (1)\n" );

fprintf( OUTPUT, "Number of Platforms = %d\n", numPlatforms );

cl_platform_id *platforms = new cl_platform_id[ numPlatforms ];
status = clGetPlatformIDs( numPlatforms, platforms, NULL );
if( status != CL_SUCCESS )
    fprintf( stderr, "clGetPlatformIDs failed (2)\n" );

cl_uint numDevices;
cl_device_id *devices;

for( int i = 0; i < (int)numPlatforms; i++ )
{
    fprintf( OUTPUT, "Platform #%d:\n", i );
    size_t size;
    char *str;

    clGetPlatformInfo( platforms[i], CL_PLATFORM_NAME, 0, NULL, &size );
    str = new char [ size ];
    clGetPlatformInfo( platforms[i], CL_PLATFORM_NAME, size, str, NULL );
    fprintf( OUTPUT, "\tName    = '%s'\n", str );
    delete[ ] str;

    clGetPlatformInfo( platforms[i], CL_PLATFORM_VENDOR, 0, NULL, &size );
    str = new char [ size ];
    clGetPlatformInfo( platforms[i], CL_PLATFORM_VENDOR, size, str, NULL );
    fprintf( OUTPUT, "\tVendor  = '%s'\n", str );
    delete[ ] str;

    clGetPlatformInfo( platforms[i], CL_PLATFORM_VERSION, 0, NULL, &size );
    str = new char [ size ];
    clGetPlatformInfo( platforms[i], CL_PLATFORM_VERSION, size, str, NULL );
    fprintf( OUTPUT, "\tVersion = '%s'\n", str );
    delete[ ] str;

    clGetPlatformInfo( platforms[i], CL_PLATFORM_PROFILE, 0, NULL, &size );
    str = new char [ size ];
    clGetPlatformInfo( platforms[i], CL_PLATFORM_PROFILE, size, str, NULL );
    fprintf( OUTPUT, "\tProfile = '%s'\n", str );
    delete[ ] str;

    // find out how many devices are attached to each platform and get their ids:
    status = clGetDeviceIDs( platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &numDevices );
    if( status != CL_SUCCESS )
        fprintf( stderr, "clGetDeviceIDs failed (2)\n" );

    devices = new cl_device_id[ numDevices ];
    status = clGetDeviceIDs( platforms[i], CL_DEVICE_TYPE_ALL, numDevices, devices, NULL );
    if( status != CL_SUCCESS )
        fprintf( stderr, "clGetDeviceIDs failed (2)\n" );

    for( int j = 0; j < (int)numDevices; j++ )
    {
        fprintf( OUTPUT, "\tDevice #%d:\n", j );
        size_t size;
        cl_device_type type;
        cl_uint ui;
        size_t sizes[3] = { 0, 0, 0 };

        clGetDeviceInfo( devices[j], CL_DEVICE_TYPE, sizeof(type), &type, NULL );
        fprintf( OUTPUT, "\t\tType = 0x%04x = ", type );
        switch( type )
        {
            case CL_DEVICE_TYPE_CPU:
                fprintf( OUTPUT, "CL_DEVICE_TYPE_CPU\n" );
                break;
            case CL_DEVICE_TYPE_GPU:
                fprintf( OUTPUT, "CL_DEVICE_TYPE_GPU\n" );
                break;
            case CL_DEVICE_TYPE_ACCELERATOR:
                fprintf( OUTPUT, "CL_DEVICE_TYPE_ACCELERATOR\n" );
                break;
            default:
                fprintf( OUTPUT, "Other...\n" );
                break;
        }

        clGetDeviceInfo( devices[j], CL_DEVICE_VENDOR_ID, sizeof(ui), &ui, NULL );
        fprintf( OUTPUT, "\t\tDevice Vendor ID = 0x%04x\n", ui );

        clGetDeviceInfo( devices[j], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(ui), &ui, NULL );
        fprintf( OUTPUT, "\t\tDevice Maximum Compute Units = %d\n", ui );

        clGetDeviceInfo( devices[j], CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS, sizeof(ui), &ui, NULL );
        fprintf( OUTPUT, "\t\tDevice Maximum Work Item Dimensions = %d\n", ui );

        clGetDeviceInfo( devices[j], CL_DEVICE_MAX_WORK_ITEM_SIZES, sizeof(sizes), sizes, NULL );
        fprintf( OUTPUT, "\t\tDevice Maximum Work Item Sizes = %d x %d x %d\n", sizes[0], sizes[1], sizes[2] );

        clGetDeviceInfo( devices[j], CL_DEVICE_MAX_WORK_GROUP_SIZE, sizeof(size), &size, NULL );
        fprintf( OUTPUT, "\t\tDevice Maximum Work Group Size = %d\n", size );

        clGetDeviceInfo( devices[j], CL_DEVICE_MAX_CLOCK_FREQUENCY, sizeof(ui), &ui, NULL );
        fprintf( OUTPUT, "\t\tDevice Maximum Clock Frequency = %d MHz\n", ui );
    }
}

Typical Values from Querying the Device

Number of Platforms = 1
Platform #0:
	Name = 'NVIDIA CUDA'
	Vendor = 'NVIDIA Corporation'
	Version = 'OpenCL 1.1 CUDA 4.1.1'
	Profile = 'FULL_PROFILE'
	Device #0:
		Type = 0x0004 = CL_DEVICE_TYPE_GPU
		Device Vendor ID = 0x10de
		Device Maximum Compute Units = 15
		Device Maximum Work Item Dimensions = 3
		Device Maximum Work Item Sizes = 1024 x 1024 x 64
		Device Maximum Work Group Size = 1024
		Device Maximum Clock Frequency = 1401 MHz

		Kernel Maximum Work Group Size = 1024
		Kernel Compile Work Group Size = 0 x 0 x 0
		Kernel Local Memory Size = 0


Querying to see what extensions are supported on this device

size_t extensionSize;
clGetDeviceInfo( device, CL_DEVICE_EXTENSIONS, 0, NULL, &extensionSize );

char *extensions = new char [ extensionSize ];
clGetDeviceInfo( device, CL_DEVICE_EXTENSIONS, extensionSize, extensions, NULL );

fprintf( stderr, "\nDevice Extensions:\n" );
for( int i = 0; i < (int)strlen(extensions); i++ )
{
    if( extensions[ i ] == ' ' )
        extensions[ i ] = '\n';
}
fprintf( stderr, "%s\n", extensions );

delete [ ] extensions;
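Before the spaces are replaced with newlines, the extensions string can also be searched directly for a particular extension. A small illustrative check (not from the slides):

if( strstr( extensions, "cl_khr_gl_sharing" ) != NULL )
    fprintf( stderr, "This device can share data with OpenGL\n" );

if( strstr( extensions, "cl_khr_fp64" ) != NULL )
    fprintf( stderr, "This device supports double-precision floating point\n" );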


Querying to see what extensions are supported on this device

Device Extensions:
cl_khr_byte_addressable_store
cl_khr_icd
cl_khr_gl_sharing			<- this is the big one you are looking for
cl_nv_d3d9_sharing
cl_nv_d3d10_sharing
cl_khr_d3d10_sharing
cl_nv_d3d11_sharing
cl_nv_compiler_options
cl_nv_device_attribute_query
cl_nv_pragma_unroll
cl_khr_global_int32_base_atomics
cl_khr_global_int32_extended_atomics
cl_khr_local_int32_base_atomics
cl_khr_local_int32_extended_atomics
cl_khr_fp64				<- this one is handy too


Steps in Creating and Running an OpenCL program

1.  Program header
2.  Allocate the host memory buffers
3.  Create an OpenCL context
4.  Create an OpenCL command queue
5.  Allocate the device memory buffers
6.  Write the data from the host buffers to the device buffers
7.  Read the kernel code from a file
8.  Compile and link the kernel code
9.  Create the kernel object
10. Setup the arguments to the kernel object
11. Enqueue the kernel object for execution
12. Read the results buffer back from the device to the host
13. Clean everything up


1. .cpp Program Header

#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdlib.h>
#include <omp.h>		// for timing

#include "cl.h"


2. Allocate the Host Memory Buffers

// allocate the host memory buffers:

float * hA = new float [ NUM_ELEMENTS ];
float * hB = new float [ NUM_ELEMENTS ];
float * hC = new float [ NUM_ELEMENTS ];

It's being done this way instead of:

	float hA[ NUM_ELEMENTS ];

because the heap usually has more space than the stack.

// fill the host memory buffers:
for( int i = 0; i < NUM_ELEMENTS; i++ )
{
    hA[ i ] = hB[ i ] = sqrt( (float) i );
}

// array size in bytes (will need this later):
size_t dataSize = NUM_ELEMENTS * sizeof( float );

// opencl function return status:
cl_int status;		// test against CL_SUCCESS


3. Create an OpenCL Context


// create a context:

cl_context context = clCreateContext( NULL,		// properties
                                      1,		// one device
                                      &device,		// the device
                                      NULL,		// callback
                                      NULL,		// pass-in user data
                                      &status );	// returned status


4. Create an OpenCL Command Queue

// create a command queue:

cl_command_queue cmdQueue = clCreateCommandQueue( context,	// the context
                                                  device,	// the device
                                                  0,		// properties
                                                  &status );	// returned status


5. Allocate the Device Memory Buffers

// allocate memory buffers on the device:

cl_mem dA = clCreateBuffer( context, CL_MEM_READ_ONLY,  dataSize, NULL, &status );
cl_mem dB = clCreateBuffer( context, CL_MEM_READ_ONLY,  dataSize, NULL, &status );
cl_mem dC = clCreateBuffer( context, CL_MEM_WRITE_ONLY, dataSize, NULL, &status );

cl_mem dA = clCreateBuffer( context,
                            CL_MEM_READ_ONLY,	// how this buffer is restricted
                            dataSize,		// # bytes
                            NULL,		// buffer data already allocated
                            &status );		// returned status


6. Write the Data from the Host Buffers to the Device Buffers

// enqueue the 2 commands to write data into the device buffers:

status = clEnqueueWriteBuffer( cmdQueue, dA, CL_FALSE, 0, dataSize, hA, 0, NULL, NULL );
status = clEnqueueWriteBuffer( cmdQueue, dB, CL_FALSE, 0, dataSize, hB, 0, NULL, NULL );

status = clEnqueueWriteBuffer( cmdQueue,	// command queue
                               dA,		// device buffer
                               CL_FALSE,	// want to block until done?
                               0,		// offset
                               dataSize,	// # bytes
                               hA,		// host buffer
                               0,		// # events
                               NULL,		// event wait list
                               NULL );		// event object


Enqueuing Works Like a Conveyer Belt

[Diagram: commands ride the belt in the order they were enqueued: Write Buffer dA, Write Buffer dB, Execute Kernel, Read Buffer dC.]
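If the host needs to be sure that everything on the belt has been processed (for example, before timing or before using results), one way (not shown in the slides) is to block on the queue:

// make the host wait until every command in the queue has completed:
status = clFinish( cmdQueue );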


The .cl File

kernel
void
ArrayMult( global const float *dA, global const float *dB, global float *dC )
{
    int gid = get_global_id( 0 );
    dC[gid] = dA[gid] * dB[gid];
}


OpenCL code is compiled in the Driver . . .

[Diagram: the Application Program hands the OpenCL code (kept in a separate file) to the OpenCL Driver, which does the Compile and Link and sends the result to the GPU.]

kernel void
ArrayMult( global float *A, global float *B, global float *C )
{
    int gid = get_global_id( 0 );
    C[gid] = A[gid] * B[gid];
}


(. . . just like OpenGL's GLSL Shader code is compiled in the driver)

[Diagram: the Application Program hands the GLSL shader code (kept in a separate file) to the GLSL Driver, which does the Compile and Link and sends the result to the GPU.]

void main( )
{
    vec3 newcolor = texture2D( uTexUnit, vST ).rgb;
    newcolor = mix( newcolor, vColor.rgb, uBlend );
    gl_FragColor = vec4( uLightIntensity*newcolor, 1. );
}

7. Read the Kernel Code from a File into a Character Array


const char *CL_FILE_NAME = { "arraymult.cl" };
. . .

FILE *fp = fopen( CL_FILE_NAME, "r" );
if( fp == NULL )
{
    fprintf( stderr, "Cannot open OpenCL source file '%s'\n", CL_FILE_NAME );
    return 1;
}

"r" should work, since the .cl file is pure text, but some people report that it doesn't work unless you use "rb".  Watch out for the \r + \n problem!

// read the characters from the opencl kernel program:
fseek( fp, 0, SEEK_END );
size_t fileSize = ftell( fp );
fseek( fp, 0, SEEK_SET );
char *clProgramText = new char[ fileSize+1 ];
size_t n = fread( clProgramText, 1, fileSize, fp );
clProgramText[fileSize] = '\0';
fclose( fp );


8. Compile and Link the Kernel Code

// create the kernel program on the device:

char * strings [ 1 ];
strings[0] = clProgramText;
cl_program program = clCreateProgramWithSource( context, 1, (const char **)strings, NULL, &status );
delete [ ] clProgramText;

// build the kernel program on the device:

char *options = { "" };
status = clBuildProgram( program, 1, &device, options, NULL, NULL );
if( status != CL_SUCCESS )
{
    size_t size;
    clGetProgramBuildInfo( program, devices[0], CL_PROGRAM_BUILD_LOG, 0, NULL, &size );
    cl_char *log = new cl_char[ size ];
    clGetProgramBuildInfo( program, devices[0], CL_PROGRAM_BUILD_LOG, size, log, NULL );
    fprintf( stderr, "clBuildProgram failed:\n%s\n", log );
    delete [ ] log;
}


9. Create the Kernel Object

cl_kernel kernel = clCreateKernel( program, "ArrayMult", &status );


10. Setup the Arguments to the Kernel Object

status = clSetKernelArg( kernel, 0, sizeof(cl_mem), &dA );
status = clSetKernelArg( kernel, 1, sizeof(cl_mem), &dB );
status = clSetKernelArg( kernel, 2, sizeof(cl_mem), &dC );
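Buffer arguments are passed as the address and size of their cl_mem handles. If the kernel also took a plain scalar argument (this example's kernel does not, so the 4th argument here is purely hypothetical), it would be set the same way:

int n = NUM_ELEMENTS;
status = clSetKernelArg( kernel, 3, sizeof(int), &n );		// hypothetical scalar argument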


11. Enqueue the Kernel Object for Execution

size_t globalWorkSize[ 3 ] = { NUM_ELEMENTS, 1, 1 };
size_t localWorkSize[ 3 ]  = { LOCAL_SIZE,   1, 1 };

status = clEnqueueBarrier( cmdQueue );
double time0 = omp_get_wtime( );

status = clEnqueueNDRangeKernel( cmdQueue, kernel, 1, NULL, globalWorkSize, localWorkSize, 0, NULL, NULL );

status = clEnqueueBarrier( cmdQueue );
double time1 = omp_get_wtime( );

status = clEnqueueNDRangeKernel( cmdQueue,
                                 kernel,
                                 1,			// # dimensions
                                 NULL,			// global work offset (always NULL)
                                 globalWorkSize,
                                 localWorkSize,
                                 0,			// # events
                                 NULL,			// event wait list
                                 NULL );		// event object
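The two omp_get_wtime( ) calls bracket the kernel execution, so the GigaMultiplications/Second number plotted later can be computed from them. A sketch (not from the slides), assuming one multiply per element and that the queued work has actually drained by time1:

double seconds = time1 - time0;
double gigaMultsPerSecond = (double)NUM_ELEMENTS / seconds / 1000000000.;
fprintf( stderr, "%8.3lf GigaMults/Sec\n", gigaMultsPerSecond );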


Work-Groups, Local IDs, and Global IDs


The NDRange Index Space can be 1D, 2D, or 3D.  This one is 1D.

	Gx = 20		Wx = 5		Lx = 4

	#WorkGroups = GlobalIndexSpaceSize / WorkGroupSize = 20 / 4 = 5


Work-Groups, Local IDs, and Global IDs

The NDRange Index Space can be 1D, 2D, or 3D.  This one is 2D.

	Gx = 20		Gy = 12		Wx = 5		Wy = 4		Lx = 4		Ly = 3

	#WorkGroups = GlobalIndexSpaceSize / WorkGroupSize = (20 x 12) / (4 x 3) = 5 x 4


Work-Groups, Local IDs, and Global IDs


The NDRange Index Space can be 1D, 2D, or 3D.  This one is 3D.


Figuring Out What Thread You Are

uint	get_work_dim( );

size_t	get_global_size( uint dimindx );
size_t	get_global_id( uint dimindx );
size_t	get_local_size( uint dimindx );
size_t	get_local_id( uint dimindx );
size_t	get_num_groups( uint dimindx );
size_t	get_group_id( uint dimindx );
size_t	get_global_offset( uint dimindx );

	0 <= dimindx <= 2
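These queries fit together in a simple way. A sketch of the relationships (illustrative, not from the slides), assuming a zero global work offset:

// inside a kernel, for dimension 0 and a zero global work offset:
size_t gid = get_group_id( 0 ) * get_local_size( 0 ) + get_local_id( 0 );
// gid == get_global_id( 0 )
// and  get_global_size( 0 ) == get_num_groups( 0 ) * get_local_size( 0 )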


12. Read the Results Buffer Back from the Device to the Host

status = clEnqueueReadBuffer( cmdQueue,		// command queue
                              dC,		// device buffer
                              CL_TRUE,		// want to block until done?
                              0,		// offset
                              dataSize,		// # bytes
                              hC,		// host buffer
                              0,		// # events
                              NULL,		// event wait list
                              NULL );		// event object
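Because CL_TRUE asks the read to block until it is done, hC is ready to use as soon as the call returns. A quick correctness check against the host arrays might look like this (illustrative, not part of the original slides):

// hC[i] should now equal hA[i] * hB[i]; a simple spot check:
for( int i = 0; i < NUM_ELEMENTS; i++ )
{
    float expected = hA[ i ] * hB[ i ];
    if( fabs( hC[ i ] - expected ) > 0.00001f )
    {
        fprintf( stderr, "Mismatch at element %d: %f vs. %f\n", i, hC[ i ], expected );
        break;
    }
}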


13. Clean Everything Up

// clean everything up:

clReleaseKernel(	kernel );
clReleaseProgram(	program );
clReleaseCommandQueue(	cmdQueue );
clReleaseMemObject(	dA );
clReleaseMemObject(	dB );
clReleaseMemObject(	dC );

delete [ ] hA;
delete [ ] hB;
delete [ ] hC;


Array Multiplication Performance: What is a Good Work-Group Size?

[Graph: GigaMultiplications/Second plotted against Array Size (K) and Work-Group Size.]

Array Multiplication Performance: What is a Good Work-Group Size?

[Graph: GigaMultiplications/Second plotted against Work-Group Size and Array Size (K).]

Writing the .cl Program's Binary Code

size_t binary_sizes;
status = clGetProgramInfo( Program, CL_PROGRAM_BINARY_SIZES, 0, NULL, &binary_sizes );

size_t size;
status = clGetProgramInfo( Program, CL_PROGRAM_BINARY_SIZES, sizeof(size_t), &size, NULL );

unsigned char *binary = new unsigned char [ size ];
status = clGetProgramInfo( Program, CL_PROGRAM_BINARIES, size, &binary, NULL );

FILE *fpbin = fopen( "particles.nv", "wb" );
if( fpbin == NULL )
{
    fprintf( stderr, "Cannot create 'particles.nv'\n" );
}
else
{
    fwrite( binary, 1, size, fpbin );
    fclose( fpbin );
}
delete [ ] binary;


Importing that Binary Code back In:

Instead of doing this (from step 8, Compile and Link the Kernel Code):

char * strings [ 1 ];
strings[0] = clProgramText;
cl_program program = clCreateProgramWithSource( context, 1, (const char **)strings, NULL, &status );
delete [ ] clProgramText;

You would do this:

unsigned char *byteArray = new unsigned char [ numBytes ];
cl_program program = clCreateProgramWithBinary( context, 1, &device, &numBytes,
                        (const unsigned char **)&byteArray, &binaryStatus, &status );
delete [ ] byteArray;

And you still have to do this:

char *options = { "" };
status = clBuildProgram( program, 1, &device, options, NULL, NULL );
if( status != CL_SUCCESS )
{
    size_t size;
    clGetProgramBuildInfo( program, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &size );
    cl_char *log = new cl_char[ size ];
    clGetProgramBuildInfo( program, device, CL_PROGRAM_BUILD_LOG, size, log, NULL );
    fprintf( stderr, "clBuildProgram failed:\n%s\n", log );
    delete [ ] log;
}
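The slide does not show where numBytes and the bytes in byteArray come from. One possibility (a sketch, reusing the 'particles.nv' file written on the previous slide) is the same fseek/ftell/fread pattern used for the .cl source file:

FILE *fpbin = fopen( "particles.nv", "rb" );
fseek( fpbin, 0, SEEK_END );
size_t numBytes = ftell( fpbin );		// size of the program binary in bytes
fseek( fpbin, 0, SEEK_SET );
unsigned char *byteArray = new unsigned char [ numBytes ];
fread( byteArray, 1, numBytes, fpbin );		// read the binary into byteArray
fclose( fpbin );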