Programming Guide
This section provides information about the NVIDIA® AR SDK API architecture.
The NvAR_Create() function creates a handle to the feature instance, which is required in
function calls to get and set the properties of the instance and to load, run, or destroy the
instance.
‣ The configuration properties that are required to load the feature type.
‣ The input and output properties that are provided at runtime when instances of the feature
type are run.
Refer to Key Values in the Properties of a Feature Type for a complete list of properties.
To set properties, NVIDIA AR SDK provides type-safe set accessor functions. If you need the
value of a property that has been set by a set accessor function, use the corresponding get
accessor function. Refer to Summary of NVIDIA AR SDK Accessor Functions for a complete
list of get and set functions.
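For example, a configuration property can be set with the matching set accessor and read
back with the corresponding get accessor. A minimal sketch, assuming a previously created
feature instance named featureHandle:
unsigned int batchSize = 1;
NvAR_SetU32(featureHandle, NvAR_Parameter_Config(BatchSize), batchSize);
unsigned int readBack = 0;
NvAR_GetU32(featureHandle, NvAR_Parameter_Config(BatchSize), &readBack);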
NvAR_Parameter_Input(BoundingBoxes)
Bounding boxes that determine the region of interest (ROI) of an input image that contains a
face of type NvAR_BBoxes.
String equivalent: NvAR_Parameter_Input_BoundingBoxes
Property type: object (void*)
NvAR_Parameter_Input(FocalLength)
The focal length of the camera used for 3D Body Pose.
String equivalent: NvAR_Parameter_Input_FocalLength
Property type: floating point
NvAR_Parameter_Output(Pose)
CPU array of type NvAR_Quaternion to hold the output-detected pose as an XYZW
quaternion.
String equivalent: NvAR_Parameter_Output_Pose
Property type: object (void*)
NvAR_Parameter_Output(FaceMesh)
CPU 3D face mesh of type NvAR_FaceMesh.
String equivalent: NvAR_Parameter_Output_FaceMesh
Property type: object (void*)
NvAR_Parameter_Output(RenderingParams)
CPU output structure of type NvAR_RenderingParams that contains the rendering
parameters that might be used to render the 3D face mesh.
String equivalent: NvAR_Parameter_Output_RenderingParams
Property type: object (void*)
NvAR_Parameter_Output(ShapeEigenValues)
Float array of shape eigenvalues. Get NvAR_Parameter_Config(ShapeEigenValueCount)
to determine how many eigenvalues there are.
String equivalent: NvAR_Parameter_Output_ShapeEigenValues
Property type: const floating point array
NvAR_Parameter_Output(ExpressionCoefficients)
Float array of expression coefficients. Get NvAR_Parameter_Config(ExpressionCount)
to determine how many coefficients there are (see the sketch after this list).
String equivalent: NvAR_Parameter_Output_ExpressionCoefficients
Property type: const floating point array
NvAR_Parameter_Output(KeyPoints)
CPU output buffer of type NvAR_Point2f to hold the output detected 2D Keypoints for Body
Pose. Refer to 3D Body Pose Keypoint Format for information about the Keypoint names
and the order of Keypoint output.
String equivalent: NvAR_Parameter_Output_KeyPoints
Property type: object (void*)
NvAR_Parameter_Output(KeyPoints3D)
CPU output buffer of type NvAR_Point3f to hold the output detected 3D Keypoints for Body
Pose. Refer to 3D Body Pose Keypoint Format for information about the Keypoint names
and the order of Keypoint output.
String equivalent: NvAR_Parameter_Output_KeyPoints3D
Property type: object (void*)
NvAR_Parameter_Output(JointAngles)
CPU output buffer of type NvAR_Point3f to hold the joint angles in axis-angle format for
the Keypoints for Body Pose.
String equivalent: NvAR_Parameter_Output_JointAngles
Property type: object (void*)
NvAR_Parameter_Output(KeyPointsConfidence)
Float array of confidence values for each detected keypoint.
String equivalent: NvAR_Parameter_Output_KeyPointsConfidence
Property type: floating point array
NvAR_Parameter_Output(OutputHeadTranslation)
Float array of three values that represent the x, y and z values of head translation with
respect to the camera for Eye Contact.
String equivalent: NvAR_Parameter_Output_OutputHeadTranslation
Property type: floating point array
NvAR_Parameter_Output(OutputGazeVector)
Float array of two values that represent the yaw and pitch angles of the estimated gaze for
Eye Contact.
String equivalent: NvAR_Parameter_Output_OutputGazeVector
Property type: floating point array
NvAR_Parameter_Output(HeadPose)
CPU array of type NvAR_Quaternion to hold the output-detected head pose as an XYZW
quaternion in Eye Contact. This is an alternative to the head pose that was obtained from
the facial landmarks feature. This head pose is obtained using the PnP algorithm over the
landmarks.
String equivalent: NvAR_Parameter_Output_HeadPose
Property type: object (void*)
NvAR_Parameter_Output(GazeDirection)
Float array of two values that represent the yaw and pitch angles of the estimated gaze for
Eye Contact.
String equivalent: NvAR_Parameter_Output_GazeDirection
Property type: floating point array
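For array-valued outputs such as NvAR_Parameter_Output(ExpressionCoefficients), the
count is queried first and the buffer is then registered with the corresponding set accessor.
A minimal sketch, assuming a Facial Expression Estimation instance named
faceExpressionHandle:
//Query how many expression coefficients the feature produces
unsigned int expressionCount = 0;
NvAR_GetU32(faceExpressionHandle, NvAR_Parameter_Config(ExpressionCount),
&expressionCount);
//Allocate the output array and hand it to the feature
std::vector<float> expression_coefficients(expressionCount, 0.f);
NvAR_SetF32Array(faceExpressionHandle, NvAR_Parameter_Output(ExpressionCoefficients),
expression_coefficients.data(), expressionCount);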
‣ The key value that identifies the property that you are getting.
‣ The location in memory where you want the value of the property to be written.
This example determines the length of the NvAR_Point2f output buffer that was returned by
the landmark detection feature:
unsigned int OUTPUT_SIZE_KPTS;
NvAR_GetU32(landmarkDetectHandle, NvAR_Parameter_Config(Landmarks_Size),
&OUTPUT_SIZE_KPTS);
1. Allocate memory for all inputs and outputs that are required by the feature and any other
properties that might be required.
2. Call the set accessor function that is appropriate for the data type of the property.
In the call to the function, pass the following information:
NvCVImage inputImageBuffer;
NvCVImage_Alloc(&inputImageBuffer, input_image_width, input_image_height,
NVCV_BGR, NVCV_U8, NVCV_CHUNKY, NVCV_GPU, 1);
NvAR_SetObject(landmarkDetectHandle, NvAR_Parameter_Input(Image),
&inputImageBuffer, sizeof(NvCVImage));
Refer to List of Properties for AR Features for more information about the properties and
input and output requirements for each feature.
Note: The listed property name is the input to the macro that defines the key value for the
property.
Note: The AR SDK provides wrapper functions only for RGB images. No wrapper functions are
provided for YUV images.
‣ To create an NvCVImage object wrapper for an OpenCV image, use the
NVWrapperForCVMat() function.
NVWrapperForCVMat(&srcCVImg, &srcCPUImg);
NVWrapperForCVMat(&dstCVImg, &dstCPUImg);
‣ To create an OpenCV image wrapper for an NvCVImage object, use the
CVWrapperForNvCVImage() function.
// Allocate source and destination NvCVImage objects
NvCVImage srcCPUImg(...);
NvCVImage dstCPUImg(...);
CVWrapperForNvCVImage(&srcCPUImg, &srcCVImg);
CVWrapperForNvCVImage(&dstCPUImg, &dstCVImg);
Call the NvCVImage_Transfer() function to convert the decoded frame that is provided by
NvDecoder from the decoded pixel format to the format that is required by a feature of the AR
SDK.
The following sample shows a decoded frame that is converted from the NV12 to the BGRA
pixel format.
NvCVImage decoded_frame, BGRA_frame, stagingBuffer;
NvDecoder dec;
//Initialize decoder...
//Assuming dec.GetOutputFormat() == cudaVideoSurfaceFormat_NV12
decoded_frame.pixels = (void*)dec.GetFrame();
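The lines above only attach the decoded pixels; a sketch of the surrounding initialization and
conversion steps, assuming the NvCVImage_Init() initializer, NvDecoder's GetWidth()/
GetHeight()/GetDeviceFramePitch() accessors, and an existing CUDA stream named stream:
//Describe the decoded NV12 surface that resides on the GPU
NvCVImage_Init(&decoded_frame, dec.GetWidth(), dec.GetHeight(),
dec.GetDeviceFramePitch(), NULL, NVCV_YUV420, NVCV_U8, NVCV_NV12, NVCV_GPU);
decoded_frame.colorSpace = NVCV_709 | NVCV_VIDEO_RANGE | NVCV_CHROMA_COSITED;
//Allocate the BGRA destination frame on the GPU
NvCVImage_Alloc(&BGRA_frame, dec.GetWidth(), dec.GetHeight(), NVCV_BGRA, NVCV_U8,
NVCV_CHUNKY, NVCV_GPU, 1);
//Convert the decoded frame from NV12 to BGRA through the staging buffer
NvCVImage_Transfer(&decoded_frame, &BGRA_frame, 1.f, stream, &stagingBuffer);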
Note: The sample above assumes the typical colorspace specification for HD content. SD
typically uses NVCV_601. There are eight possible combinations, and you should use the one
that matches your video as described in the video header or proceed by trial and error.
The following sample passes the converted BGRA frame to NvEncoder for encoding:
std::vector<std::vector<uint8_t>> vPacket;
//Get the address of the next input frame from the encoder
const NvEncInputFrame* encoderInputFrame = pEnc->GetNextInputFrame();
//Copy the pixel data from BGRA_frame into the input frame address obtained above
NvEncoderCuda::CopyToDeviceFrame(cuContext,
BGRA_frame.pixels,
BGRA_frame.pitch,
(CUdeviceptr)encoderInputFrame->inputPtr,
encoderInputFrame->pitch,
pEnc->GetEncodeWidth(),
pEnc->GetEncodeHeight(),
CU_MEMORYTYPE_DEVICE,
encoderInputFrame->bufferFormat,
encoderInputFrame->chromaOffsets,
encoderInputFrame->numChromaPlanes);
pEnc->EncodeFrame(vPacket);
‣ The pixel organization determines whether blue, green, and red are in separate planes or
interleaved.
‣ The memory type determines whether the buffer resides on the GPU or the CPU.
‣ The byte alignment determines the gap between consecutive scanlines.
The following examples show how to use the final three optional parameters of the allocation
constructor to determine the properties of the NvCVImage object.
‣ This example creates an object without setting the final three optional parameters of the
allocation constructor. In this object, the blue, green, and red components are interleaved in
each pixel, the buffer resides on the CPU, and the byte alignment is the default alignment.
NvCVImage cpuSrc(
srcWidth,
srcHeight,
NVCV_BGR,
NVCV_U8
);
‣ This example creates an object with identical pixel organization, memory type, and byte
alignment to the previous example by setting the final three optional parameters explicitly.
As in the previous example, the blue, green, and red components are interleaved in
each pixel, the buffer resides on the CPU, and the byte alignment is the default, that is,
optimized for maximum performance.
NvCVImage src(
srcWidth,
srcHeight,
NVCV_BGR,
NVCV_U8,
NVCV_INTERLEAVED,
NVCV_CPU,
0
);
‣ This example creates an object in which the blue, green, and red components are in
separate planes, the buffer resides on the GPU, and the byte alignment ensures that no
gap exists between one scanline and the next scanline.
NvCVImage gpuSrc(
srcWidth,
srcHeight,
NVCV_BGR,
NVCV_U8,
NVCV_PLANAR,
NVCV_GPU,
1
);
1. Create an NvCVImage object to use as a staging GPU buffer that has the same dimensions
and format as the source CPU buffer.
NvCVImage srcGpuPlanar(inWidth, inHeight, NVCV_BGR, NVCV_F32, NVCV_PLANAR,
NVCV_GPU, 1);
‣ To avoid allocating memory in a video pipeline, create a GPU buffer that has the same
dimensions and format as required for input to the video effect filter.
NvCVImage srcGpuStaging(inWidth, inHeight, srcCPUImg.pixelFormat,
srcCPUImg.componentType, srcCPUImg.planar, NVCV_GPU);
‣ To simplify your application program code, declare an empty staging buffer.
NvCVImage srcGpuStaging;
1. Create an NvCVImage object to use as a staging GPU buffer that has the same dimensions
and format as the destination CPU buffer.
NvCVImage dstGpuPlanar(outWidth, outHeight, NVCV_BGR, NVCV_F32, NVCV_PLANAR,
NVCV_GPU, 1);
For more information about NvCVImage, refer to the NvCVImage API Guide.
2. Create a staging buffer in one of the following ways:
‣ To avoid allocating memory in a video pipeline, create a GPU buffer that has the same
dimensions and format as the output of the video effect filter.
NvCVImage dstGpuStaging(outWidth, outHeight, dstCPUImg.pixelFormat,
dstCPUImg.componentType, dstCPUImg.planar, NVCV_GPU);
‣ To simplify your application program code, declare an empty staging buffer:
NvCVImage dstGpuStaging;
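With the staging buffers in place, data moves between the CPU images and the GPU buffers
through NvCVImage_Transfer(). A minimal sketch, assuming a CUDA stream named stream
and the buffers declared in the previous steps:
//Copy the source CPU image to the planar GPU buffer via the staging buffer
NvCVImage_Transfer(&srcCPUImg, &srcGpuPlanar, 1.0f, stream, &srcGpuStaging);
//... run the feature on the GPU buffers ...
//Copy the result from the GPU back to the destination CPU image
NvCVImage_Transfer(&dstGpuPlanar, &dstCPUImg, 1.0f, stream, &dstGpuStaging);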
(Table of output buffer sizes for the features: each output buffer is sized in terms of
NvAR_Parameter_Config(BatchSize) together with NvAR_Parameter_Config(Landmarks_Size),
NvAR_Parameter_Config(LandmarksConfidence_Size), or the 34 body keypoints, and is to be
allocated by the user.)
//Pass output bounding boxes from face detection as an input on which
//landmark detection is to be run
NvAR_SetObject(landmarkDetectHandle, NvAR_Parameter_Input(BoundingBoxes),
&output_bboxes, sizeof(NvAR_BBoxes));
The landmarks output property is not optional; to explicitly run this feature, this property must
be set with a user-provided output buffer.
Note: The internally determined bounding box can be queried from this feature but is not
required for the feature to run.
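A minimal sketch of providing the landmarks output buffer, assuming the OUTPUT_SIZE_KPTS
value that was queried earlier and a batch size of one:
std::vector<NvAR_Point2f> facial_landmarks(OUTPUT_SIZE_KPTS);
NvAR_SetObject(landmarkDetectHandle, NvAR_Parameter_Output(Landmarks),
facial_landmarks.data(), sizeof(NvAR_Point2f));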
This example uses the Landmark Detection AR feature to obtain landmarks directly from the
image, without first explicitly running Face Detection:
//Set input image buffer
NvAR_SetObject(landmarkDetectHandle, NvAR_Parameter_Input(Image), &inputImageBuffer,
sizeof(NvCVImage));
//OPTIONAL – Set output memory for pose, landmark confidence, or even bounding
//box confidence if desired
NvAR_Run(landmarkDetectHandle);
Note: The facial keypoints and/or the face bounding box that were determined internally can be
queried from this feature but are not required for the feature to run.
This example uses the Mesh Tracking AR feature to obtain the face mesh directly from the
image, without explicitly running Landmark Detection or Face Detection:
//Set input image buffer instead of providing facial keypoints
NvAR_SetObject(faceFitHandle, NvAR_Parameter_Input(Image), &inputImageBuffer,
sizeof(NvCVImage));
//Get the sizes needed to allocate the face mesh
NvAR_FaceMesh *face_mesh = new NvAR_FaceMesh;
unsigned int n;
err = NvAR_GetU32(faceFitHandle, NvAR_Parameter_Config(VertexCount), &n);
face_mesh->num_vertices = n;
err = NvAR_GetU32(faceFitHandle, NvAR_Parameter_Config(TriangleCount), &n);
face_mesh->num_triangles = n;
face_mesh->vertices = new NvAR_Vector3f[face_mesh->num_vertices];
face_mesh->tvi = new NvAR_Vector3u16[face_mesh->num_triangles];
NvAR_SetObject(faceFitHandle, NvAR_Parameter_Output(FaceMesh), face_mesh,
sizeof(NvAR_FaceMesh));
//OPTIONAL – Set output memory for bounding boxes, or other parameters, such as
//pose, bounding box/landmarks confidence, etc.
NvAR_Run(faceFitHandle);
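If the mesh is to be drawn, the rendering parameters output described in the property list can
be requested in the same way. A minimal sketch, assuming the same faceFitHandle:
NvAR_RenderingParams rendering_params;
NvAR_SetObject(faceFitHandle, NvAR_Parameter_Output(RenderingParams),
&rendering_params, sizeof(NvAR_RenderingParams));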
‣ Gaze Estimation
‣ Gaze Redirection
In this release, gaze estimation and redirection are supported for only one face in the frame.
//OPTIONAL – Set output memory for landmarks, head pose, head translation, and
//gaze direction if desired
std::vector<NvAR_Point2f> facial_landmarks;
facial_landmarks.assign(batchSize * OUTPUT_SIZE_KPTS, {0.f, 0.f});
NvAR_SetObject(gazeRedirectHandle, NvAR_Parameter_Output(Landmarks),
facial_landmarks.data(), sizeof(NvAR_Point2f));
NvAR_Quaternion head_pose;
NvAR_SetObject(gazeRedirectHandle, NvAR_Parameter_Output(HeadPose), &head_pose,
sizeof(NvAR_Quaternion));
NvAR_Run(gazeRedirectHandle);
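The gaze direction and head translation outputs listed earlier are float arrays; they can be
registered before the run with NvAR_SetF32Array(). A minimal sketch, assuming the same
gazeRedirectHandle:
float gaze_direction[2] = {0.f, 0.f}; //yaw and pitch angles of the estimated gaze
NvAR_SetF32Array(gazeRedirectHandle, NvAR_Parameter_Output(GazeDirection),
gaze_direction, 2);
float head_translation[3] = {0.f, 0.f, 0.f}; //x, y, and z head translation
NvAR_SetF32Array(gazeRedirectHandle, NvAR_Parameter_Output(OutputHeadTranslation),
head_translation, 3);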
‣ Body Detection
‣ 3D Keypoint Detection
In this release, only one person in the frame is supported, and the full body (head to toe)
should be visible. The feature will still work if a part of the body, such as an arm or a foot, is
occluded or truncated.
‣ This example runs the Body Detection with an input image buffer and output memory to
hold bounding boxes:
//Set input image buffer
NvAR_SetObject(bodyDetectHandle, NvAR_Parameter_Input(Image), &inputImageBuffer,
sizeof(NvCVImage));
//Set output memory for bounding boxes
NvAR_BBoxes output_bboxes{};
output_bboxes.boxes = new NvAR_Rect[25];
output_bboxes.max_boxes = 25;
NvAR_SetObject(bodyDetectHandle, NvAR_Parameter_Output(BoundingBoxes),
&output_bboxes, sizeof(NvAR_BBoxes));
NvAR_Run(bodyDetectHandle);
‣ The input to 3D Body Keypoint Detection is an input image. It outputs the 2D Keypoints, 3D
Keypoints, Keypoint confidence scores, and a bounding box encapsulating the person.
This example runs the 3D Body Pose Detection AR feature:
//Set input image buffer
NvAR_SetObject(keypointDetectHandle, NvAR_Parameter_Input(Image),
&inputImageBuffer, sizeof(NvCVImage));
//Pass output bounding boxes from body detection as an input on which
//keypoint detection is to be run
NvAR_SetObject(keypointDetectHandle, NvAR_Parameter_Input(BoundingBoxes),
&output_bboxes, sizeof(NvAR_BBoxes));
NvAR_Run(keypointDetectHandle);
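3D Body Pose also takes the camera focal length as an input property (see
NvAR_Parameter_Input(FocalLength)). A minimal sketch of setting it before the run; the value
here is illustrative only:
float focal_length = 800.0f; //assumed camera focal length, for illustration
NvAR_SetF32(keypointDetectHandle, NvAR_Parameter_Input(FocalLength), focal_length);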
Note: Currently, we only actively track eight people in the scene. There can be more than eight
people throughout the video but only a maximum of eight people in a given frame. Temporal
mode is not supported for Multi-Person Tracking. The batch size should be 8 when Multi-
Person Tracking is enabled. This feature is currently Windows only.
This example uses the 3D Body Pose Tracking AR feature to enable multi-person tracking and
obtain the tracking ID for each person:
//Set input image buffer
NvAR_SetObject(keyPointDetectHandle, NvAR_Parameter_Input(Image), &inputImageBuffer,
sizeof(NvCVImage));
// Enable Multi-Person Tracking
NvAR_SetU32(keyPointDetectHandle, NvAR_Parameter_Config(TrackPeople),
bEnablePeopleTracking);
// Set Shadow Tracking Age
NvAR_SetU32(keyPointDetectHandle, NvAR_Parameter_Config(ShadowTrackingAge),
shadowTrackingAge);
// Set Probation Age
NvAR_SetU32(keyPointDetectHandle, NvAR_Parameter_Config(ProbationAge),
probationAge);
// Set Maximum Targets to be tracked
NvAR_SetU32(keyPointDetectHandle, NvAR_Parameter_Config(MaxTargetsTracked),
maxTargetsTracked);
//Set output buffer to hold detected keypoints
std::vector<NvAR_Point2f> keypoints;
std::vector<NvAR_Point3f> keypoints3D;
std::vector<NvAR_Point3f> jointAngles;
std::vector<float> keypoints_confidence;
// Get the number of keypoints
unsigned int numKeyPoints;
NvAR_GetU32(keyPointDetectHandle, NvAR_Parameter_Config(NumKeyPoints),
&numKeyPoints);
keypoints.assign(batchSize * numKeyPoints, {0.f, 0.f});
keypoints3D.assign(batchSize * numKeyPoints, {0.f, 0.f, 0.f});
jointAngles.assign(batchSize * numKeyPoints, {0.f, 0.f, 0.f});
keypoints_confidence.assign(batchSize * numKeyPoints, 0.f);
NvAR_SetObject(keyPointDetectHandle, NvAR_Parameter_Output(KeyPoints),
keypoints.data(), sizeof(NvAR_Point2f));
NvAR_SetObject(keyPointDetectHandle, NvAR_Parameter_Output(KeyPoints3D),
keypoints3D.data(), sizeof(NvAR_Point3f));
NvAR_SetF32Array(keyPointDetectHandle, NvAR_Parameter_Output(KeyPointsConfidence),
keypoints_confidence.data(), batchSize * numKeyPoints);
NvAR_SetObject(keyPointDetectHandle, NvAR_Parameter_Output(JointAngles),
jointAngles.data(), sizeof(NvAR_Point3f));
//Set output memory for tracking bounding boxes
NvAR_TrackingBBoxes output_tracking_bboxes{};
std::vector<NvAR_TrackingBBox> output_tracking_bbox_data;
output_tracking_bbox_data.assign(maxTargetsTracked, { 0.f, 0.f, 0.f, 0.f, 0 });
output_tracking_bboxes.boxes = output_tracking_bbox_data.data();
output_tracking_bboxes.max_boxes = (uint8_t)output_tracking_bbox_data.size();
NvAR_SetObject(keyPointDetectHandle, NvAR_Parameter_Output(TrackingBoundingBoxes),
&output_tracking_bboxes, sizeof(NvAR_TrackingBBoxes));
NvAR_Run(keyPointDetectHandle);
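After the run, each tracked person's bounding box carries the ID that Multi-Person Tracking
assigned (see the NvAR_TrackingBBox structure). A minimal sketch of reading the results back:
for (unsigned int i = 0; i < output_tracking_bboxes.num_boxes; ++i) {
    NvAR_Rect box = output_tracking_bbox_data[i].bbox;
    uint16_t id = output_tracking_bbox_data[i].tracking_id;
    //... associate the keypoints of this detection with the tracking ID ...
}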
//OPTIONAL – Set output memory for bounding boxes, and their confidences if desired
err = NvAR_Run(faceExpressionHandle);
Note: The facial keypoints and/or the face bounding box that were determined internally can be
queried from this feature but are not required for the feature to run.
This example uses the Facial Expression Estimation feature to obtain the face expression
coefficients directly from the image, without explicitly running Landmark Detection or Face
Detection:
//Set input image buffer instead of providing facial keypoints
NvAR_SetObject(faceExpressionHandle, NvAR_Parameter_Input(Image), &inputImageBuffer,
sizeof(NvCVImage));
//OPTIONAL – Set output memory for bounding boxes, or other parameters, such as
//pose, bounding box/landmarks confidence, etc.
NvAR_Run(faceExpressionHandle);
Buffers need to be allocated on the selected GPU, so before you allocate images on the GPU,
call cudaSetDevice(). Neural networks need to be loaded on the selected GPU, so before
NvAR_Load() is called, set this GPU as the current device.
To use the buffers and models, set this GPU as the current device before you call
NvAR_Run(). A previous call to NvAR_SetS32(NULL, NvAR_Parameter_Config(GPU),
whichGPU) helps enforce this requirement.
For performance reasons, switching to the appropriate GPU is the responsibility of the
application.
This example uses the compute capability to determine whether a GPU's properties
should be analyzed and whether the current GPU is the best GPU on which
to apply a video effect filter. A GPU's properties are analyzed only when the compute
capability is 7.5, 8.6, or 8.9, which denotes a GPU that is based on the Turing, Ampere,
or Ada architecture, respectively.
// Loop through the GPUs to get the properties of each GPU and
//determine if it is the best GPU for each task based on the
//properties obtained.
for (int dev = 0; dev < deviceCount; ++dev) {
cudaSetDevice(dev);
cudaGetDeviceProperties(&deviceProp, dev);
if (DeviceIsBestForARSDK(&deviceProp)) gpuARSDK = dev;
if (DeviceIsBestForGame(&deviceProp)) gpuGame = dev;
...
}
cudaSetDevice(gpuARSDK);
err = NvAR_Set...; // set parameters
err = NvAR_Load(eff);
3. In the loop to complete the application’s tasks, select the best GPU for each task before
performing the task.
a). Call cudaSetDevice() to select the GPU for the task.
b). Make all the function calls required to perform the task.
In this way, you select the best GPU for each task only once without setting the GPU for
every function call.
This example selects the best GPU for rendering a game and uses custom code to
render the game. It then selects the best GPU for applying a video effect filter before
calling the NvCVImage_Transfer() and NvAR_Run() functions to apply the filter,
avoiding the need to save and restore the GPU for every AR SDK API call.
// Select the best GPU for each task and perform the task.
while (!done) {
...
cudaSetDevice(gpuGame);
RenderGame();
cudaSetDevice(gpuARSDK);
err = NvAR_Run(eff);
...
}
This section provides detailed information about the APIs in the AR SDK.
2.1. Structures
The structures in the AR SDK are defined in the following header files:
‣ nvAR.h
‣ nvAR_defs.h
The structures defined in the nvAR_defs.h header file are mostly data types.
2.1.1. NvAR_BBoxes
Here is detailed information about the NvAR_BBoxes structure.
struct NvAR_BBoxes {
NvAR_Rect *boxes;
uint8_t num_boxes;
uint8_t max_boxes;
};
Members
boxes
Type: NvAR_Rect *
Pointer to an array of bounding boxes that are allocated by the user.
num_boxes
Type: uint8_t
The number of bounding boxes in the array.
max_boxes
Type: uint8_t
The maximum number of bounding boxes that can be stored in the array as defined by the
user.
Remarks
This structure is returned as the output of the face detection feature.
Defined in: nvAR_defs.h
2.1.2. NvAR_TrackingBBox
Here is detailed information about the NvAR_TrackingBBox structure.
struct NvAR_TrackingBBox {
NvAR_Rect bbox;
uint16_t tracking_id;
};
Members
bbox
Type: NvAR_Rect
Bounding box that is allocated by the user.
tracking_id
Type: uint16_t
The Tracking ID assigned to the bounding box by Multi-Person Tracking.
Remarks
This structure is returned as the output of the body pose feature when multi-person tracking
is enabled.
Defined in: nvAR_defs.h
2.1.3. NvAR_TrackingBBoxes
Here is detailed information about the NvAR_TrackingBBoxes structure.
struct NvAR_TrackingBBoxes {
NvAR_TrackingBBox *boxes;
uint8_t num_boxes;
uint8_t max_boxes;
};
Members
boxes
Type: NvAR_TrackingBBox *
Pointer to an array of tracking bounding boxes that are allocated by the user.
num_boxes
Type: uint8_t
The number of tracking bounding boxes in the array.
max_boxes
Type: uint8_t
The maximum number of tracking bounding boxes that can be stored in the array as defined
by the user.
Remarks
This structure is returned as the output of the body pose feature when multi-person tracking
is enabled.
Defined in: nvAR_defs.h
2.1.4. NvAR_FaceMesh
Here is detailed information about the NvAR_FaceMesh structure.
struct NvAR_FaceMesh {
NvAR_Vec3<float> *vertices;
size_t num_vertices;
NvAR_Vec3<unsigned short> *tvi;
size_t num_tri_idx;
};
Members
vertices
Type: NvAR_Vec3<float>*
Pointer to an array of vectors that represent the mesh 3D vertex positions.
num_vertices
Type: size_t
The number of mesh vertices.
tvi
Type: NvAR_Vec3<unsigned short> *
Pointer to an array of vectors that represent the mesh triangle's vertex indices.
num_tri_idx
Type: size_t
The number of mesh triangle vertex indices.
Remarks
This structure is returned as an output of the Mesh Tracking feature.
Defined in: nvAR_defs.h
2.1.5. NvAR_Frustum
Here is detailed information about the NvAR_Frustum structure.
struct NvAR_Frustum {
float left = -1.0f;
float right = 1.0f;
float bottom = -1.0f;
float top = 1.0f;
};
Members
left
Type: float
The X coordinate of the top-left corner of the viewing frustum.
right
Type: float
The X coordinate of the bottom-right corner of the viewing frustum.
bottom
Type: float
The Y coordinate of the bottom-right corner of the viewing frustum.
top
Type: float
The Y coordinate of the top-left corner of the viewing frustum.
Remarks
This structure represents a camera viewing frustum for an orthographic camera. As a result,
it contains only the left, the right, the top, and the bottom coordinates in pixels. It does not
contain a near or a far clipping plane.
Defined in: nvAR_defs.h
2.1.6. NvAR_FeatureHandle
Here is detailed information about the NvAR_FeatureHandle structure.
typedef struct nvAR_Feature *NvAR_FeatureHandle;
Remarks
This type defines the handle of a feature that is defined by the SDK. It is used to reference the
feature at runtime when the feature is executed and must be destroyed when it is no longer
required.
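A minimal lifecycle sketch, assuming the NvAR_Feature_LandmarkDetection feature ID from
nvAR_defs.h:
NvAR_FeatureHandle handle = nullptr;
NvAR_Create(NvAR_Feature_LandmarkDetection, &handle); //assumed feature ID
//... set properties, then NvAR_Load(handle) and NvAR_Run(handle) ...
NvAR_Destroy(handle); //the handle is invalid after this call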
2.1.7. NvAR_Point2f
Here is detailed information about the NvAR_Point2f structure.
typedef struct NvAR_Point2f {
float x, y;
} NvAR_Point2f;
Members
x
Type: float
The X coordinate of the point in pixels.
y
Type: float
The Y coordinate of the point in pixels.
Remarks
This structure represents the X and Y coordinates of one point in 2D space.
Defined in: nvAR_defs.h
2.1.8. NvAR_Point3f
Here is detailed information about the NvAR_Point3f structure.
typedef struct NvAR_Point3f {
float x, y, z;
} NvAR_Point3f;
Members
x
Type: float
The X coordinate of the point in pixels.
y
Type: float
The Y coordinate of the point in pixels.
z
Type: float
The Z coordinate of the point in pixels.
Remarks
This structure represents the X, Y, Z coordinates of one point in 3D space.
Defined in: nvAR_defs.h
2.1.9. NvAR_Quaternion
Here is detailed information about the NvAR_Quaternion structure.
struct NvAR_Quaternion {
float x, y, z, w;
};
Members
x
Type: float
The first coefficient of the complex part of the quaternion.
y
Type: float
The second coefficient of the complex part of the quaternion.
z
Type: float
The third coefficient of the complex part of the quaternion.
w
Type: float
The scalar coefficient of the quaternion.
Remarks
This structure represents the coefficients in the quaternion that is expressed by the equation
q = x·i + y·j + z·k + w, where x, y, and z form the complex part and w is the scalar part.
Defined in: nvAR_defs.h
2.1.10. NvAR_Rect
Here is detailed information about the NvAR_Rect structure.
typedef struct NvAR_Rect {
float x, y, width, height;
} NvAR_Rect;
Members
x
Type: float
The X coordinate of the top left corner of the bounding box in pixels.
y
Type: float
The Y coordinate of the top left corner of the bounding box in pixels.
width
Type: float
The width of the bounding box in pixels.
height
Type: float
The height of the bounding box in pixels.
Remarks
This structure represents the position and size of a rectangular 2D bounding box.
Defined in: nvAR_defs.h
2.1.11. NvAR_RenderingParams
Here is detailed information about the NvAR_RenderingParams structure.
struct NvAR_RenderingParams {
NvAR_Frustum frustum;
NvAR_Quaternion rotation;
NvAR_Vec3<float> translation;
};
Members
frustum
Type: NvAR_Frustum
The camera viewing frustum for an orthographic camera.
rotation
Type: NvAR_Quaternion
The rotation of the camera relative to the mesh.
translation
Type: NvAR_Vec3<float>
The translation of the camera relative to the mesh.
Remarks
This structure defines the parameters that are used to draw a 3D face mesh in a window on
the computer screen so that the face mesh is aligned with the corresponding video frame. The
projection matrix is constructed from the frustum parameter, and the model view matrix is
constructed from the rotation and translation parameters.
Defined in: nvAR_defs.h
2.1.12. NvAR_Vector2f
Here is detailed information about the NvAR_Vector2f structure.
typedef struct NvAR_Vector2f {
float x, y;
} NvAR_Vector2f;
Members
x
Type: float
The X component of the 2D vector.
y
Type: float
The Y component of the 2D vector.
Remarks
This structure represents a 2D vector.
Defined in: nvAR_defs.h
2.1.13. NvAR_Vector3f
Here is detailed information about the NvAR_Vector3f structure.
typedef struct NvAR_Vector3f {
float vec[3];
} NvAR_Vector3f;
Members
vec
Type: float array of size 3
A vector of size 3.
Remarks
This structure represents a 3D vector.
Defined in: nvAR_defs.h
2.1.14. NvAR_Vector3u16
Here is detailed information about the NvAR_Vector3u16 structure.
typedef struct NvAR_Vector3u16 {
unsigned short vec[3];
} NvAR_Vector3u16;
Members
vec
Type: unsigned short array of size 3
A vector of size 3.
Remarks
This structure represents a 3D vector.
Defined in: nvAR_defs.h
2.2. Functions
This section provides information about the functions in the AR SDK.
2.2.1. NvAR_Create
Here is detailed information about the NvAR_Create function.
NvAR_Result NvAR_Create(
NvAR_FeatureID featureID,
NvAR_FeatureHandle *handle
);
Parameters
featureID [in]
Type: NvAR_FeatureID
The type of feature to be created.
handle[out]
Type: NvAR_FeatureHandle *
A handle to the newly created feature instance.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_FEATURENOTFOUND
‣ NVCV_ERR_INITIALIZATION
Remarks
This function creates an instance of the specified feature type and writes a handle to the
feature instance to the handle out parameter.
2.2.2. NvAR_Destroy
Here is detailed information about the NvAR_Destroy function.
NvAR_Result NvAR_Destroy(
NvAR_FeatureHandle handle
);
Parameters
handle [in]
Type: NvAR_FeatureHandle
The handle to the feature instance to be released.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_FEATURENOTFOUND
Remarks
This function releases the feature instance with the specified handle. Because handles are not
reference counted, the handle is invalid after this function is called.
2.2.3. NvAR_Load
Here is detailed information about the NvAR_Load function.
NvAR_Result NvAR_Load(
NvAR_FeatureHandle handle
);
Parameters
handle [in]
Type: NvAR_FeatureHandle
The handle to the feature instance to load.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_MISSINGINPUT
‣ NVCV_ERR_FEATURENOTFOUND
‣ NVCV_ERR_INITIALIZATION
‣ NVCV_ERR_UNIMPLEMENTED
Remarks
This function loads the specified feature instance and validates any configuration properties
that were set for the feature instance.
2.2.4. NvAR_Run
Here is detailed information about the NvAR_Run function.
NvAR_Result NvAR_Run(
NvAR_FeatureHandle handle
);
Parameters
handle[in]
Type: const NvAR_FeatureHandle
The handle to the feature instance to be run.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_FEATURENOTFOUND
‣ NVCV_ERR_MEMORY
‣ NVCV_ERR_MISSINGINPUT
‣ NVCV_ERR_PARAMETER
Remarks
This function validates the input/output properties that are set by the user, runs the specified
feature instance with the input properties that were set for the instance, and writes the
results to the output properties set for the instance. The input and output properties are set
by the accessor functions. Refer to Summary of NVIDIA AR SDK Accessor Functions for more
information.
2.2.5. NvAR_GetCudaStream
Here is detailed information about the NvAR_GetCudaStream function.
NvAR_GetCudaStream(
NvAR_FeatureHandle handle,
const char *name,
const CUstream *stream
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you want to get the CUDA stream.
name
Type: const char *
The NvAR_Parameter_Config(CUDAStream) key value. Any other key value returns an
error.
stream
Type: const CUstream *
Pointer to the location where the retrieved CUDA stream is to be written.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_MISSINGINPUT
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the CUDA stream in which the specified feature instance will run and writes
it to the location that is specified by the stream parameter.
2.2.6. NvAR_CudaStreamCreate
Here is detailed information about the NvAR_CudaStreamCreate function.
NvCV_Status NvAR_CudaStreamCreate(
CUstream *stream
);
Parameters
stream [out]
Type: CUstream *
The location in which to store the newly allocated CUDA stream.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_CUDA_VALUE if a CUDA parameter is not within its acceptable range.
Remarks
This function creates a CUDA stream. It is a wrapper for the CUDA Runtime API function
cudaStreamCreate() that you can use to avoid linking with the NVIDIA CUDA Toolkit libraries.
This function and cudaStreamCreate() are equivalent and interchangeable.
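A minimal sketch of pairing this function with NvAR_SetCudaStream(), assuming a previously
created feature instance named featureHandle:
CUstream stream;
NvAR_CudaStreamCreate(&stream);
NvAR_SetCudaStream(featureHandle, NvAR_Parameter_Config(CUDAStream), stream);
//... NvAR_Load(featureHandle), NvAR_Run(featureHandle) ...
NvAR_CudaStreamDestroy(stream);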
2.2.7. NvAR_CudaStreamDestroy
Here is detailed information about the NvAR_CudaStreamDestroy function.
void NvAR_CudaStreamDestroy(
CUstream stream
);
Parameters
stream [in]
Type: CUstream
The CUDA stream to destroy.
Return Value
Does not return a value.
Remarks
This function destroys a CUDA stream. It is a wrapper for the CUDA Runtime API function
cudaStreamDestroy() that you can use to avoid linking with the NVIDIA CUDA Toolkit
libraries. This function and cudaStreamDestroy() are equivalent and interchangeable.
2.2.8. NvAR_GetF32
Here is detailed information about the NvAR_GetF32 function.
NvAR_GetF32(
NvAR_FeatureHandle handle,
const char *name,
float *val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you want to get the specified 32-bit floating-
point parameter.
name
Type: const char *
The key value that is used to access the 32-bit float parameters as defined in nvAR_defs.h
and in Key Values in the Properties of a Feature Type.
val
Type: float*
Pointer to the 32-bit floating-point number where the value retrieved is to be written.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the value of the specified single-precision (32-bit) floating-point parameter
for the specified feature instance and writes the value to be retrieved to the location that is
specified by the val parameter.
2.2.9. NvAR_GetF64
Here is detailed information about the NvAR_GetF64 function.
NvAR_GetF64(
NvAR_FeatureHandle handle,
const char *name,
double *val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you want to get the specified 64-bit floating-
point parameter.
name
Type: const char *
The key value used to access the 64-bit double parameters as defined in nvAR_defs.h and
in Key Values in the Properties of a Feature Type.
val
Type: double*
Pointer to the 64-bit double-precision floating-point number where the retrieved value will
be written.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the value of the specified double-precision (64-bit) floating-point parameter
for the specified feature instance and writes the retrieved value to the location that is specified
by the val parameter.
2.2.10. NvAR_GetF32Array
Here is detailed information about the NvAR_GetF32Array function.
NvAR_GetF32Array(
NvAR_FeatureHandle handle,
const char *name,
const float** vals,
int *count
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you want to get the specified float array.
name
Type: const char *
Refer to Key Values in the Properties of a Feature Type for a complete list of key values.
vals
Type: const float**
Pointer to an array of floating-point numbers where the retrieved values will be written.
count
Type: int *
Currently unused. The number of elements in the array that is specified by the vals
parameter.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_MISSINGINPUT
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the values in the specified floating-point array for the specified feature
instance and writes the retrieved values to an array at the location that is specified by the vals
parameter.
2.2.11. NvAR_GetObject
Here is detailed information about the NvAR_GetObject function.
NvAR_GetObject(
NvAR_FeatureHandle handle,
const char *name,
const void **ptr,
unsigned long typeSize
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you can get the specified object.
name
Type: const char *
Refer to Key Values in the Properties of a Feature Type for a complete list of key values.
ptr
Type: const void**
A pointer to the memory that is allocated for the objects defined in Structures.
typeSize
Type: unsigned long
The size of the item to which the pointer points. If the size does not match, an
NVCV_ERR_MISMATCH is returned.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_MISSINGINPUT
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the specified object for the specified feature instance and stores the object
in the memory location that is specified by the ptr parameter.
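A minimal sketch, assuming a face detection instance named faceDetectHandle whose output
bounding boxes are requested back from the feature:
const NvAR_BBoxes *detected_boxes = nullptr;
NvAR_GetObject(faceDetectHandle, NvAR_Parameter_Output(BoundingBoxes),
(const void**)&detected_boxes, sizeof(NvAR_BBoxes));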
2.2.12. NvAR_GetS32
Here is detailed information about the NvAR_GetS32 function.
NvAR_GetS32(
NvAR_FeatureHandle handle,
const char *name,
int *val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you get the specified 32-bit signed integer
parameter.
name
Type: const char *
The key value that is used to access the signed integer parameters as defined in
nvAR_defs.h and in Key Values in the Properties of a Feature Type.
val
Type: int*
Pointer to the 32-bit signed integer where the retrieved value will be written.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the value of the specified 32-bit signed integer parameter for the specified
feature instance and writes the retrieved value to the location that is specified by the val
parameter.
2.2.13. NvAR_GetString
Here is detailed information about the NvAR_GetString function.
NvAR_GetString(
NvAR_FeatureHandle handle,
const char *name,
const char** str
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you get the specified character string
parameter.
name
Type: const char *
Refer to Key Values in the Properties of a Feature Type for a complete list of key values.
str
Type: const char**
The address where the requested character string pointer is stored.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_MISSINGINPUT
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the value of the specified character string parameter for the specified
feature instance and writes the retrieved string to the location that is specified by the str
parameter.
2.2.14. NvAR_GetU32
Here is detailed information about the NvAR_GetU32 function.
NvAR_GetU32(
NvAR_FeatureHandle handle,
const char *name,
unsigned int *val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you want to get the specified 32-bit unsigned
integer parameter.
name
Type: const char *
The key value that is used to access the unsigned integer parameters as defined in
nvAR_defs.h and in Key Values in the Properties of a Feature Type.
val
Type: unsigned int*
Pointer to the 32-bit unsigned integer where the retrieved value will be written.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the value of the specified 32-bit unsigned integer parameter for the
specified feature instance and writes the retrieved value to the location that is specified by the
val parameter.
2.2.15. NvAR_GetU64
Here is detailed information about the NvAR_GetU64 function.
NvAR_GetU64(
NvAR_FeatureHandle handle,
const char *name,
unsigned long long *val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance from which you get the specified 64-bit
unsigned integer parameter.
name
Type: const char *
The key value used to access the unsigned 64-bit integer parameters as defined in
nvAR_defs.h and in Key Values in the Properties of a Feature Type.
val
Type: unsigned long long*
Pointer to the 64-bit unsigned integer where the retrieved value will be written.
Return Values
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function gets the value of the specified 64-bit unsigned integer parameter for the
specified feature instance and writes the retrieved value to the location specified by the val
parameter.
2.2.16. NvAR_SetCudaStream
Here is detailed information about the NvAR_SetCudaStream function.
NvAR_SetCudaStream(
NvAR_FeatureHandle handle,
const char *name,
CUstream stream
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the CUDA
stream.
name
Type: const char *
The NvAR_Parameter_Config(CUDAStream) key value. Any other key value returns an
error.
stream
Type: CUstream
The CUDA stream in which to run the feature instance on the GPU.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function sets the CUDA stream, in which the specified feature instance will run, to the
parameter stream.
Defined in: nvAR.h
2.2.17. NvAR_SetF32
Here is detailed information about the NvAR_SetF32 function.
NvAR_SetF32(
NvAR_FeatureHandle handle,
const char *name,
float val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the specified 32-bit floating-
point parameter.
name
Type: const char *
The key value used to access the 32-bit float parameters as defined in nvAR_defs.h and in
Key Values in the Properties of a Feature Type.
val
Type: float
The 32-bit floating-point number to which the parameter is to be set.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function sets the specified single-precision (32-bit) floating-point parameter for the
specified feature instance to the val parameter.
2.2.18. NvAR_SetF64
Here is detailed information about the NvAR_SetF64 function.
NvAR_SetF64(
NvAR_FeatureHandle handle,
const char *name,
double val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the specified 64-bit floating-
point parameter.
name
Type: const char *
The key value used to access the 64-bit float parameters as defined in nvAR_defs.h and in
Key Values in the Properties of a Feature Type.
val
Type: double
The 64-bit double-precision floating-point number to which the parameter will be set.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function sets the specified double-precision (64-bit) floating-point parameter for the
specified feature instance to the val parameter.
2.2.19. NvAR_SetF32Array
Here is detailed information about the NvAR_SetF32Array function.
NvAR_SetF32Array(
NvAR_FeatureHandle handle,
const char *name,
float* vals,
int count
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the specified float array.
name
Type: const char *
Refer to Key Values in the Properties of a Feature Type for a complete list of key values.
vals
Type: float*
An array of floating-point numbers to which the parameter will be set.
count
Type: int
Currently unused. The number of elements in the array that is specified by the vals
parameter.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function assigns the array of floating-point numbers that are defined by the vals
parameter to the specified floating-point-array parameter for the specified feature instance.
2.2.20. NvAR_SetObject
Here is detailed information about the NvAR_SetObject function.
NvAR_SetObject(
NvAR_FeatureHandle handle,
const char *name,
void *ptr,
unsigned long typeSize
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the specified object.
name
Type: const char *
Refer to Key Values in the Properties of a Feature Type for a complete list of key values.
ptr
Type: void*
A pointer to memory that was allocated to the objects that were defined in Structures.
typeSize
Type: unsigned long
The size of the item to which the pointer points. If the size does not match, an
NVCV_ERR_MISMATCH is returned.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function assigns the memory of the object that was specified by the ptr parameter to the
specified object parameter for the specified feature instance.
2.2.21. NvAR_SetS32
Here is detailed information about the NvAR_SetS32 function.
NvAR_SetS32(
NvAR_FeatureHandle handle,
const char *name,
int val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the specified 32-bit signed
integer parameter.
name
Type: const char *
The key value used to access the signed 32-bit integer parameters as defined in
nvAR_defs.h and in Key Values in the Properties of a Feature Type.
val
Type: int
The 32-bit signed integer to which the parameter will be set.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function sets the specified 32-bit signed integer parameter for the specified feature
instance to the val parameter.
2.2.22. NvAR_SetString
Here is detailed information about the NvAR_SetString function.
NvAR_SetString(
NvAR_FeatureHandle handle,
const char *name,
const char* str
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the specified character string
parameter.
name
Type: const char *
Refer to Key Values in the Properties of a Feature Type for a complete list of key values.
str
Type: const char*
Pointer to the character string to which you want to set the parameter.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function sets the value of the specified character string parameter for the specified
feature instance to the str parameter.
2.2.23. NvAR_SetU32
Here is detailed information about the NvAR_SetU32 function.
NvAR_SetU32(
NvAR_FeatureHandle handle,
const char *name,
unsigned int val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the specified 32-bit unsigned
integer parameter.
name
Type: const char *
The key value used to access the unsigned 32-bit integer parameters as defined in
nvAR_defs.h and in Key Values in the Properties of a Feature Type.
val
Type: unsigned int
The 32-bit unsigned integer to which you want to set the parameter.
Return Values
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function sets the value of the specified 32-bit unsigned integer parameter for the
specified feature instance to the val parameter.
2.2.24. NvAR_SetU64
Here is detailed information about the NvAR_SetU64 function.
NvAR_SetU64(
NvAR_FeatureHandle handle,
const char *name,
unsigned long long val
);
Parameters
handle
Type: NvAR_FeatureHandle
The handle to the feature instance for which you want to set the specified 64-bit unsigned
integer parameter.
name
Type: const char *
The key value used to access the unsigned 64-bit integer parameters as defined in
nvAR_defs.h and in Key Values in the Properties of a Feature Type.
val
Type: unsigned long long
The 64-bit unsigned integer to which you want to set the parameter.
Return Value
Returns one of the following values:
‣ NVCV_SUCCESS on success
‣ NVCV_ERR_PARAMETER
‣ NVCV_ERR_SELECTOR
‣ NVCV_ERR_GENERAL
‣ NVCV_ERR_MISMATCH
Remarks
This function sets the value of the specified 64-bit unsigned integer parameter for the
specified feature instance to the val parameter.
NVCV_ERR_UNSUPPORTEDDRIVER
The currently installed graphics driver is not supported.
NVCV_ERR_MODELDEPENDENCIES
There is no model with dependencies that match this system.
NVCV_ERR_PARSE
There has been a parsing or syntax error while reading a file.
NVCV_ERR_MODELSUBSTITUTION
The specified model does not exist and has been substituted.
NVCV_ERR_READ
An error occurred while reading a file.
NVCV_ERR_WRITE
An error occurred while writing a file.
NVCV_ERR_PARAMREADONLY
The selected parameter is read-only.
NVCV_ERR_TRT_ENQUEUE
TensorRT enqueue failed.
NVCV_ERR_TRT_BINDINGS
Unexpected TensorRT bindings.
NVCV_ERR_TRT_CONTEXT
An error occurred while creating a TensorRT context.
NVCV_ERR_TRT_INFER
There was a problem creating the inference engine.
NVCV_ERR_TRT_ENGINE
There was a problem deserializing the inference runtime engine.
NVCV_ERR_NPP
An error has occurred in the NPP library.
NVCV_ERR_CONFIG
No suitable model exists for the specified parameter configuration.
NVCV_ERR_TOOSMALL
The supplied parameter or buffer is not large enough.
NVCV_ERR_TOOBIG
The supplied parameter is too big.
NVCV_ERR_WRONGSIZE
The supplied parameter is not the expected size.
NVCV_ERR_OBJECTNOTFOUND
The specified object was not found.
NVCV_ERR_SINGULAR
A mathematical singularity has been encountered.
NVCV_ERR_NOTHINGRENDERED
Nothing was rendered in the specified region.
NVCV_ERR_OPENGL
An OpenGL error has occurred.
NVCV_ERR_DIRECT3D
A Direct3D error has occurred.
NVCV_ERR_CUDA_MEMORY
The requested operation requires more CUDA memory than is available.
NVCV_ERR_CUDA_VALUE
A CUDA parameter is not within its acceptable range.
NVCV_ERR_CUDA_PITCH
A CUDA pitch is not within its acceptable range.
NVCV_ERR_CUDA_INIT
The CUDA driver and runtime could not be initialized.
NVCV_ERR_CUDA_LAUNCH
The CUDA kernel failed to launch.
NVCV_ERR_CUDA_KERNEL
No suitable kernel image is available for the device.
NVCV_ERR_CUDA_DRIVER
The installed NVIDIA CUDA driver is older than the CUDA runtime library.
NVCV_ERR_CUDA_UNSUPPORTED
The CUDA operation is not supported on the current system or device.
NVCV_ERR_CUDA_ILLEGAL_ADDRESS
CUDA attempted to load or store an invalid memory address.
NVCV_ERR_CUDA
An unspecified CUDA error has occurred.
There are many other CUDA-related errors that are not listed here. However, the function
NvCV_GetErrorStringFromCode() will turn the error code into a string to help you debug.
The NVIDIA 3DMM file format is based on encapsulated objects that are scoped by a FOURCC
tag and a 32-bit size. The header must appear first in the file. The objects and their subobjects
can appear in any order. In this guide, they are listed in the default order.
A.1. Header
The header contains the following information:
‣ A mean shape.
‣ A set of shape modes.
‣ The eigenvalues for the modes.
‣ A triangle list.
‣ Landmarks
‣ Right contour
‣ Left contour
(File-format layout diagram: each PART object records the number of faces, the vertex indices,
the number of vertices, and a smoothing group, followed by a NAME subobject that carries the
partition name string and an MTRL subobject that carries the material name string; any number
of additional partitions may follow.)
Keypoint              Parent
Pelvis                None, as this is the root
Left hip              Pelvis
Right hip             Pelvis
Torso                 Pelvis
Left knee             Left hip
Right knee            Right hip
Neck                  Torso
Left ankle            Left knee
Right ankle           Right knee
Left big toe          Left ankle
Right big toe         Right ankle
Left small toe        Left ankle
Right small toe       Right ankle
Left heel             Left ankle
Right heel            Right ankle
Nose                  Neck
Left eye              Nose
Right eye             Nose
Left ear              Nose
Right ear             Nose
Left shoulder         Neck
Right shoulder        Neck
Left elbow            Left shoulder
Right elbow           Right shoulder
Left wrist            Left elbow
Right wrist           Right elbow
Left pinky knuckle    Left wrist
Right pinky knuckle   Right wrist
Left middle tip       Left wrist
Right middle tip      Right wrist
Left index knuckle    Left wrist
Right index knuckle   Right wrist
Left thumb tip        Left wrist
Right thumb tip       Right wrist
B.2. NvAR_Parameter_Output(KeyPoints) Order
Here is some information about the order of the keypoints.
The keypoint order of the output from NvAR_Parameter_Output(KeyPoints) is the same
as described in 34 Keypoints of Body Pose Tracking.
With the SDK comes a default face model that is used by the face fitting feature. This model
is a modification of the ICT Face Model (https://github.com/ICT-VGL/ICT-FaceKit). The
modified version of the face model is optimized for real-time face fitting applications and is
therefore of lower resolution. This model is the version face_model2.nvf. In addition to the
blendshapes provided by the ICT Model, it uses linear blendshapes for eye gaze expressions,
which enables it to be used in implicit gaze tracking.
In the following graphic, on the left is the original ICT face model topology, and on the right is
the modified face_model2.nvf face model topology.
Here is the face blendshape expression list that is used in the Face 3D Mesh feature and the
Facial Expression Estimation feature:
‣ 0: BrowDown_L
‣ 1: BrowDown_R
‣ 2: BrowInnerUp_L
‣ 3: BrowInnerUp_R
‣ 4: BrowOuterUp_L
‣ 5: BrowOuterUp_R
‣ 6: cheekPuff_L
‣ 7: cheekPuff_R
‣ 8: cheekSquint_L
‣ 9: cheekSquint_R
‣ 10: eyeBlink_L
‣ 11: eyeBlink_R
‣ 12: eyeLookDown_L
‣ 13: eyeLookDown_R
‣ 14: eyeLookIn_L
‣ 15: eyeLookIn_R
‣ 16: eyeLookOut_L
‣ 17: eyeLookOut_R
‣ 18: eyeLookUp_L
‣ 19: eyeLookUp_R
‣ 20: eyeSquint_L
‣ 21: eyeSquint_R
‣ 22: eyeWide_L
‣ 23: eyeWide_R
‣ 24: jawForward
‣ 25: jawLeft
‣ 26: jawOpen
‣ 27: jawRight
‣ 28: mouthClose
‣ 29: mouthDimple_L
‣ 30: mouthDimple_R
‣ 31: mouthFrown_L
‣ 32: mouthFrown_R
‣ 33: mouthFunnel
‣ 34: mouthLeft
‣ 35: mouthLowerDown_L
‣ 36: mouthLowerDown_R
‣ 37: mouthPress_L
‣ 38: mouthPress_R
‣ 39: mouthPucker
‣ 40: mouthRight
‣ 41: mouthRollLower
‣ 42: mouthRollUpper
‣ 43: mouthShrugLower
‣ 44: mouthShrugUpper
‣ 45: mouthSmile_L
‣ 46: mouthSmile_R
‣ 47: mouthStretch_L
‣ 48: mouthStretch_R
‣ 49: mouthUpperUp_L
‣ 50: mouthUpperUp_R
‣ 51: noseSneer_L
‣ 52: noseSneer_R
The items in the list above map to the ARKit blendshapes as follows:
‣ A27_Jaw_Left = jawLeft
‣ A28_Jaw_Right = jawRight
‣ A29_Mouth_Funnel = mouthFunnel
‣ A30_Mouth_Pucker = mouthPucker
‣ A31_Mouth_Left = mouthLeft
‣ A32_Mouth_Right = mouthRight
‣ A33_Mouth_Roll_Upper = mouthRollUpper
‣ A34_Mouth_Roll_Lower = mouthRollLower
‣ A35_Mouth_Shrug_Upper = mouthShrugUpper
‣ A36_Mouth_Shrug_Lower = mouthShrugLower
‣ A37_Mouth_Close = mouthClose
‣ A38_Mouth_Smile_Left = mouthSmile_L
‣ A39_Mouth_Smile_Right = mouthSmile_R
‣ A40_Mouth_Frown_Left = mouthFrown_L
‣ A41_Mouth_Frown_Right = mouthFrown_R
‣ A42_Mouth_Dimple_Left = mouthDimple_L
‣ A43_Mouth_Dimple_Right = mouthDimple_R
‣ A44_Mouth_Upper_Up_Left = mouthUpperUp_L
‣ A45_Mouth_Upper_Up_Right = mouthUpperUp_R
‣ A46_Mouth_Lower_Down_Left = mouthLowerDown_L
‣ A47_Mouth_Lower_Down_Right = mouthLowerDown_R
‣ A48_Mouth_Press_Left = mouthPress_L
‣ A49_Mouth_Press_Right = mouthPress_R
‣ A50_Mouth_Stretch_Left = mouthStretch_L
‣ A51_Mouth_Stretch_Right = mouthStretch_R
‣ A52_Tongue_Out = 0
The documentation may refer to different coordinate systems where virtual 3D objects are
defined. This appendix defines the different coordinate spaces that can be interfaced with
through the SDK.
There are two different flavors of this coordinate system: on-pixel and inter-pixel.
In the on-pixel system, the origin point is located on the center of the pixel. In the inter-pixel
system, the origin is offset from the center of the pixel by a distance of -0.5 pixels along the
x-axis and the y-axis. The on-pixel system should be used for integer-based coordinates, while
the inter-pixel system should be used for real-valued coordinates (usually defined as
floating-point coordinates).
NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.
Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed
in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any
customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed
either directly or indirectly by this document.
NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications
where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA
accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.
NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product
is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document,
ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of
the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional
or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem
which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.
No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document.
Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty
or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party,
or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.
Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance
with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.
THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS
(TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR
OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND
FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING
WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF
THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products
described herein shall be limited in accordance with the Terms of Sale for the product.
VESA DisplayPort
DisplayPort and DisplayPort Compliance Logo, DisplayPort Compliance Logo for Dual-mode Sources, and DisplayPort Compliance Logo for Active Cables are
trademarks owned by the Video Electronics Standards Association in the United States and other countries.
HDMI
HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.
ARM
ARM, AMBA and ARM Powered are registered trademarks of ARM Limited. Cortex, MPCore and Mali are trademarks of ARM Limited. All other brands or product
names are the property of their respective holders. "ARM" is used to represent ARM Holdings plc; its operating company ARM Limited; and the regional subsidiaries
ARM Inc.; ARM KK; ARM Korea Limited.; ARM Taiwan Limited; ARM France SAS; ARM Consulting (Shanghai) Co. Ltd.; ARM Germany GmbH; ARM Embedded
Technologies Pvt. Ltd.; ARM Norway, AS and ARM Sweden AB.
OpenCL
OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.
Copyright
© 2021-2023 NVIDIA Corporation and affiliates. All rights reserved.