【TensorRT】execute_async VS execute_async_v2

execute_async and execute_async_v2 are the TensorRT APIs for asynchronous inference, and their official descriptions differ by only one sentence.

execute_async(self: tensorrt.tensorrt.IExecutionContext, batch_size: int = 1, bindings: List[int], stream_handle: int, input_consumed: capsule = None) → bool

Asynchronously execute inference on a batch. This method requires an array of input and output buffers. The mapping from tensor names to indices can be queried using ICudaEngine::get_binding_index().

Parameters

  • batch_size – The batch size. This is at most the value supplied when the ICudaEngine was built.

  • bindings – A list of integers representing input and output buffer addresses for the network.

  • stream_handle – A handle for a CUDA stream on which the inference kernels will be executed.

  • input_consumed – An optional event which will be signaled when the input buffers can be refilled with new data.

execute_async_v2(self: tensorrt.tensorrt.IExecutionContext, bindings: List[int], stream_handle: int, input_consumed: capsule = None) → bool

Asynchronously execute inference on a batch. This method requires an array of input and output buffers. The mapping from tensor names to indices can be queried using ICudaEngine::get_binding_index(). This method only works for execution contexts built from networks with no implicit batch dimension (see the build-time sketch after the parameter list).

Parameters

  • bindings – A list of integers representing input and output buffer addresses for the network.

  • stream_handle – A handle for a CUDA stream on which the inference kernels will be executed.

  • input_consumed – An optional event which will be signaled when the input buffers can be refilled with new data.
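That one extra sentence is the real difference: execute_async drives engines built in implicit-batch mode, while execute_async_v2 requires an engine built from an explicit-batch network. A minimal build-time sketch of that distinction (the TensorRT 7.x Python API is assumed here, and the network contents are omitted):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)

# Implicit batch: the batch dimension is supplied at runtime via
# execute_async(batch_size=...).
implicit_net = builder.create_network()

# Explicit batch: the batch dimension is part of the network definition,
# and the resulting engine is driven with execute_async_v2.
flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
explicit_net = builder.create_network(flag)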

But what I really wanted to know is the speed difference in actual use, so I ran a simple comparison: inferring the same point-cloud frame with the RPN network from PointPillars. Experiment 1 keeps the stream.synchronize() call; Experiment 2 comments out stream.synchronize(). The results are compared below.
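The inputs, outputs, bindings, and stream objects used by the two functions below are not shown in the post; they follow the usual pycuda allocation pattern from NVIDIA's samples. A minimal sketch, assuming a pycuda setup (the HostDeviceMem helper and allocate_buffers function are my reconstruction, not the original code); the same imports also cover the two timing functions:

import time

import pycuda.autoinit  # noqa: F401  (creates a CUDA context on import)
import pycuda.driver as cuda
import tensorrt as trt

class HostDeviceMem:
    """Pairs a pagelocked host buffer with its device counterpart."""
    def __init__(self, host, device):
        self.host = host
        self.device = device

def allocate_buffers(engine):
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:  # iterates over binding names
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)  # pinned host memory
        device_mem = cuda.mem_alloc(host_mem.nbytes)   # matching GPU buffer
        bindings.append(int(device_mem))               # raw pointer for execute_async*
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream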

def inference_sync(context, bindings, inputs, outputs, stream, batch_size=1):
    start = time.time()
    # Transfer input data to the GPU.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference (implicit-batch API).
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream.
    stream.synchronize()
    print("time", time.time() - start)
    # Return only the host outputs.
    return [out.host for out in outputs]

def inference_async_v2(context, bindings, inputs, outputs, stream):
    start = time.time()
    # Transfer input data to the GPU.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference (explicit-batch API).
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream.
    stream.synchronize()
    print("v2 time", time.time() - start)
    # Return only the host outputs.
    return [out.host for out in outputs]
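For reference, a sketch of how the comparison can be driven (the engine file name rpn.trt and the run count of 5 are my placeholders; whether a single engine accepts both calls depends on how the network was created, as the quoted docs note, so this assumes an engine that tolerates both, as in the experiment above):

with open("rpn.trt", "rb") as f, trt.Runtime(trt.Logger(trt.Logger.WARNING)) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

inputs, outputs, bindings, stream = allocate_buffers(engine)
# Fill inputs[i].host with the preprocessed point-cloud features here.

for _ in range(5):
    inference_sync(context, bindings, inputs, outputs, stream)
for _ in range(5):
    inference_async_v2(context, bindings, inputs, outputs, stream)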

Experiment 1: with stream.synchronize()

Run   execute_async (s)        execute_async_v2 (s)
1     0.01885676383972168      0.012494564056396484
2     0.012447595596313477     0.012444019317626953
3     0.012630224227905273     0.012173175811767578
4     0.01241612434387207      0.01211094856262207
5     0.012379646301269531     0.01217961311340332

(Note: the times are reported in seconds, since they come from time.time() deltas.)

Experiment 2: without stream.synchronize()

Run   execute_async (s)        execute_async_v2 (s)
1     0.006377458572387695     0.012206554412841797
2     0.006362199783325195     0.012171268463134766
3     0.0064013004302978516    0.012173175811767578
4     0.006360769271850586     0.01211094856262207
5     0.006306886672973633     0.01217961311340332

From the numbers: with stream.synchronize(), both APIs take about 12 ms of wall-clock time per frame (the slower first execute_async run looks like warm-up). Without the synchronization, the measurement only covers enqueuing the copies and kernels on the stream: execute_async returns in roughly 6 ms, while execute_async_v2 still needs about 12 ms before returning. In other words, for this network execute_async_v2 spends more time on the host side per call, but once the stream is synchronized, the end-to-end inference time of the two APIs is the same.

【References】

https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/ExecutionContext.html
