Code: https://github.com/hustvl/VAD
Paper: https://arxiv.org/pdf/2303.12077
In earlier posts I walked through the VAD paper and its code. This article converts the trained PyTorch model into an ONNX model, converts the ONNX model into a TensorRT engine, and finally completes deployment in both Python and C++. For a comparison of the different model formats, see "PyTorch deep learning model inference and deployment". The main changes involve replacing operators that ONNX export does not support and computations done in native Python/numpy.
1. Environment Setup
Follow the workflow in the GitHub repo for environment setup and data preparation. Python is 3.8, PyTorch is 1.9.1+cu111, torchvision is 0.10.1+cu111 (the original authors' versions), TensorRT is 10.0.0.6b, and ONNX Runtime is onnxruntime-gpu 1.19. Reference version numbers for some of the dependencies:
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
mmcv-full==1.4.0
numpy==1.23.5
numba==0.48.0
llvmlite==0.31.0
mmdet==2.14.0
mmsegmentation==0.14.1
matplotlib==3.5.3
opencv-python==4.11.0.86
pandas==1.3.5
huggingface-hub==0.33.0
imageio==2.35.1
pillow==10.4.0
scikit-image==0.19.3
scikit-learn==1.3.2
scipy==1.10.1
setuptools==59.5.0
Shapely==1.8.5
six==1.17.0
timm==1.0.15
yapf==0.30.0
similaritymeasures==1.3.0
Run the validation and inference code. If you hit version `GLIBCXX_3.4.30' not found, the system's libstdc++ was loaded and its version does not match; preload the copy in the conda environment with LD_PRELOAD=$CONDA_PREFIX/lib/libstdc++.so.6.
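To confirm which GLIBCXX versions each copy of libstdc++ actually provides, here is a minimal Python sketch (equivalent to strings libstdc++.so.6 | grep GLIBCXX; the system path is illustrative and varies across distributions):

import os, pathlib, re

# List the GLIBCXX version strings embedded in a libstdc++ binary.
def glibcxx_versions(path):
    data = pathlib.Path(path).read_bytes()
    return sorted(set(m.decode() for m in re.findall(rb"GLIBCXX_[0-9.]+", data)))

print(glibcxx_versions("/usr/lib/x86_64-linux-gnu/libstdc++.so.6"))           # system copy
print(glibcxx_versions(os.environ["CONDA_PREFIX"] + "/lib/libstdc++.so.6"))   # conda copy

If the required version appears only in the conda copy, use the preloaded variant of the test command below: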
CUDA_VISIBLE_DEVICES=0 python tools/test.py projects/configs/VAD/VAD_tiny_e2e.py ./models/VAD_tiny.pth --launcher none --eval bbox --tmpdir tmp
or
LD_PRELOAD=$CONDA_PREFIX/lib/libstdc++.so.6 CUDA_VISIBLE_DEVICES=0 python tools/test.py projects/configs/VAD/VAD_tiny_e2e.py ./models/VAD_tiny.pth --launcher none --eval bbox --tmpdir tmp
Also, if you only have image data and no point-cloud data, just edit the config file: remove the LoadPointsFromFile entry and drop "points" from keys. Note that the img_norm_cfg used to train the authors' released weights differs from the one in the code (the released weights expect Caffe-style preprocessing: mean subtracted in BGR order, std of 1, to_rgb=False), so if you use the authors' pretrained weights, be sure to change these parameters:
test_pipeline = [
    dict(type='LoadMultiViewImageFromFiles', to_float32=True),
    # dict(type='LoadPointsFromFile',
    #      coord_type='LIDAR',
    #      load_dim=5,
    #      use_dim=5,
    #      file_client_args=file_client_args),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True, with_attr_label=True),
    dict(type='CustomObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='CustomObjectNameFilter', classes=class_names),
    dict(type='NormalizeMultiviewImage', **img_norm_cfg),
    # dict(type='PadMultiViewImage', size_divisor=32),
    dict(
        type='MultiScaleFlipAug3D',
        img_scale=(1600, 900),
        pts_scale_ratio=1,
        flip=False,
        transforms=[
            dict(type='RandomScaleImageMultiViewImage', scales=[0.4]),
            dict(type='PadMultiViewImage', size_divisor=32),
            dict(type='CustomDefaultFormatBundle3D', class_names=class_names, with_label=False, with_ego=True),
            dict(type='CustomCollect3D',
                 keys=['gt_bboxes_3d', 'gt_labels_3d', 'img', 'fut_valid_flag',
                       'ego_his_trajs', 'ego_fut_trajs', 'ego_fut_masks', 'ego_fut_cmd',
                       'ego_lcf_feat', 'gt_attr_labels'])])
]
# img_norm_cfg = dict(
#     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_norm_cfg = dict(
    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
The final test results are as follows:
LD_PRELOAD=$CONDA_PREFIX/lib/libstdc++.so.6 CUDA_VISIBLE_DEVICES=0 python tools/test.py projects/configs/VAD/VAD_tiny_e2e.py ./models/VAD_tiny.pth --launcher none --eval bbox --tmpdir tmp
projects.mmdet3d_plugin
WARNING!!!!, Only can be used for obtain inference speed!!!!
load checkpoint from local path: ./models/VAD_tiny.pth
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 6019/6019, 7.8 task/s, elapsed: 776s, ETA: 0s
-------------- Motion Prediction --------------
EPA_car: 0.5981434086669848
EPA_pedestrian: 0.2900418799441601
ADE_car: 0.7867509126663208
ADE_pedestrian: 0.748363733291626
FDE_car: 1.0752904415130615
FDE_pedestrian: 0.9466288089752197
MR_car: 0.12123025370990904
MR_pedestrian: 0.09382577877261705
-------------- Planning --------------
gt_car:4.503418636452432
gt_pedestrian:2.099042781793319
cnt_ade_car:3.430357491697597
cnt_ade_pedestrian:1.1779644461808947
cnt_fde_car:3.264700136745458
cnt_fde_pedestrian:1.0472748583707756
hit_car:2.8689197108810314
hit_pedestrian:0.9490134791951553
fp_car:0.29576089079898416
fp_pedestrian:0.5647587419417855
ADE_car:2.7831785678863525
ADE_pedestrian:0.8992874622344971
FDE_car:3.510500907897949
FDE_pedestrian:0.991380512714386
MR_car:0.39578042586442663
MR_pedestrian:0.09826137917562024
plan_L2_1s:0.46271315561940024
plan_L2_2s:0.7634988678673761
plan_L2_3s:1.1215393952043804
plan_obj_col_1s:0.0
plan_obj_col_2s:4.883766360617308e-05
plan_obj_col_3s:6.511688674886516e-05
plan_obj_box_col_1s:0.0020511818714592693
plan_obj_box_col_2s:0.0034674741160382887
plan_obj_box_col_3s:0.005795402842053307
fut_valid_flag:1.0
Formating bboxes of pts_bbox
Start to convert detection format...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 6019/6019, 94.3 task/s, elapsed: 64s, ETA: 0s
data/nuscenes/nuscenes_map_anns_val.json exist, not update
Results writes to test/VAD_tiny_e2e/Fri_Jul_11_15_32_27_2025/pts_bbox/results_nusc.pkl
Evaluating bboxes of pts_bbox
mAP: 0.2698
mATE: 0.7018
mASE: 0.2977
mAOE: 0.6352
mAVE: 0.6103
mAAE: 0.2091
NDS: 0.3895
Eval time: 17.9s
Per-class results:
Object Class AP ATE ASE AOE AVE AAE
car 0.528 0.447 0.155 0.061 0.579 0.293
truck 0.209 0.644 0.229 0.150 0.600 0.335
bus 0.373 0.727 0.268 0.103 0.701 0.168
trailer 0.017 0.986 0.324 1.145 0.813 0.090
construction_vehicle 0.111 0.962 0.511 1.490 0.281 0.366
pedestrian 0.322 0.645 0.298 0.631 0.476 0.208
motorcycle 0.150 0.701 0.251 0.783 1.218 0.209
bicycle 0.230 0.578 0.276 1.228 0.216 0.005
traffic_cone 0.412 0.604 0.362 nan nan nan
barrier 0.345 0.724 0.303 0.127 nan nan
2. Exporting the ONNX Model
ONNX export works by tracing tensor operations and does not support native Python operations. Some of the inputs in the code are passed in as numpy arrays and need to be rewritten as tensors. Also, since PyTorch 1.9 only supports up to opset_version=13, some operators have to be replaced as well.
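Two concrete illustrations of these fixes (all names here are hypothetical sketches, not VAD's actual code). First, a value that detours through numpy or Python scalars gets constant-folded into the traced graph, so it no longer reacts to new inputs at inference time; keeping the computation in tensor ops avoids this:

import torch

def shift_bad(can_bus):
    # .item() leaves the traced graph; the result is baked in as a constant
    dx, dy = can_bus[0].item(), can_bus[1].item()
    return torch.tensor([dx, dy]) * 0.5

def shift_good(can_bus):
    # pure tensor ops stay in the traced graph and follow the runtime input
    return can_bus[:2] * 0.5

Second, on the operator side, a frequent opset-13 failure is torch.atan2 (commonly used for heading angles in 3D detection heads), which has no ONNX mapping; a quadrant-aware rewrite from exportable primitives is a standard workaround:

import math
import torch

def atan2_onnx(y, x):
    # torch.atan only covers the x > 0 half-plane; patch the other
    # quadrants with torch.where. The eps avoids division by zero.
    ans = torch.atan(y / (x + 1e-12))
    ans = torch.where((x < 0) & (y >= 0), ans + math.pi, ans)
    ans = torch.where((x < 0) & (y < 0), ans - math.pi, ans)
    return ans

Whether these exact spots need changing depends on the model code; they only show the pattern of the fix.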
First, the ONNX export code is as follows; place it in test.py right after the model is built.
def export_to_onnx(model, onnx_path):
    model.eval()
    with torch.no_grad():
        # Build dummy inputs (shapes must match the real model)
        dummy_img = torch.randn(1, 6, 3, 384, 640)          # B=1, N=6 cameras, C=3, H=384, W=640
        dummy_meta_img_shape = torch.tensor([[384, 640, 3]] * 6)  # (6, 3)
        dummy_meta_lidar2img = torch.randn(6, 4, 4)         # one 4x4 projection per view
        dummy_meta_can_bus = torch.randn(18)                # 1-D CAN bus signal
        dummy_ego_his_trajs = torch.randn(1, 1, 2, 2)       # unused
        dummy_ego_lcf_feat = torch.randn(1, 1, 1, 9)        # unused
        dummy_prev_bev = torch.zeros(10000, 1, 256)         # flattened BEV of shape (bev_h*bev_w, B, C)
        input_names = [
            "img", "meta_img_shape", "meta_lidar2img", "meta_can_bus",
            "ego_his_trajs", "ego_lcf_feat", "prev_bev"
        ]
        inputs = (
            dummy_img.to(torch.float32),
            dummy_meta_img_shape.to(torch.int32),
            dummy_meta_lidar2img.to(torch.float32),
            dummy_meta_can_bus.to(torch.float32),
            dummy_ego_his_trajs.to(torch.float32),
            dummy_ego_lcf_feat.to(torch.float32),
            dummy_prev_bev.to(torch.float32)
        )
        # Output names
        output_names = [
            'bev_embed',
            'all_cls_scores',
            'all_bbox_preds',
            'all_traj_preds',
            'all_traj_cls_scores',
            'map_all_cls_scores',
            'map_all_bbox_preds',
            'map_all_pts_preds',