NVIDIA environment setup, again: CUDA, cuDNN, TensorRT

Setting up the environment yet again, which is a pain.
The main reason is that the Docker image ships without cuDNN or TensorRT, so they have to be installed by hand.

Ubuntu 20.04 (Linux): installing CUDA 12 + cuDNN via deb packages, with verification (beginner-friendly)
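A quick sanity check after that install (a sketch only; the cuDNN header path below assumes a deb install of cuDNN 8.x and may differ for other install methods):

```bash
# CUDA toolkit version as reported by the compiler
nvcc --version
# cuDNN version from the header installed by the deb packages
grep -A 2 "#define CUDNN_MAJOR" /usr/include/cudnn_version.h
```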

2. TensorRT installation and verification

Download link:
https://developer.nvidia.com/nvidia-tensorrt-download

2.1 Installing from the tar package

  • 1. Extract the downloaded archive

tar -xzvf TensorRT-8.0.0.3.Linux.x86_64-gnu.cuda-11.0.cudnn8.2.tar.gz

  • 2. Add the environment variables (a quick verification sketch follows the sample description below)

gedit ~/.bashrc   # any editor works; sudo is not needed for your own ~/.bashrc
export PATH=/PATH/TO/TensorRT-8.0.0.3/bin:${PATH}
export LD_LIBRARY_PATH=/PATH/TO/TensorRT-8.0.0.3/lib:${LD_LIBRARY_PATH}
source ~/.bashrc

  • 3. Verify
    Run the bundled sampleMNIST example. Change into its directory:
    cd /home/sxhlvye/Downloads/TensorRT-8.0.0.3/samples/sampleMNIST
    Then simply type make to build it. Once the build finishes, the compiled executable appears under /home/sxhlvye/Downloads/TensorRT-8.0.0.3/bin, where it can be run with:
    ./sample_mnist

This bundled example demonstrates how TensorRT speeds up inference of a Caffe model on a single image. With the default arguments it loads deploy.prototxt, mnist.caffemodel, and mnist_mean.binaryproto from /home/sxhlvye/Downloads/TensorRT-8.0.0.3/data/mnist and predicts the label of an image in that same directory.
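To confirm that the environment variables from step 2 actually take effect, a minimal check is to see where the shell finds the TensorRT tools and which libnvinfer the freshly built sample links against (a sketch; it assumes trtexec ships prebuilt in the tar's bin directory, as in recent releases):

```bash
# trtexec should resolve to the TensorRT bin directory added to PATH
which trtexec
# the sample should pick up libnvinfer from the tar's lib directory via LD_LIBRARY_PATH
cd /home/sxhlvye/Downloads/TensorRT-8.0.0.3/bin
ldd ./sample_mnist | grep nvinfer
```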

2.1.2 Installing the TensorRT Python bindings

[Ubuntu] TensorRT installation tutorial (tar package method)
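The tar package also ships Python wheels under its python/ directory. A minimal sketch, assuming Python 3.8 and the TensorRT-8.0.0.3 layout (the exact wheel file name depends on your Python version):

```bash
cd /home/sxhlvye/Downloads/TensorRT-8.0.0.3/python
# pick the wheel matching your Python version (cp38 = Python 3.8, assumed here)
python3 -m pip install tensorrt-8.0.0.3-cp38-none-linux_x86_64.whl
# verify the bindings import and report the expected version
python3 -c "import tensorrt; print(tensorrt.__version__)"
```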

2.2 Installing from deb packages

Warning: installing via deb packages can pull in a different CUDA version!

The deb method still needs further investigation; the usual flow is sketched below for reference.
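A rough sketch only: the repo .deb file name and key path below are placeholders that depend on the exact TensorRT/CUDA/Ubuntu versions you download, and apt may upgrade CUDA as warned above.

```bash
# install the local repo package downloaded from the TensorRT download page (placeholder name)
sudo dpkg -i nv-tensorrt-repo-ubuntu2004-cudaXX.X-trt8.x.x.x-ga-yyyymmdd_1-1_amd64.deb
# register the repo key (the exact path is printed by the dpkg step above)
sudo apt-key add /var/nv-tensorrt-repo-*/7fa2af80.pub
sudo apt-get update
# meta-package that pulls in the C++ libraries, headers and samples
sudo apt-get install tensorrt
```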

3. CUDA 10.1.243 compatible cuDNN versions

Matching CUDA and cuDNN versions matters for avoiding runtime errors and performance problems. For CUDA 10.1 update 2 (build 10.1.243), NVIDIA documents compatibility with cuDNN 7.6.x, where x is a minor update within the same release line. The full list of supported combinations is in the [NVIDIA CUDA Toolkit Release Notes](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html).

When configuring software that depends on both CUDA and cuDNN (OpenPose, or MMDetection builds that compile MMCV against a specific toolkit), the driver requirement and the installation paths both need to match the chosen versions. For example, with conda:

```bash
conda install pytorch cudatoolkit=10.1 torchvision -c pytorch
```

Pinning cudatoolkit=10.1 keeps the PyTorch build aligned with the installed driver and cuDNN.
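After such a conda install, a quick way to confirm which CUDA and cuDNN builds PyTorch actually sees (a sketch using standard PyTorch attributes):

```bash
python3 -c "import torch; print(torch.version.cuda, torch.backends.cudnn.version(), torch.cuda.is_available())"
```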