After training my own model with TLT, exporting it as an .etlt model, and moving it to a Jetson to convert it into a TensorRT engine, I got the following error:
[ERROR] UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
I was certain my KEY was correct, yet the error persisted. After some searching, I found that the fix is to replace a library file.
First, go to https://developer.nvidia.com/tlt-get-started and download the tlt-converter build for your Jetson platform. On Jetson, the tlt-converter tool converts an .etlt model directly into a TensorRT engine. Install it as follows:
sudo apt-get install libssl-dev
vi ~/.bashrc
export TRT_LIB_PATH=/usr/lib/aarch64-linux-gnu
export TRT_INC_PATH=/usr/include/aarch64-linux-gnu
source ~/.bashrc
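To confirm the variables are set in the new shell (a quick optional check), print them:
echo $TRT_LIB_PATH $TRT_INC_PATH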
Finally, put your tlt-converter binary into /usr/bin/ to complete the installation.
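For example, assuming the tlt-converter binary has been extracted into the current directory, the copy and permission steps look roughly like this:
sudo cp tlt-converter /usr/bin/
sudo chmod +x /usr/bin/tlt-converter
Running tlt-converter -h afterwards should print the usage message if the installation succeeded.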
Now for replacing the plugin. First, change into the library directory:
cd /usr/lib/aarch64-linux-gnu
Then look at your existing libnvinfer_plugin.so files; it is best to back them up first. I backed mine up like this:
sudo mv libnvinfer_plugin.so libnvinfer_plugin.so.bak
sudo mv libnvinfer_plugin.so.7 libnvinfer_plugin.so.7.bak
sudo mv libnvinfer_plugin.so.7.1.3 libnvinfer_plugin.so.7.1.3.bak
Next, download libnvinfer_plugin.so.7.1.3 from https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson/TRT7.1
and copy this file into /usr/lib/aarch64-linux-gnu:
sudo cp libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/
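Optionally, before creating the links, you can confirm the downloaded file really is an aarch64 shared library:
file /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3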
Then create two symbolic links:
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
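It may also help to refresh the dynamic linker cache so the replacement library is picked up:
sudo ldconfig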
After that, tlt-converter can convert the model normally.
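The exact flags depend on your own network, but as a rough sketch the conversion command looks something like the line below, where the key, input dimensions, output node names, and file paths are placeholders to replace with your own values:
tlt-converter -k YOUR_KEY -d 3,384,1248 -o OUTPUT_NODE_NAMES -t fp16 -e ./model.engine ./model.etlt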
References:
1. https://zongxp.blog.csdn.net/article/details/107709786
2. https://blog.csdn.net/hello_dear_you/article/details/111224823
3. https://forums.developer.nvidia.com/t/tlt-converter-works-on-detecnet-but-not-on-yolo/128900/2