build and install onnx-tensorrt
environment
- Jetson Xavier NX
- JetPack 4.4
uname -a
Linux jetson xaveir-nx 4.9.140-tegra #1 SMP PREEMPT Tue Oct 27 21:02:46 PDT 2020 aarch64 aarch64 aarch64 GNU/Linux
update cmake (3.13 or later is required for onnx-tensorrt)
- Install the packages needed to build CMake.
sudo apt install libssl-dev libprotobuf-dev protobuf-compiler # was necessary in my environment for ./bootstrap
- Build and install the latest release of CMake (a from-source build is sketched below).
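A typical from-source build looks roughly like this (the version below is only an example; any release of 3.13 or later works):
wget https://github.com/Kitware/CMake/releases/download/v3.19.2/cmake-3.19.2.tar.gz
tar xzf cmake-3.19.2.tar.gz
cd cmake-3.19.2
./bootstrap
make -j4
sudo make install
cmake --version   # should report 3.13 or later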
install onnx-tensorrt on jetson xavier nx
- Build onnx-tensorrt using the following steps
git clone --recursive -b 7.1 https://github.com/onnx/onnx-tensorrt.git
cd onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt -DCMAKE_INSTALL_PREFIX=/usr/
make -j8
sudo make install
※If you want to build with Docker, refer to the issue below.
https://github.com/onnx/onnx-tensorrt/issues/385
convert an onnx model to tensorrt
- How to execute
onnx2trt my_model.onnx -o my_engine.trt
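Useful options, per onnx2trt's help output (exact flags can vary by version): -b sets the max batch size, -w the max workspace size in bytes, -d 16 builds an FP16 engine, and -v increases verbosity, which helps locate the layer where a conversion fails.
onnx2trt my_model.onnx -o my_engine.trt -b 1 -d 16 -v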
install onnxruntime 1.4.0
wget https://nvidia.box.com/shared/static/8sc6j25orjcpl6vhq3a4ir8v219fglng.whl -O onnxruntime_gpu-1.4.0-cp36-cp36m-linux_aarch64.whl
pip3 install onnxruntime_gpu-1.4.0-cp36-cp36m-linux_aarch64.whl
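To confirm the wheel installed correctly, a quick version/device check (reporting GPU assumes the CUDA build loaded properly):
python3 -c "import onnxruntime; print(onnxruntime.__version__, onnxruntime.get_device())"  # expect: 1.4.0 GPU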
install onnx_tensorrt.backend
- Add CUDA to your PATH.
# Add a blank line, then these 2 lines
# (letting your Jetson know where CUDA is) to the bottom of the file.
sudo nano ~/.bashrc
export PATH=${PATH}:/usr/local/cuda/bin
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
source ~/.bashrc
# test that nvcc is on the path
nvcc --version
- To install onnx_tensorrt.backend on the Jetson Xavier NX, you need to patch the following two files in onnx-tensorrt. The differences are as follows.
--- a/NvOnnxParser.h
+++ b/NvOnnxParser.h
@@ -31,6 +31,10 @@
 #define NV_ONNX_PARSER_MINOR 1
 #define NV_ONNX_PARSER_PATCH 0
 
+#ifndef TENSORRTAPI
+#define TENSORRTAPI
+#endif // TENSORRTAPI
+
 static const int NV_ONNX_PARSER_VERSION = ((NV_ONNX_PARSER_MAJOR * 10000) + (NV_ONNX_PARSER_MINOR * 100) + NV_ONNX_PARSER_PATCH);
 
 //! \typedef SubGraph_t
diff --git a/setup.py b/setup.py
index 8ffa543..d6244a3 100644
--- a/setup.py
+++ b/setup.py
@@ -59,10 +59,11 @@ EXTRA_COMPILE_ARGS = [
     '-std=c++11',
     '-DUNIX',
     '-D__UNIX',
-    '-m64',
     '-fPIC',
     '-O2',
     '-w',
+    '-march=armv8-a+crypto',
+    '-mcpu=cortex-a57+crypto',
     '-fmessage-length=0',
     '-fno-strict-aliasing',
     '-D_FORTIFY_SOURCE=2',
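One way to apply it: save the diff to a file inside the onnx-tensorrt checkout and use git apply (the patch file name is a placeholder).
cd onnx-tensorrt
git apply xavier-nx.patch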
- After the above preparation, I was able to install it with the following commands.
cd onnx-tensorrt
sudo apt install swig
pip3 install pycuda
sudo python3 setup.py install
- In python3, if import onnx_tensorrt.backend as backend succeeds, the installation was successful.
python3
>>> import onnx_tensorrt.backend as backend
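A minimal inference check, following the usage shown in the onnx-tensorrt README (the model path and input shape below are placeholders):
import onnx
import numpy as np
import onnx_tensorrt.backend as backend

# Load the ONNX model and build a TensorRT engine from it.
model = onnx.load("my_model.onnx")                # placeholder path
engine = backend.prepare(model, device="CUDA:0")

# Run inference on dummy data; the input shape is an assumption.
input_data = np.random.random(size=(1, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data.shape)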
useful sites for onnx->TensorRT conversion
- As of January 2021, the TopK layer (2-D) and Double types cannot be converted to TensorRT, so you need to write workarounds for them (see the sketch below for the Double case).
PyTorch documentation: https://pytorch.org/docs/
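For the Double issue, the usual fix is to cast the model and its inputs to float32 before exporting to ONNX. A minimal PyTorch sketch (the toy model and shapes are placeholders):
import torch
import torch.nn as nn

# Toy model standing in for the real one; suppose it was built in double.
model = nn.Linear(4, 2).double()
model = model.float().eval()          # cast parameters/buffers to float32
dummy_input = torch.randn(1, 4)       # float32 example input
torch.onnx.export(model, dummy_input, "my_model.onnx", opset_version=11)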
- Makes it easy to find where an onnx->TensorRT conversion error occurs; you can search for the layer number with Ctrl+F.
- If you go to the NVIDIA forums, someone has usually run into a similar error, so it is a good starting point.
NVIDIA Developer Forums: https://forums.developer.nvidia.com/
- The onnx-tensorrt issues page is also active, with many good questions.
- onnx-simplifier cleans up ONNX models, so it can be worth using, but in this case it cleaned the model up so much that the TensorRT conversion stopped working.
- Being able to change a dynamically sized input to a static size with --input-shape may be handy at times (a sample invocation is sketched after this list).
- It did not install cleanly after the onnx-tensorrt environment had been set up on the Jetson Xavier NX, so I set up the environment on a separate PC instead.
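Assuming the tool is onnx-simplifier, a typical invocation with a fixed input shape looks like this (the file names and shape are placeholders):
pip3 install onnx-simplifier
python3 -m onnxsim my_model.onnx my_model_simplified.onnx --input-shape 1,3,224,224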
links