How to build MXNet with TensorRT support?

Hi @fullfanta, thank you for your instructions. I am following the links to build MXNet with TensorRT on a Jetson TX2 machine, but there is an error about a type conversion when building onnx-tensorrt. Can you give me some suggestions for solving this issue? Thanks a lot.

My system has CUDA 9.0, cuDNN 7.0, TensorRT 4.0, and libnvinfer 4.1.3.
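In case it helps, the versions can be double-checked with a small program like the sketch below (version_check.cpp is a hypothetical name of mine; the version macros come from the cuda_runtime_api.h, cudnn.h, and NvInfer.h headers):

// version_check.cpp: print the CUDA/cuDNN/TensorRT versions the compiler actually sees.
// Build on the TX2 with the same include paths the project uses, e.g.:
//   g++ version_check.cpp -I/usr/local/cuda-9.0/include -I/usr/include/aarch64-linux-gnu
#include <cuda_runtime_api.h>  // defines CUDART_VERSION
#include <cudnn.h>             // defines CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL
#include <NvInfer.h>           // defines NV_TENSORRT_MAJOR / NV_TENSORRT_MINOR / NV_TENSORRT_PATCH
#include <iostream>

int main() {
  std::cout << "CUDA runtime: " << CUDART_VERSION << "\n"
            << "cuDNN:        " << CUDNN_MAJOR << "." << CUDNN_MINOR << "." << CUDNN_PATCHLEVEL << "\n"
            << "TensorRT:     " << NV_TENSORRT_MAJOR << "." << NV_TENSORRT_MINOR << "." << NV_TENSORRT_PATCH << "\n";
  return 0;
}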

The CMake configuration summary is as follows:

-- ******** Summary ********
--   CMake version         : 3.5.1
--   CMake command         : /usr/bin/cmake
--   System                : Linux
--   C++ compiler          : /tmp/ccache-redirects/g++
--   C++ compiler version  : 5.4.0
--   CXX flags             :  -Wall -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /usr/local
--   CMAKE_MODULE_PATH     : 
-- 
--   ONNX version          : 1.3.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
-- 
--   Protobuf compiler     : /usr/local/bin/protoc
--   Protobuf includes     : /usr/local/include
--   Protobuf libraries    : optimized;/usr/local/lib/libprotobuf.a;debug;/usr/local/lib/libprotobuf.a;-pthread
--   BUILD_ONNX_PYTHON     : OFF
-- Found CUDA: /usr/local/cuda-9.0 (found version "9.0") 
-- Found CUDNN: /usr/include  
-- Found TensorRT headers at /usr/include/aarch64-linux-gnu
-- Find TensorRT libs at /usr/lib/aarch64-linux-gnu/libnvinfer.so;/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
-- Found TENSORRT: /usr/include/aarch64-linux-gnu  
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/build

The error message is as follows:

[ 73%] Linking CXX shared library libnvonnxparser_runtime.so
In file included from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:26:0:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt_utils.hpp: In function ‘bool onnx2trt::convert_onnx_weights(const onnx2trt_onnx::TensorProto&, onnx2trt::ShapedWeights*)’:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt_utils.hpp:230:61: error: invalid conversion from ‘int’ to ‘onnx2trt::ShapedWeights::DataType {aka onnx2trt_onnx::TensorProto_DataType}’ [-fpermissive]
   onnx2trt::ShapedWeights trt_weights(dtype, data_ptr, shape);
                                                             ^
In file included from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt.hpp:26:0,
                 from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ImporterContext.hpp:25,
                 from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.hpp:26,
                 from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:23:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ShapedWeights.hpp:38:12: note:   initializing argument 1 of ‘onnx2trt::ShapedWeights::ShapedWeights(onnx2trt::ShapedWeights::DataType, void*, nvinfer1::Dims)’
   explicit ShapedWeights(DataType type, void* values, nvinfer1::Dims shape_);
            ^
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp: In function ‘onnx2trt::Status onnx2trt::importInput(onnx2trt::ImporterContext*, const onnx2trt_onnx::ValueInfoProto&, nvinfer1::ITensor**)’:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:53:54: error: invalid conversion from ‘google::protobuf::int32 {aka int}’ to ‘onnx2trt_onnx::TensorProto::DataType {aka onnx2trt_onnx::TensorProto_DataType}’ [-fpermissive]
   ASSERT(convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype),
                                                      ^
In file included from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:26:0:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt_utils.hpp:115:13: note:   initializing argument 1 of ‘bool onnx2trt::convert_dtype(onnx2trt_onnx::TensorProto::DataType, nvinfer1::DataType*)’
 inline bool convert_dtype(::ONNX_NAMESPACE::TensorProto::DataType onnx_dtype,
             ^
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp: In member function ‘onnx2trt::Status onnx2trt::ModelImporter::importModel(const onnx2trt_onnx::ModelProto&, uint32_t, const onnxTensorDescriptorV1*)’:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:324:70: error: invalid conversion from ‘google::protobuf::int32 {aka int}’ to ‘onnx2trt_onnx::TensorProto::DataType {aka onnx2trt_onnx::TensorProto_DataType}’ [-fpermissive]
In file included from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:26:0:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt_utils.hpp:115:13: note:   initializing argument 1 of ‘bool onnx2trt::convert_dtype(onnx2trt_onnx::TensorProto::DataType, nvinfer1::DataType*)’
 inline bool convert_dtype(::ONNX_NAMESPACE::TensorProto::DataType onnx_dtype,
             ^
CMakeFiles/nvonnxparser.dir/build.make:86: recipe for target 'CMakeFiles/nvonnxparser.dir/ModelImporter.cpp.o' failed
make[2]: *** [CMakeFiles/nvonnxparser.dir/ModelImporter.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
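If I read the errors right, the ONNX 1.3.0 protobuf accessors (e.g. onnx_tensor_type.elem_type()) now return a plain google::protobuf::int32, while onnx2trt::convert_dtype and the ShapedWeights constructor still take the TensorProto_DataType enum, so g++ rejects the implicit int-to-enum conversion. Below is a minimal self-contained sketch of the failing pattern and the explicit static_cast I am considering as a local workaround; the type and function names here are stand-ins for illustration, not the real onnx-tensorrt headers. Please let me know if pinning the ONNX submodule to an older revision (or building with -fpermissive) is the intended fix instead.

// cast_sketch.cpp: stand-in reproduction of the int -> enum conversion error.
#include <cstdint>
#include <iostream>

// Stand-in for onnx2trt_onnx::TensorProto_DataType from the generated proto code.
enum TensorProto_DataType { kFloat = 1, kInt32 = 6 };

// Stand-in for onnx2trt::convert_dtype, which expects the enum type.
bool convert_dtype(TensorProto_DataType dtype) { return dtype == kFloat; }

// Stand-in for the protobuf accessor, which returns a plain int32 in ONNX 1.3.0.
std::int32_t elem_type() { return 1; }

int main() {
  // convert_dtype(elem_type());  // error: invalid conversion from 'int32_t' to 'TensorProto_DataType'

  // Workaround: make the int -> enum conversion explicit at the call site.
  std::cout << convert_dtype(static_cast<TensorProto_DataType>(elem_type())) << "\n";
  return 0;
}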