How to build MXNet with TensorRT support?

Hi, I noticed the USE_TENSORRT option in CMakeLists.txt and tried to compile MXNet from source with the command below:

cmake -GNinja -DUSE_CUDA=ON -DUSE_MKL_IF_AVAILABLE=OFF -DUSE_OPENCV=ON -DUSE_CUDNN=ON -DUSE_TENSORRT=ON ..

But I got the error messages below:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
ONNX_LIBRARY
    linked by target "im2rec" in directory /home/lhy/Documents/Lib/incubator-mxnet
    linked by target "mxnet" in directory /home/lhy/Documents/Lib/incubator-mxnet
    linked by target "mxnet_unit_tests" in directory /home/lhy/Documents/Lib/incubator-mxnet/tests
    linked by target "mlp_cpu" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "lenet_with_mxdataiter" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "alexnet" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "mlp_gpu" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "lenet" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "googlenet" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "charRNN" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "inception_bn" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "mlp" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "resnet" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "image-classification-predict" in directory /home/lhy/Documents/Lib/incubator-mxnet/example/image-classification/predict-cpp
ONNX_PROTO_LIBRARY
    linked by target "im2rec" in directory /home/lhy/Documents/Lib/incubator-mxnet
    linked by target "mxnet" in directory /home/lhy/Documents/Lib/incubator-mxnet
    linked by target "mxnet_unit_tests" in directory /home/lhy/Documents/Lib/incubator-mxnet/tests
    linked by target "mlp_cpu" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "lenet_with_mxdataiter" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "alexnet" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "mlp_gpu" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "lenet" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "googlenet" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "charRNN" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "inception_bn" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "mlp" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "resnet" in directory /home/lhy/Documents/Lib/incubator-mxnet/cpp-package/example
    linked by target "image-classification-predict" in directory /home/lhy/Documents/Lib/incubator-mxnet/example/image-classification/predict-cpp

I did not find any related documentation on this page. Any help is appreciated.

Install several things: protobuf, the TensorRT library, onnx, and onnx-tensorrt.

Follow these two links:


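(The later replies suggest these links point at the MXNet CI scripts ci/docker/install/tensorrt.sh and ci/docker/runtime_functions.sh.) In outline, here is a rough sketch of the dependency build on Ubuntu, assuming CUDA and cuDNN are already installed; versions, paths, and package names are illustrative, not authoritative:

# 1. protobuf: onnx needs protoc and the protobuf libraries.
git clone --recursive https://github.com/protocolbuffers/protobuf.git
cd protobuf
./autogen.sh && ./configure && make -j$(nproc)
sudo make install && sudo ldconfig

# 2. TensorRT: install NVIDIA's packages, e.g. via apt once the NVIDIA repo is set up.
sudo apt-get install libnvinfer-dev

# 3. onnx and onnx-tensorrt: MXNet vendors onnx-tensorrt (with onnx as a submodule)
#    under 3rdparty, so it can be built in place.
cd incubator-mxnet/3rdparty/onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=/usr   # point TENSORRT_ROOT at your TensorRT install prefix
make -j$(nproc) && sudo make install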
Thank you. The problem is solved now.

Hi @fullfanta, thank you for your instructions. I am following the links to build MXNet with TensorRT on a Jetson TX2, but there's a type-conversion error when building onnx-tensorrt. Can you give me some suggestions for solving this issue? Thanks a lot.

My system has CUDA 9.0, cuDNN 7.0, TensorRT 4.0, and libnvinfer 4.1.3.

The CMake summary is as follows:

-- ******** Summary ********
--   CMake version         : 3.5.1
--   CMake command         : /usr/bin/cmake
--   System                : Linux
--   C++ compiler          : /tmp/ccache-redirects/g++
--   C++ compiler version  : 5.4.0
--   CXX flags             :  -Wall -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /usr/local
--   CMAKE_MODULE_PATH     : 
-- 
--   ONNX version          : 1.3.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
-- 
--   Protobuf compiler     : /usr/local/bin/protoc
--   Protobuf includes     : /usr/local/include
--   Protobuf libraries    : optimized;/usr/local/lib/libprotobuf.a;debug;/usr/local/lib/libprotobuf.a;-pthread
--   BUILD_ONNX_PYTHON     : OFF
-- Found CUDA: /usr/local/cuda-9.0 (found version "9.0") 
-- Found CUDNN: /usr/include  
-- Found TensorRT headers at /usr/include/aarch64-linux-gnu
-- Find TensorRT libs at /usr/lib/aarch64-linux-gnu/libnvinfer.so;/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
-- Found TENSORRT: /usr/include/aarch64-linux-gnu  
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/build

The error message is as follows:

[ 73%] Linking CXX shared library libnvonnxparser_runtime.so
In file included from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:26:0:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt_utils.hpp: In function ‘bool onnx2trt::convert_onnx_weights(const onnx2trt_onnx::TensorProto&, onnx2trt::ShapedWeights*)’:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt_utils.hpp:230:61: error: invalid conversion from ‘int’ to ‘onnx2trt::ShapedWeights::DataType {aka onnx2trt_onnx::TensorProto_DataType}’ [-fpermissive]
   onnx2trt::ShapedWeights trt_weights(dtype, data_ptr, shape);
                                                             ^
In file included from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt.hpp:26:0,
                 from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ImporterContext.hpp:25,
                 from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.hpp:26,
                 from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:23:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ShapedWeights.hpp:38:12: note:   initializing argument 1 of ‘onnx2trt::ShapedWeights::ShapedWeights(onnx2trt::ShapedWeights::DataType, void*, nvinfer1::Dims)’
   explicit ShapedWeights(DataType type, void* values, nvinfer1::Dims shape_);
            ^
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp: In function ‘onnx2trt::Status onnx2trt::importInput(onnx2trt::ImporterContext*, const onnx2trt_onnx::ValueInfoProto&, nvinfer1::ITensor**)’:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:53:54: error: invalid conversion from ‘google::protobuf::int32 {aka int}’ to ‘onnx2trt_onnx::TensorProto::DataType {aka onnx2trt_onnx::TensorProto_DataType}’ [-fpermissive]
   ASSERT(convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype),
                                                      ^
In file included from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:26:0:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt_utils.hpp:115:13: note:   initializing argument 1 of ‘bool onnx2trt::convert_dtype(onnx2trt_onnx::TensorProto::DataType, nvinfer1::DataType*)’
 inline bool convert_dtype(::ONNX_NAMESPACE::TensorProto::DataType onnx_dtype,
             ^
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp: In member function ‘onnx2trt::Status onnx2trt::ModelImporter::importModel(const onnx2trt_onnx::ModelProto&, uint32_t, const onnxTensorDescriptorV1*)’:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:324:70: error: invalid conversion from ‘google::protobuf::int32 {aka int}’ to ‘onnx2trt_onnx::TensorProto::DataType {aka onnx2trt_onnx::TensorProto_DataType}’ [-fpermissive]
In file included from /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/ModelImporter.cpp:26:0:
/home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/onnx2trt_utils.hpp:115:13: note:   initializing argument 1 of ‘bool onnx2trt::convert_dtype(onnx2trt_onnx::TensorProto::DataType, nvinfer1::DataType*)’
 inline bool convert_dtype(::ONNX_NAMESPACE::TensorProto::DataType onnx_dtype,
             ^
CMakeFiles/nvonnxparser.dir/build.make:86: recipe for target 'CMakeFiles/nvonnxparser.dir/ModelImporter.cpp.o' failed
make[2]: *** [CMakeFiles/nvonnxparser.dir/ModelImporter.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
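
Based on the [-fpermissive] hint in the compiler output, one blunt workaround I am considering is letting GCC downgrade these invalid int-to-enum conversions to warnings, just to see whether the build completes (a sketch, not a proper fix; the clean fix would be a static_cast to TensorProto_DataType at the call sites):

cd /home/nvidia/Libs/mxnet/3rdparty/onnx-tensorrt/build
# -fpermissive turns the invalid conversions into warnings; it unblocks the
# build but hides the underlying int-vs-enum type mismatch.
cmake .. -DCMAKE_CXX_FLAGS="-fpermissive"
make -j4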

Hi, I met the same issue and followed @fullfanta's suggestion, but I still failed.
I built MXNet from source with the following steps:

1. Download MXNet.

2. Build TensorRT following https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/tensorrt.sh. Because my CUDA version is 10.1, my steps differ slightly from that file.
In this step I hit a new issue (see the apt sketch after this list): libnvinfer5 : Depends: libcublas10 but it is not installable

3. Go to the /home/jakin/Downloads/incubator-mxnet/3rdparty/onnx-tensorrt directory and build onnx and onnx-tensorrt following https://github.com/apache/incubator-mxnet/blob/master/ci/docker/runtime_functions.sh#L571.

4. Build MXNet with TensorRT using the same command as you.
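
For the libcublas10 problem in step 2, one thing I plan to try is enabling NVIDIA's CUDA apt repository (which is what provides libcublas10) before installing libnvinfer5. The repo path below is for Ubuntu 18.04 x86_64 and is illustrative:

# Enable NVIDIA's CUDA apt repo so apt can resolve libcublas10, then retry.
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" | sudo tee /etc/apt/sources.list.d/cuda.list
sudo apt-get update && sudo apt-get install libnvinfer5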

Is there any problem with these steps? Can you give me some suggestions? Thank you very much.