How can I use TensorRT to improve the inference speed of an MXNet model? Thanks.
The easiest way to move an MXNet model to TensorRT is through ONNX: export your model to ONNX, then import the ONNX model into TensorRT. However, exporting from MXNet to ONNX is still work in progress, and the proposed API can be found here.
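Once an ONNX file exists, the import side is independent of MXNet. As a rough illustration, parsing an ONNX model with the TensorRT Python bindings looks something like the sketch below. The names (`trt.Builder`, `trt.OnnxParser`, `build_serialized_network`) follow recent TensorRT releases; older versions use different entry points, so treat this as a sketch under that assumption rather than a version-exact recipe.

```python
def build_trt_engine_from_onnx(onnx_path):
    """Sketch: parse an ONNX file and build a TensorRT engine.

    Assumes a TensorRT install whose Python bindings include the
    ONNX parser; exact signatures vary across TensorRT releases.
    """
    import tensorrt as trt  # imported lazily so the sketch is self-contained

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Recent TensorRT requires explicit-batch networks for ONNX models.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    # Returns a serialized engine; deserialize it at inference time.
    return builder.build_serialized_network(network, config)
```

The resulting serialized engine can be saved to disk and later deserialized with a TensorRT runtime for inference, so the (slow) build step only has to run once per model and target GPU.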
Hi @safrooze, as you say, exporting from MXNet to ONNX is WIP. If I want to do it now, is there another way?