I have a model in MXNet that I exported to ONNX and then imported into TensorRT.
I’m using onnx-tensorrt (https://github.com/onnx/onnx-tensorrt) to run the inference.
I get an output after running:
- trt_outputs = common.do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)
I also get an output when I do a forward pass in MXNet (in that output I find the bbox values for the faces).
Question: how can I convert TensorRT’s inference output to match MXNet’s inference output, so I can classify the faces using the bboxes?
Alternatively, maybe I’m not looking in the right place and should ignore MXNet’s output and interpret the ONNX model’s output instead? (I verified that the ONNX model produces the same output.)
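For what it’s worth, the `common.do_inference` helper from the TensorRT samples returns each output as a flat 1-D NumPy array (one per output binding), whereas MXNet’s forward pass gives N-D NDArrays. A minimal sketch of reshaping the flat TensorRT buffers back into the shapes MXNet reports, so the two outputs become directly comparable (the output names and shapes below are made-up placeholders, not your model’s real ones):

```python
import numpy as np

def reshape_trt_outputs(trt_outputs, output_shapes):
    """Reshape flat TensorRT host buffers into N-D arrays.

    trt_outputs:   list of 1-D numpy arrays, as returned by common.do_inference
    output_shapes: list of target shapes, e.g. copied from the MXNet outputs
                   (mx_out.shape for each output), in the same binding order.
    """
    return [out.reshape(shape) for out, shape in zip(trt_outputs, output_shapes)]

# Hypothetical example: a detector head emitting class scores and bbox coords.
flat_scores = np.arange(2 * 4, dtype=np.float32)       # stand-in for a TRT buffer
flat_boxes = np.arange(2 * 4 * 4, dtype=np.float32)    # stand-in for a TRT buffer

scores, boxes = reshape_trt_outputs(
    [flat_scores, flat_boxes],
    [(1, 2, 4), (1, 2, 4, 4)],  # shapes taken from the MXNet forward pass
)
print(scores.shape, boxes.shape)  # (1, 2, 4) (1, 2, 4, 4)
```

Once reshaped, the TensorRT arrays should match the MXNet outputs up to numerical tolerance (e.g. via `np.allclose`), so the same bbox-decoding code you use on the MXNet side can be applied unchanged.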