I ran the example code and the inputs are the same, but the outputs differ when using TensorRT.
Thanks a lot for reporting this issue. Which model did you try? The one from the tutorial?
Can you retry with a different model, like vgg16?
The result of vgg16 is correct; I don't know why the other model differs.
Here is the testing code:
import os

import mxnet as mx
from mxnet.gluon.model_zoo import vision

# Create sample input
batch_shape = (1, 3, 224, 224)
input_data = mx.nd.ones(batch_shape)

# Export the pretrained model to a symbol + params checkpoint
model = vision.vgg16(pretrained=True)
model.hybridize()
model.forward(mx.nd.zeros(batch_shape))
model.export('model')
sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

# Execute with MXNet
os.environ['MXNET_USE_TENSORRT'] = '0'
executor = sym.simple_bind(ctx=mx.gpu(0), data=batch_shape,
                           grad_req='null', force_rebind=True)
executor.copy_params_from(arg_params, aux_params)
y_gen = executor.forward(is_train=False, data=input_data)
print('MXNet output')
print(y_gen[0].asnumpy())  # forward() returns a list of outputs

# Execute with TensorRT
os.environ['MXNET_USE_TENSORRT'] = '1'
arg_params.update(aux_params)
all_params = dict([(k, v.as_in_context(mx.gpu(0))) for k, v in arg_params.items()])
executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(0),
                                             all_params=all_params,
                                             data=batch_shape,
                                             grad_req='null',
                                             force_rebind=True)
y_gen = executor.forward(is_train=False, data=input_data)
print('TensorRT output')
print(y_gen[0].asnumpy())
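When debugging a mismatch like this, it helps to quantify how far apart the two outputs actually are instead of eyeballing the printed arrays, since TensorRT's fused kernels can legitimately introduce tiny numerical differences. A minimal sketch with NumPy (the helper name `compare_outputs` and the tolerances are my own choices, not part of the MXNet API; the synthetic arrays stand in for the two executors' outputs):

```python
import numpy as np

def compare_outputs(mx_out, trt_out, rtol=1e-3, atol=1e-4):
    """Report how closely two output arrays agree.

    Small differences are expected from fused/reordered kernels;
    large ones indicate a real bug in the TensorRT path.
    """
    mx_out = np.asarray(mx_out)
    trt_out = np.asarray(trt_out)
    abs_diff = np.abs(mx_out - trt_out)
    print('max abs diff :', abs_diff.max())
    print('mean abs diff:', abs_diff.mean())
    return np.allclose(mx_out, trt_out, rtol=rtol, atol=atol)

# Synthetic stand-ins for executor outputs:
a = np.array([[0.1, 0.7, 0.2]])
b = a + 1e-5  # tiny perturbation, as expected from fused kernels
print(compare_outputs(a, b))  # close -> True
```

In the script above one would call this with the two `y_gen[0].asnumpy()` results.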
I also tested this myself: in my case it worked for VGG16 but for none of the ResNet models, so I assume the problem is related to ResNet's network architecture. I will file an issue on GitHub so that some experts can follow up on it.
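One more sanity check worth running before filing the issue: even when the raw logits differ numerically (for instance if part of the graph runs in reduced precision under TensorRT), the predicted classes may still agree, which distinguishes a harmless precision difference from a real correctness bug. A small sketch, again with NumPy and a hypothetical helper name of my own:

```python
import numpy as np

def topk_agreement(out_a, out_b, k=5):
    """Check whether two logit vectors rank the same top-k classes."""
    top_a = np.argsort(out_a)[::-1][:k]
    top_b = np.argsort(out_b)[::-1][:k]
    return set(top_a) == set(top_b)

# Synthetic logits standing in for MXNet vs. TensorRT outputs:
logits = np.array([0.1, 2.5, 0.3, 1.9, 0.05])
noisy = logits + np.array([0.01, -0.02, 0.0, 0.01, 0.0])  # small perturbation
print(topk_agreement(logits, noisy, k=3))  # prints True
```

If top-k classes disagree for ResNet but not VGG16, that would strengthen the case that the architecture (e.g. the residual connections) is mishandled by the conversion.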