Object detection results differ between GPU and CPU

Hello everyone,

I am very new to MXNet and am currently trying to run object detection on a Jetson Nano.
It works fine on the CPU, but I run into a problem when I switch to the GPU.
When I simply set ctx=mx.gpu(0), I get the error below while plotting the bounding boxes,
and bbox[0] contains many NaN values that do not appear when using the CPU.
What can I do to fix it? Many thanks.

Best regards,
Ambrose

Error:
Traceback (most recent call last):
  File "/home/nvidia/test.py", line 20, in <module>
    ax = viz.plot_bbox(image, bbox[0], score[0], cid[0], thresh=0.5, class_names=net.classes)
  File "build/bdist.linux-aarch64/egg/gluoncv/utils/viz/bbox.py", line 88, in plot_bbox
ValueError: cannot convert float NaN to integer

print(bbox[0]) using GPU:
[[ 4.24356628e+02 1.58536743e+02 1.08492798e+03 3.55330750e+02]
[ 4.05682373e+02 1.89416489e+02 4.45970764e+02 2.22044357e+02]
[ nan nan nan nan]
[ -9.71876457e+33 nan 9.71876457e+33 nan]
[ nan nan nan nan]
[ -5.42979105e+24 nan 5.42979105e+24 nan]

[ -1.00000000e+00 -1.00000000e+00 -1.00000000e+00 -1.00000000e+00]]
<NDArray 100x4 @gpu(0)>

print(bbox[0]) using CPU:
[[ 4.05682404e+02 1.89416489e+02 4.45970734e+02 2.22044357e+02]
[ 4.21186981e+02 2.06683792e+02 4.43701996e+02 2.29541519e+02]
[ 7.14498779e+02 -1.87945862e+01 1.03049756e+03 2.91979553e+02]
[ -1.98881989e+01 -2.76248703e+01 3.14135437e+02 2.78459137e+02]
[ 6.90192932e+02 1.92309967e+02 1.03272510e+03 5.44604492e+02]
[ 3.93696350e+02 1.81550446e+02 4.52738220e+02 2.40718079e+02]
[ 6.36820984e+02 -5.83842468e+00 1.08037878e+03 2.51739731e+02]
[ 1.85760498e+00 2.11665085e+02 4.04285797e+02 4.09624023e+02]

[ -1.00000000e+00 -1.00000000e+00 -1.00000000e+00 -1.00000000e+00]]
<NDArray 100x4 @cpu(0)>

Below is the code:

import os
os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'  # disable cuDNN autotune
import time
from matplotlib import pyplot as plt
import numpy as np
import mxnet as mx
from mxnet import autograd, gluon
import gluoncv as gcv
from gluoncv.utils import download, viz

# Run on CPU; switch to the GPU context to reproduce the NaN issue
ctx = mx.cpu()
#ctx = mx.gpu(0)

# Custom classes used when fine-tuning the model
classes = ['a', 'b', 'c', 'd', 'e']
net = gcv.model_zoo.get_model('ssd_512_mobilenet1.0_custom', classes=classes, pretrained_base=False, ctx=ctx)
net.load_parameters('ssd_512_mobilenet1.0_test.params', ctx=ctx)

# Load and preprocess the test image, then run inference
x, image = gcv.data.transforms.presets.ssd.load_test('test.png', short=512)
x = x.as_in_context(ctx)
cid, score, bbox = net(x)
print(bbox[0])

# Plot detections above the confidence threshold
ax = viz.plot_bbox(image, bbox[0], score[0], cid[0], thresh=0.5, class_names=net.classes)
plt.show()
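
As a temporary workaround on the GPU, I can probably at least avoid the ValueError by dropping rows that contain NaN before plotting. This is only a minimal sketch (assuming viz.plot_bbox also accepts NumPy arrays) and it hides the NaN problem rather than fixing it:

# Workaround sketch: filter out detections with non-finite coordinates before plotting
bbox_np = bbox[0].asnumpy()
score_np = score[0].asnumpy()
cid_np = cid[0].asnumpy()
valid = ~np.isnan(bbox_np).any(axis=1)  # keep only rows with finite coordinates
ax = viz.plot_bbox(image, bbox_np[valid], score_np[valid], cid_np[valid], thresh=0.5, class_names=net.classes)
plt.show()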

Updated

I just tested the same code on a desktop PC, and it successfully detects objects as expected when using the GPU.

The desktop PC is running MXNet 1.5.0 and GluonCV 0.4.0, with a GTX 1080 Ti GPU.
The Jetson Nano is running MXNet 1.4.1 and GluonCV 0.5.0.

The "ssd_512_mobilenet1.0_test.params" file was produced on the desktop PC by fine-tuning a pretrained model, following this tutorial:
https://gluon-cv.mxnet.io/build/examples_detection/finetune_detection.html
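
For reference, this is the quick check I plan to run on both machines to confirm the library versions and that the GPU is visible (a small sketch; I believe mx.context.num_gpus() reports the visible GPU count in these MXNet versions):

# Print library versions and visible GPU count on each machine
import mxnet as mx
import gluoncv as gcv
print('MXNet version:', mx.__version__)
print('GluonCV version:', gcv.__version__)
print('Visible GPUs:', mx.context.num_gpus())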

Thanks a lot!

Ambrose