From the Gluon Model Zoo:
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size, and H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]
However, when I look at the training scripts for resnet50_v2 (i.e. https://github.com/apache/incubator-mxnet/blob/master/example/gluon/image_classification.py and https://github.com/apache/incubator-mxnet/blob/master/example/gluon/data.py), I don't see any line in get_imagenet_iterator that scales the image to the [0, 1] range (i.e. image = image / 255). Please correct me if I'm wrong.
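For reference, here is a minimal NumPy sketch of the preprocessing the Model Zoo docs describe (scale to [0, 1], then channel-wise normalize). This is only an illustration of what I'd expect the iterator to do, not code taken from the linked scripts:

```python
import numpy as np

# Channel-wise stats from the Model Zoo docs, shaped for (3, H, W) broadcasting.
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(3, 1, 1)

def preprocess(image_uint8):
    """image_uint8: (3, H, W) uint8 RGB array with values in [0, 255]."""
    image = image_uint8.astype(np.float32) / 255.0  # scale to [0, 1] -- the step in question
    return (image - mean) / std                     # normalize per channel

# Example: an all-white 224x224 image.
img = np.full((3, 224, 224), 255, dtype=np.uint8)
out = preprocess(img)
print(np.allclose(out[0], (1.0 - 0.485) / 0.229))  # True
```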