I was trying to adapt the quantization example at https://github.com/apache/incubator-mxnet/tree/master/example/quantization for an object detection scenario, using the vgg16_reduced of the (now deprecated) mxnet-ssd github repository.
Is this possible? The quantization process runs without problems, but when I try to load the quantized model with the usual sequence (`mx.model.load_checkpoint` → `mx.mod.Module` → `mod.bind`) I receive this error: `Check failed: data shape[C] % 4 == 0U (3 vs 0) for 8bit cudnn conv, the number of channel must be multiple of 4`.
I compared the original inference example with mine, and the data shapes seem pretty similar: (3, 244, 244) vs (3, 300, 300).
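For what it's worth, the constraint in the error message is on the channel dimension, not on the spatial size, so the similarity of the H/W values shouldn't matter: both shapes have C = 3, which is not a multiple of 4. A minimal sketch of the check as I understand it (illustrative only, not MXNet's actual code):

```python
def int8_cudnn_conv_channels_ok(shape):
    """Mimic the cuDNN int8 conv constraint from the error message:
    the channel count C of an input (C, H, W) must be a multiple of 4.
    Hypothetical helper for illustration, not part of MXNet."""
    c = shape[0]
    return c % 4 == 0

print(int8_cudnn_conv_channels_ok((3, 244, 244)))  # False: 3 % 4 != 0
print(int8_cudnn_conv_channels_ok((3, 300, 300)))  # False: same channel count
print(int8_cudnn_conv_channels_ok((4, 300, 300)))  # True
```

So if the original example runs while mine fails, I suspect the difference must lie in which layers get quantized rather than in the input shape itself.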