Following the recent changes in the amp package, I tried to convert a trained MXNet model from float32 to float16 using the convert_model script from https://github.com/apache/incubator-mxnet/pull/15118
However, I got an error saying the data type is unsupported:
Error in operator retinanet0_multiboxprior0: [16:53:35] /home/local/ANT/dfferstl/software/incubator-mxnet/include/mxnet/operator.h:228: Check failed: in_type->at(i) == mshadow::default_type_flag || in_type->at(i) == -1: Unsupported data type 2
Alternatively, if I try to use the implicit cast approach from the "[solved] Network in float16" thread, I get a similar error at the batchnorm gamma parameter:
MXNetError: Error in operator retinanet0_batchnorm0_fwd: [16:52:35] /home/local/ANT/dfferstl/software/incubator-mxnet/src/operator/nn/batch_norm.cc:370: Check failed: (*in_type)[i] == dtype_param (2 vs. 0) : This layer requires uniform type. Expected 'float32' v.s. given 'float16' at 'gamma'
Is this simply not yet supported, or is there a way to solve it?