How good is MXNet's support for quantization? I am trying to find a better framework than TensorFlow Lite, whose quantization support I have found abysmal.
Are the quantization methods available here (https://github.com/apache/incubator-mxnet/tree/master/example/quantization) intended only for research and experimentation, or do they yield an actual speedup when deployed on mobile? Since no timing comparison is provided, I presume they are only for research and experimentation?
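For context, the kind of quantization I am asking about is post-training int8 quantization. A minimal sketch of the symmetric variant is below; the function names are my own illustration, not part of MXNet's API, and real frameworks fuse this into int8 kernels rather than round-tripping through floats:

```python
def quantize_int8(values):
    """Map floats to int8 codes using one symmetric scale factor."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid zero scale
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the int8 codes."""
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25, 1.0]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# The round trip is lossy but close; int8 storage is 4x smaller than
# float32, and the speedup question is whether the backend actually
# runs int8 arithmetic or just simulates it.
```

Whether this translates into faster inference on mobile depends on the backend having real int8 kernels, which is exactly what I am unsure about for MXNet.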