Support for various quantisation formats

Hello,

Is there any page summarising support for processing with quantised values (fp16/int16/int8)?

Something similar to this one:
https://intel.github.io/mkl-dnn/

If not, I have a few specific questions:
I understand there is some support for fp16. Is it available for both inference and training, and for all MXNet operators or only some of them?
What about quantisation to int8/int16?
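For context, and to make clear what I mean by "fp16 support", here is a minimal sketch of how I understand the fp16 path in the Gluon API, assuming Block.cast works as documented (the network and example shapes are just illustrative); the int8/int16 flow is presumably a separate quantisation mechanism, which is part of what I am asking about:

    import mxnet as mx
    from mxnet.gluon import nn

    # Illustrative network; cast all parameters to float16.
    net = nn.HybridSequential()
    net.add(nn.Dense(64, activation='relu'))
    net.add(nn.Dense(10))
    net.initialize()
    net.cast('float16')

    # Inputs must also be fp16 to match the casted parameters.
    x = mx.nd.random.uniform(shape=(2, 32)).astype('float16')
    y = net(x)
    print(y.dtype)  # numpy.float16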

Bartosz

Please refer to the link below: