Support for various quantisation types


Is there any page summarising support for processing with quantised values (fp16/int16/int8)?

Something similar to this one:

If not, could someone clarify: I know there is some support for fp16. Does it cover both inference and training? And does it apply to all MXNet operators, or only some of them?
What about quantisation to int8/int16?
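For context, by "quantisation to int8" I mean the usual affine (scale + zero-point) scheme. A minimal numpy sketch of that scheme is below; this is a generic illustration of the idea, not MXNet's actual quantisation API:

```python
import numpy as np

def quantize_int8(x):
    """Affine (asymmetric) quantisation of a float array to int8.

    Returns the int8 tensor plus the (scale, zero_point) needed to
    dequantise it. Generic scheme for illustration only.
    """
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) or 1.0  # avoid div-by-zero for constant input
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    # Map the int8 codes back to approximate float values.
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s, zp = quantize_int8(x)
x_hat = dequantize_int8(q, s, zp)
# Round-trip error is bounded by one quantisation step.
assert np.max(np.abs(x - x_hat)) <= s
```

My question is essentially which MXNet operators can consume and produce tensors quantised this way (or to fp16), and whether that works only for inference or for training as well.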


Please refer to the link below: