Inference using float16

The float16 FAQ at https://mxnet.incubator.apache.org/versions/master/faq/float16.html states that "Ensuring that Batchnorm performs reduction in float32 is handled by default in both Gluon and Module APIs". Could someone point me to where this logic is implemented?

Hi @ChangMarkLiu, you can find the implementation here: https://github.com/apache/incubator-mxnet/blob/master/src/operator/batch_norm_v1-inl.h#L285
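For intuition on why BatchNorm keeps its reductions in float32 even when the rest of the network runs in float16, here is a small NumPy sketch (not MXNet code) comparing a mean reduction accumulated in float16 versus float32. Summing many float16 values directly loses precision because the partial sums outgrow float16's resolution, while accumulating in float32 stays close to the true mean:

```python
import numpy as np

# 100k float16 activations, as a BatchNorm layer might see per channel.
x = np.random.RandomState(0).uniform(0.0, 1.0, size=100_000).astype(np.float16)

# Reference mean computed in high precision.
ref = x.astype(np.float64).mean()

# Reduction accumulated entirely in float16: partial sums approach ~50000,
# where float16's spacing between representable values is 32, so large
# rounding error creeps into the sum.
mean_fp16 = float(x.mean(dtype=np.float16))

# Reduction accumulated in float32 (what the FAQ says BatchNorm does by
# default): float16 inputs, float32 accumulator.
mean_fp32 = float(x.mean(dtype=np.float32))

err_fp16 = abs(mean_fp16 - ref)
err_fp32 = abs(mean_fp32 - ref)
print(f"float16 accumulation error: {err_fp16:.2e}")
print(f"float32 accumulation error: {err_fp32:.2e}")
```

The same reasoning applies to the variance reduction; this is only an illustration of the principle, not the actual operator code linked above.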