Autograd affects evaluation

I encountered a very strange problem: when I evaluate the net, autograd.record() has an impact on the evaluation accuracy.
If I write

```python
pred = net(data)
```

the accuracy is only 0.04
However, when I write

```python
with autograd.record():
    pred = net(data)
```

Then the accuracy becomes 0.36 (which I believe is the correct and reasonable value). This seems very strange, since as far as I know, with autograd.record() should not be used during evaluation. Can anyone help explain? Thanks.

The autograd scope changes the behavior of layers such as Dropout and BatchNorm, which are designed to behave differently between training and inference. If you post details about your network architecture, I may be able to provide more specific help.
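As a rough illustration (the tiny network below is just a placeholder, not your actual model), this sketch shows how the same forward pass behaves differently inside and outside autograd.record():

```python
from mxnet import autograd, nd
from mxnet.gluon import nn

# Tiny placeholder network with a mode-dependent layer (Dropout).
net = nn.Sequential()
net.add(nn.Dense(4), nn.Dropout(0.5))
net.initialize()

x = nd.ones((1, 4))

# Outside any autograd scope Gluon runs in predict mode:
# Dropout is a no-op and BatchNorm would use its running statistics.
out_predict = net(x)

# Inside autograd.record() Gluon switches to train mode:
# Dropout randomly zeroes activations and BatchNorm would use batch statistics.
with autograd.record():
    out_train = net(x)

print(out_predict)
print(out_train)  # generally differs because of the dropout mask
```

If you ever need to switch these modes explicitly, mxnet.autograd also provides train_mode() and predict_mode() context managers that change layer behavior without recording gradients.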

I am having a similar issue with the ResNet50_v2 model from the model zoo here: https://mxnet.incubator.apache.org/api/python/gluon/model_zoo.html

I am confused about why this happens. Is there a specific way to load a model for inference mode, or do I need to wrap every forward call in “with autograd.record()”?
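For reference, plain inference with a model-zoo network should not need autograd.record() at all. A minimal sketch (assuming pretrained ImageNet weights and a standard 224x224 input, with the usual normalization omitted) might look like this:

```python
from mxnet import nd
from mxnet.gluon.model_zoo import vision

# Load ResNet50_v2 with pretrained ImageNet weights from the model zoo.
net = vision.resnet50_v2(pretrained=True)

# Dummy batch standing in for preprocessed images (N, C, H, W);
# real inputs need the standard ImageNet mean/std normalization.
x = nd.random.uniform(shape=(1, 3, 224, 224))

# No autograd.record() here: outside a recording scope Gluon runs in
# predict mode, so BatchNorm uses its stored running statistics.
probs = net(x).softmax()
print(probs.argmax(axis=1))
```

autograd.record() is only needed when you want to compute gradients for training; wrapping inference in it switches mode-dependent layers back to their training behavior, which is usually not what you want at test time.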