Training with frozen BatchNorm running_mean and running_var

I am using MXNet 1.0 with Gluon, and I need to freeze the running_mean and running_var of the BatchNorm layers during training. The BatchNorm layers belong to a ResNet model whose parameters I have loaded from disk using the model_zoo API.
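For context, the model is created roughly like this (the exact ResNet variant below is just a placeholder):

    import mxnet as mx
    from mxnet.gluon import model_zoo

    ctx = mx.cpu()  # placeholder context
    # model_zoo fetches/loads the pretrained parameters from disk
    net = model_zoo.vision.resnet18_v1(pretrained=True, ctx=ctx)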

How can I do that?

More specifically, I want my training block:

    with autograd.record(train_mode=True):
        out = net(input)
        loss = lossFunction(out, out_GT)
    loss.backward()
to compute the gradients and update the associated parameters, while keeping all the parameters of the BatchNorm layers frozen. I have set grad_req='null' for the gamma and beta parameters of the BatchNorm layers, but I cannot find a supported way to also freeze the running means/variances. I tried autograd.record(train_mode=False) (as one would do in TensorFlow or PyTorch), but then the gradients no longer seem to be computed correctly.
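This is roughly how I freeze gamma and beta at the moment (the name check assumes the usual model_zoo ResNet naming, where only BatchNorm layers have parameters ending in gamma/beta):

    # Freeze the affine parameters of every BatchNorm layer
    for name, param in net.collect_params().items():
        if name.endswith('gamma') or name.endswith('beta'):
            param.grad_req = 'null'

The closest thing I have found for the running statistics is the use_global_stats flag of BatchNorm, but that is a constructor argument, and the only way I can see to change it on a network that model_zoo has already built is to touch private attributes, along these lines (just a sketch, I am not sure this is supported):

    from mxnet.gluon import nn

    def freeze_bn_stats(block):
        # use_global_stats makes BatchNorm use (and not update) the
        # running_mean/running_var; _kwargs and _children are private
        # attributes, so this may break in other versions
        if isinstance(block, nn.BatchNorm):
            block._kwargs['use_global_stats'] = True
        for child in block._children:
            freeze_bn_stats(child)

    freeze_bn_stats(net)

Is there a cleaner way to do this?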

Also, is there a preferred way to modify some parameters of the BatchNorm layers (such as the momentum, which I would eventually like to decrease) AFTER the model has been created and its parameters loaded from disk with model_zoo?
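The only approach I can think of is, again, reaching into the private _kwargs dict of each BatchNorm block, for example (a sketch only; the momentum value and the reliance on _kwargs are assumptions on my side):

    from mxnet.gluon import nn

    def set_bn_momentum(block, momentum):
        # momentum appears to be stored in the private _kwargs dict that
        # BatchNorm passes on to F.BatchNorm, so this is a guess rather
        # than a supported API
        if isinstance(block, nn.BatchNorm):
            block._kwargs['momentum'] = momentum
        for child in block._children:
            set_bn_momentum(child, momentum)

    set_bn_momentum(net, 0.8)  # 0.8 is just an example value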

Thanks,
Alessandro

Have you solved this problem? I am having the same issue.