What is the recommended way to protect some weights from being updated by the trainer in MXNet?
As far as I know, to protect some weights in TensorFlow, I should prevent them from being passed to the optimizer, so I tried to do the same in MXNet with the following code:
import mxnet as mx

all_params = net.collect_params()
# Pop parameters off the front of the ordered dict as long as their names
# contain 'resnet', so the pretrained ResNet weights never reach the trainer.
while True:
    firstKey = next(iter(all_params._params))
    if 'resnet' not in firstKey:
        break
    all_params._params.popitem(last=False)
trainer = mx.gluon.Trainer(all_params, 'sgd')
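To check that the loop did what I intended, I also print the keys that survive before building the trainer (just a sanity check I added; the exact parameter names depend on how net was constructed):

# Sanity check: no parameter whose name contains 'resnet' should remain.
remaining = list(all_params._params)
assert not any('resnet' in key for key in remaining)
print(remaining)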
The variable "all_params._params" is an OrderedDict from Python's collections module, so I assume the order of the parameters matters and I should not change it. As shown above, that only lets me remove parameters from the beginning of the network, which is very inconvenient. Moreover, "_params" starts with an underscore, which means it is private and should not be touched by ordinary users.
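To show what I mean by the ordering constraint, here is a small standalone sketch (plain Python, no MXNet; the parameter names are made up) demonstrating that popitem(last=False) can only strip entries from the front of an OrderedDict:

from collections import OrderedDict

# Hypothetical parameter names, listed in network order.
params = OrderedDict([
    ('resnet_conv0_weight', 1),
    ('resnet_conv1_weight', 2),
    ('dense0_weight', 3),
    ('dense0_bias', 4),
])

# popitem(last=False) pops in FIFO order, i.e. from the front, so only
# a prefix of the network's parameters can be removed this way.
while 'resnet' in next(iter(params)):
    params.popitem(last=False)

print(list(params))  # ['dense0_weight', 'dense0_bias']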
I do not receive any errors, but I suspect this is not the recommended way to do it.