I’m training an image classification model using the MXNet symbolic API.
A brief illustration of my model is shown below:
Besides the prediction and the label, the loss function also takes a set of
weights computed from the intermediate features.
However, I don’t want the gradient to propagate back through the
weights branch. In other words, I need the
weights to act as a static NDArray (just like the label),
rather than as a Variable.
So how can I disable gradient back-propagation through that variable?
Currently I use the following:

```python
feature = conv(data)
weights = linear1(feature)
weights = mx.symbol.BlockGrad(weights)
prediction = linear2(feature)
loss = softmax_loss(prediction, target)
```
Is it correct to do so?
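To make the intended behavior concrete, here is a toy, framework-agnostic sketch in plain Python (the function name `grad_wrt_feature` and the toy graph are mine, not from my actual model). It shows the difference I expect `BlockGrad` to make: the weights branch should contribute its *value* to the loss, but no *gradient* should flow back through it.

```python
# Toy graph: weights = a * feature, loss = weights * feature.
# d(loss)/d(feature) with full backprop:     2 * a * feature   (product rule)
# d(loss)/d(feature) with weights blocked:   a * feature       (weights held constant)

def grad_wrt_feature(feature, a, block_weights):
    weights = a * feature
    if block_weights:
        # Gradient does not flow through the weights branch:
        # weights is treated as a constant during backprop.
        return weights
    # Gradient flows through both uses of feature.
    return 2 * a * feature

full    = grad_wrt_feature(3.0, 0.5, block_weights=False)  # 3.0
blocked = grad_wrt_feature(3.0, 0.5, block_weights=True)   # 1.5
```

With the weights branch blocked, the gradient halves in this toy example, because only the direct path from `feature` to the loss contributes.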