How to implement the addition of a gradient during back-propagation: how to add an extra term (the gradient with respect to a middle layer's output) to the network

I am in Beijing, China; there is more information about my question here: https://discuss.gluon.ai/t/topic/7499

I use MXNet with Python, but not Gluon, to implement the paper!
Yes, no Gluon, just the Module API.
I use the ‘group’ function: in the backward part of my custom operator feature_loss, I pass the extra gradient back into the network. It works, but it does not reproduce the effect of the MatConvNet implementation…
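Roughly, my custom-operator approach looks like the sketch below (simplified and untested here; the real extra gradient in my code comes from the paper's feature loss, so the mx.nd.ones_like line is only a placeholder):

import mxnet as mx

class FeatureLoss(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        # Identity forward: pass the middle-layer output straight through.
        self.assign(out_data[0], req[0], in_data[0])

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        # The gradient sent back to the middle layer is the normal upstream
        # gradient plus the extra term (placeholder value here).
        extra_grad = mx.nd.ones_like(in_data[0])
        self.assign(in_grad[0], req[0], out_grad[0] + extra_grad)

@mx.operator.register("feature_loss")
class FeatureLossProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(FeatureLossProp, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        return in_shape, [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return FeatureLoss()

The op is spliced into the symbol with mx.sym.Custom(data=mid_layer, op_type='feature_loss') (mid_layer is just a placeholder name for the middle layer whose output gradient should receive the extra term), and the outputs are combined with mx.sym.Group, which is the ‘group’ part mentioned above.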
I want to know whether there is another way to implement this addition of a gradient during back-propagation.
Thanks for your reply. I think you must know MXNet much better than I do; I only just left school and have been learning deep-learning tools for a month.

Let's keep in touch and keep moving forward!

Hi,

I don’t know how you can do this with the Symbol/Module API, but with Gluon you can add the extra term directly to the parameters’ gradients.

extra_grad = ...  # this is your extra gradient; it must match the shape of each parameter's gradient

for param in network.collect_params().values():
    param.grad()[:] = param.grad() + extra_grad  # param.grad() returns the gradient NDArray; write into it in place

Now, I think you should add this before the trainer.step operation, something like:

for data in datagen:
    input, label = data
    input = input.as_in_context(ctx)
    label = label.as_in_context(ctx)
    with autograd.record():
        preds = network(input)
        loss = some_loss(preds, label)
    loss.backward()  # calculate gradients of the parameters

    # Add the extra gradient term that you've somehow calculated
    for param in network.collect_params().values():
        param.grad()[:] = param.grad() + extra_grad

    trainer.step(input.shape[0])  # batch size

MXNet experts can help more; I hope this is somewhat helpful. I haven’t tested the code, I’m writing from memory.
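One more thought I also haven’t tested, so treat it as a sketch rather than a verified recipe: if what you actually want is an extra gradient with respect to a middle layer's output (rather than the parameters), you can feed that term in as a head gradient with autograd.backward. Here I assume the network also returns the intermediate output, called features below:

from mxnet import autograd
import mxnet as mx

with autograd.record():
    features, preds = network(input)   # assumes the network returns the middle-layer output too
    loss = some_loss(preds, label)

extra_grad = mx.nd.ones_like(features)  # placeholder for your real extra term

# Back-propagate the loss as usual and, in the same pass, feed extra_grad in
# as the gradient with respect to the intermediate output `features`.
autograd.backward([loss, features],
                  head_grads=[mx.nd.ones_like(loss), extra_grad])

trainer.step(input.shape[0])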

This is my new progress! Can you give me some advice … MAY THE FORCE BE WITH YOU!