Autograd.record error about assigning to NDArray

I am getting an error while executing the following code:

with autograd.record():
    outY = network(batchInput)
    totalLoss = nd.mean((outY - batchOutput)**2) + network.cost()

The error message says “Check failed: AGInfo::IsNone(*(outputs[i])) Assigning to NDArrays that are already in a computational graph will cause undefined behavior when evaluating gradients. Please call backward first to clear the graph or do this out side of a record section.”.

Any insight into what could cause this error and how to fix it?

Thanks a lot!

I figured out what was causing those error messages. I am posting my findings here for the benefit of anyone else who may run into this issue.

In the “network” function, I had code like the following:

blah = input.slice_axis(axis=1, begin=inXDim-1, end=inXDim)
blah = blah.reshape((numInputs,))

It seems we can’t have re-assignments like this to the same variable in code that is going to be executed inside autograd.record().

Another variation of the same issue I had was within my network.cost() method where the code was:

costs = 0
for layer in self.layers:
    costs += layer.cost()

Here, the variable “costs” gets re-assigned each time through the loop.

I got around the issue by using a different variable in the first case above, and by unrolling the loop in the second case.
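Concretely, the two workarounds looked roughly like the following sketch. The variable names and the three-layer unrolling are just placeholders standing in for my actual code:

# Case 1: use a second variable instead of re-assigning "blah"
lastColumn = input.slice_axis(axis=1, begin=inXDim-1, end=inXDim)
lastColumnFlat = lastColumn.reshape((numInputs,))

# Case 2: unroll the accumulation so each addition produces a new NDArray
totalCost = self.layers[0].cost() + self.layers[1].cost() + self.layers[2].cost()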

Any confirmation/additional comments by more knowledgeable people would be great.

Hi, could you give a minimal snippet that runs into this issue? I cannot reproduce it.

Putting something like the code I quoted in my reply above inside an autograd.record() block should reproduce it.
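Something along these lines should trigger the same check. This is a self-contained sketch rather than my actual network, assuming the mxnet.autograd / mxnet.nd API (names and shapes are arbitrary):

from mxnet import autograd, nd

x = nd.ones((3,))
x.attach_grad()

with autograd.record():
    total = 0
    for _ in range(2):
        # first pass: total becomes a new NDArray that is recorded in the graph
        # second pass: += writes into that same NDArray in place, which should
        # raise the "Assigning to NDArrays that are already in a computational
        # graph" error
        total += x * 2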

I ran into the same error.
When I replaced “costs +=” with “costs = costs +”, it worked correctly.
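In other words, something like the following sketch (same placeholder setup as the snippet above) builds a fresh NDArray on every iteration instead of writing into one that is already in the graph, so the check is never tripped:

from mxnet import autograd, nd

x = nd.ones((3,))
x.attach_grad()

with autograd.record():
    total = 0
    for _ in range(2):
        # out-of-place add: creates a new NDArray each time through the loop
        total = total + x * 2

total.backward()
print(x.grad)  # expect [4. 4. 4.]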
