Autograd fails when slicing

Why does the second code block below fail? It seems that assigning to a slice of an NDArray inside mx.autograd.record() causes backward() to fail.

>>> import mxnet as mx
>>> mx.__version__
'1.0.0'
>>> # succeeds
... x = mx.nd.array([1,2,3,4])
>>> x.attach_grad()
>>> with mx.autograd.record():
...     y = x * x + 1
... 
>>> y.backward()
>>> 
>>> 
>>> # fails - why?
... a = mx.nd.array([1,2,3,4])
>>> a.attach_grad()
>>> b = mx.nd.zeros_like(a)
>>> with mx.autograd.record():
...     b[0:4] = a * a + 1
... 
>>> b.backward()
[14:52:03] /home/travis/build/dmlc/mxnet-distro/mxnet-build/dmlc-core/include/dmlc/logging.h:308: [14:52:03] src/imperative/imperative.cc:375: Check failed: !AGInfo::IsNone(*i) Cannot differentiate node because it is not in a computational graph. You need to set is_recording to true or use autograd.record() to save computational graphs for backward. If you want to differentiate the same graph twice, you need to pass retain_graph=True to backward.

Stack trace returned 10 entries:
[bt] (0) /home/local/ANT/gautierp/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x17c54c) [0x7fb29b14854c]
[bt] (1) /home/local/ANT/gautierp/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x23ba221) [0x7fb29d386221]
[bt] (2) /home/local/ANT/gautierp/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(MXAutogradBackwardEx+0x778) [0x7fb29d2c9868]
[bt] (3) /home/local/ANT/gautierp/anaconda3/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(ffi_call_unix64+0x4c) [0x7fb2ae228550]
[bt] (4) /home/local/ANT/gautierp/anaconda3/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(ffi_call+0x1f5) [0x7fb2ae227cf5]
[bt] (5) /home/local/ANT/gautierp/anaconda3/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(_ctypes_callproc+0x3dc) [0x7fb2ae21f83c]
[bt] (6) /home/local/ANT/gautierp/anaconda3/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(+0x9da3) [0x7fb2ae217da3]
[bt] (7) /home/local/ANT/gautierp/anaconda3/bin/../lib/libpython3.6m.so.1.0(_PyObject_FastCallDict+0x9e) [0x7fb2b0516ade]
[bt] (8) /home/local/ANT/gautierp/anaconda3/bin/../lib/libpython3.6m.so.1.0(+0x1482bb) [0x7fb2b05f32bb]
[bt] (9) /home/local/ANT/gautierp/anaconda3/bin/../lib/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x26fd) [0x7fb2b05f615d]

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/local/ANT/gautierp/anaconda3/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 2002, in backward
    ctypes.c_void_p(0)))
  File "/home/local/ANT/gautierp/anaconda3/lib/python3.6/site-packages/mxnet/base.py", line 146, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [14:52:03] src/imperative/imperative.cc:375: Check failed: !AGInfo::IsNone(*i) Cannot differentiate node because it is not in a computational graph. You need to set is_recording to true or use autograd.record() to save computational graphs for backward. If you want to differentiate the same graph twice, you need to pass retain_graph=True to backward.

You are correct that assigning to a slice does not get recorded in the computational graph: b[0:4] = ... is an in-place write into b, and autograd does not track in-place writes, so b never becomes a node in the graph and backward() has nothing to differentiate. What you seem to be trying to do is an implicit concatenate. The proper way to combine different outputs as slices of a single array is to use mx.nd.concat() explicitly.
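
For example, here is a minimal sketch of that rework applied to the failing snippet (the split into two halves is purely illustrative; for a single expression you could simply write b = a * a + 1 inside the record scope):

import mxnet as mx

a = mx.nd.array([1, 2, 3, 4])
a.attach_grad()

with mx.autograd.record():
    # mx.nd.split is a recorded, differentiable op, unlike slice assignment
    lo, hi = mx.nd.split(a, num_outputs=2, axis=0)
    # join the separately computed pieces with an explicit concat so that
    # the combined array stays in the computational graph
    b = mx.nd.concat(lo * lo + 1, hi * hi + 1, dim=0)

b.backward()
print(a.grad)  # d(x*x + 1)/dx = 2x -> [ 2.  4.  6.  8.]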

@safrooze I'm hitting the same error, and I found that it occurs whenever I use mx.nd.concat inside with mx.autograd.record().

Do you know how to solve this problem?

@Can Not sure how you're using concat, but the following code works:

import mxnet as mx

a = mx.nd.array([1, 2, 3, 4])
a.attach_grad()

b = mx.nd.array([5, 6, 7, 8])
b.attach_grad()

with mx.autograd.record():
    c = a * a + 1
    d = b * b + 1
    e = mx.nd.concat(c, d, dim=0)
e.backward()

print(a.grad)
print(b.grad)

Outputs:

[ 2.  4.  6.  8.]
<NDArray 4 @cpu(0)>

[ 10.  12.  14.  16.]
<NDArray 4 @cpu(0)>
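
These gradients are just d(x*x + 1)/dx = 2x evaluated at each input, so the concat passed the incoming gradient straight through to both branches as expected.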