Who can help me solve this error? (batch_loss.backward() error when using slice)


MXNetError: [20:00:12] D:\Program Files (x86)\Jenkins\workspace\mxnet\mxnet\src\imperative\imperative.cc:372: Check failed: !AGInfo::IsNone(*i) Cannot differentiate node because it is not in a computational graph. You need to set is_recording to true or use autograd.record() to save computational graphs for backward. If you want to differentiate the same graph twice, you need to pass retain_graph=True to backward.

I used the slice operator in some calculations.

I just started using mxnet, and I'm not sure what this error means. Please help me.


I'm not sure what embedding_weights or encoder_state are in your code. But it appears that none of the NDArray variables used in calculating batch_loss have gradients attached to them (via an NDArray.attach_grad() call). If none of the inputs to an operator link back to an NDArray with gradient storage, no computational graph is recorded for the resulting NDArray, because there are no parameters in the graph that appear to need gradient calculation through backward. Slicing itself is not the problem; the slice just inherits this missing graph from its input.

You can either go through this Gluon Tutorial to get a better understanding, or, if you still can't figure out your problem, provide a small code snippet that reproduces the error so I can help more effectively.