Autograd affects evaluation


#1

I encountered a very strange problem. When I evaluate the net, autograd.record() has an impact on the evaluation accuracy.
If I write

pred = net(data)

the accuracy is only 0.04
However, when I write

with autograd.record():
    pred = net(data)

Then the accuracy becomes 0.36, which I believe is the correct and reasonable value. This seems very strange, since as far as I know autograd.record() should not be used during evaluation. Can anyone help explain? Thanks.


#2

The autograd scope changes the behavior of layers such as Dropout and BatchNorm, which are designed to behave differently between training and inference. If you share details about your network architecture, I may be able to provide more specific help.
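
For example, here is a minimal sketch (a standalone Dropout layer, not taken from your network) showing how the same call behaves differently inside and outside the autograd scope:

import mxnet as mx
from mxnet import autograd, nd
from mxnet.gluon import nn

# A single Dropout layer is enough to see the mode difference.
layer = nn.Dropout(0.5)

x = nd.ones((1, 6))

# Outside autograd.record(): prediction mode, Dropout acts as an identity op.
print(layer(x))        # all ones

# Inside autograd.record(): training mode, roughly half the values are
# zeroed and the survivors are scaled by 1 / (1 - 0.5) = 2.
with autograd.record():
    print(layer(x))    # mix of 0s and 2s

BatchNorm behaves analogously: inside the record() scope it normalizes with the statistics of the current batch, while outside it uses the running averages accumulated during training.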