In dcgan.py, are the parameters of netD updated while training G?


#1

In dcgan.py, the code for updating G is as follows:
print(netD(fake))
with autograd.record():
    output = netD(fake)
    output = output.reshape((-1, 2))
    errG = loss(output, real_label)
    errG.backward()
trainerG.step(opt.batch_size)
print(netD(fake))
I printed the output of netD before and after updating G and found that the two outputs differ, which suggests netD was also updated. In my opinion, only the parameters of G should be updated when executing trainerG.step(). So what is the problem?


#2

When trainerG is created, which parameters are passed to its constructor?
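For context, a Gluon Trainer is bound to one parameter set at construction (in the DCGAN example, something like trainerG = gluon.Trainer(netG.collect_params(), ...)), so trainerG.step() can only touch netG's weights. Here is a minimal library-free sketch of that idea, with hypothetical names and a plain SGD update standing in for the real optimizer:

```python
def sgd_step(params, grads, lr=0.1):
    # Update only the parameter set this "trainer" was constructed with.
    for name in params:
        params[name] -= lr * grads[name]

netG_params = {"w": 1.0}   # hypothetical generator weight
netD_params = {"w": 5.0}   # hypothetical discriminator weight
grads = {"w": 2.0}         # hypothetical gradient for netG's weight

sgd_step(netG_params, grads)   # analogue of trainerG.step()

print(netG_params["w"])  # 0.8 -> updated
print(netD_params["w"])  # 5.0 -> untouched
```

This is why a change in netD's output after trainerG.step() cannot come from a weight update to netD; the cause must lie elsewhere.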


#3

trainerG is created for netG only. But I found the cause of the problem: it is BN.
Before updating G, netD is in predict mode. After updating G, netD is in train mode because the forward pass runs inside the autograd recording scope. A netD with BN layers produces different outputs in the two modes, even though its parameters are unchanged.
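To illustrate the point above: BatchNorm normalizes with the current batch statistics in train mode but with accumulated running statistics in predict mode, so the same input can yield different outputs. A minimal NumPy sketch of that behavior (not MXNet's actual BN implementation; the running statistics here are hypothetical values chosen to differ from the batch statistics):

```python
import numpy as np

def batchnorm(x, running_mean, running_var, training, eps=1e-5):
    # Train mode: normalize with the batch's own mean/variance.
    # Predict mode: normalize with the accumulated running statistics.
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        mean, var = running_mean, running_var
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0], [3.0]])     # one feature, batch of two
running_mean = np.array([0.0])   # hypothetical accumulated stats
running_var = np.array([1.0])

train_out = batchnorm(x, running_mean, running_var, training=True)
predict_out = batchnorm(x, running_mean, running_var, training=False)
print(np.allclose(train_out, predict_out))  # False: the modes disagree
```

So the two print(netD(fake)) calls in post #1 differ simply because the second one runs in a different BN mode, not because netD's weights changed.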