In dcgan.py, are the parameters of netD updated while training G?

In dcgan.py, the code for updating G is as follows:
print(netD(fake))
with autograd.record():
    output = netD(fake)
    output = output.reshape((-1, 2))
    errG = loss(output, real_label)
    errG.backward()
trainerG.step(opt.batch_size)
print(netD(fake))
I printed the output of netD before and after updating G and found that the two outputs differ, which suggests netD was also updated. In my opinion, only the parameters of G should be updated when executing trainerG.step(). So what is the problem?
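One way to check whether netD's weights are actually touched is to snapshot them right before the step and compare right after. This is a minimal sketch reusing netD, trainerG, and opt.batch_size from the code above; the snapshot dict is my own addition:

import mxnet as mx

# Snapshot netD's parameters before the generator update.
before = {name: p.data().copy() for name, p in netD.collect_params().items()}

trainerG.step(opt.batch_size)

# If trainerG holds only netG's parameters, every difference is zero.
for name, p in netD.collect_params().items():
    diff = mx.nd.abs(p.data() - before[name]).sum().asscalar()
    print(name, diff)  # expected: 0.0 for all of netD's parameters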

When trainerG is created, which parameters are passed to its constructor?

trainerG is created with netG's parameters only. But I found the problem: it was caused by BN.
The first print(netD(fake)) runs netD in prediction mode. Inside autograd.record(), netD switches to training mode, where its BN layers normalize with the current batch's statistics and also update their running averages. So a netD with BN layers produces different outputs in the two modes even though its weights were never updated.
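Here is a standalone sketch of that effect (a dummy network, not the dcgan.py netD): the same input gives different outputs depending on whether the forward pass runs inside autograd.record(), because BatchNorm switches between batch statistics and running averages.

from mxnet import autograd, nd
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(8), nn.BatchNorm(), nn.Dense(1))
net.initialize()

x = nd.random.normal(shape=(4, 16))

# Prediction mode: BatchNorm uses its running mean/variance.
y_predict = net(x)

# Inside autograd.record(), layers run in training mode by default,
# so BatchNorm normalizes with this batch's statistics instead.
with autograd.record():
    y_train = net(x)

print(nd.abs(y_predict - y_train).sum())  # non-zero: the outputs differ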
