Hi,

Thanks for contributing the source code.

I have some questions about the loss calculation in online-augment/train_aug_stn.py (Line 154 in 9d46ab4).

According to the paper, for training the augmentation network, the adversarial loss has a minus sign in front of it. Why is Line 160 commented out in the code? And why is Line 163 not

loss_aug_net = -1*loss_aug + loss_div + loss_diversity

Hi RizhaoCai. Recently, I tried to run the source code and reproduce the experiment, and I have the same question. Do you have any insight into why it uses input_aug.register_hook(lambda grad: grad * (-model.args.adv_weight_stn)) rather than loss_aug_net = -1*loss_aug + loss_div + loss_diversity?

The implementation is meant to make training efficient. By using the hook to reverse the gradient before it flows from the target network back into the augmentation network, both networks can be updated in a single forward and backward pass.
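A minimal sketch of the gradient-reversal hook being discussed, assuming PyTorch. The module names, shapes, and adv_weight value here are illustrative stand-ins, not the repo's actual models or model.args.adv_weight_stn:

```python
import torch

# Stand-ins for the augmentation network and the target network.
aug_net = torch.nn.Linear(4, 4)
target_net = torch.nn.Linear(4, 2)

x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

input_aug = aug_net(x)

# Reverse (and scale) the gradient as it flows back from the target
# network into the augmentation network. With this hook, one backward
# pass gives the target net the gradient that minimizes the loss while
# the aug net receives the negated gradient, i.e. it maximizes the loss.
adv_weight = 1.0  # illustrative stand-in for model.args.adv_weight_stn
input_aug.register_hook(lambda grad: grad * (-adv_weight))

loss = torch.nn.functional.cross_entropy(target_net(input_aug), y)
loss.backward()
```

This is why the code does not need loss_aug_net = -1*loss_aug: negating the loss for the augmentation network would require a second forward/backward pass, whereas the hook flips the sign of the gradient at the boundary between the two networks within a single pass.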