
Gradient calculation issue about PGD attacks #85

Open
rshaojimmy opened this issue Nov 30, 2020 · 1 comment
Comments

rshaojimmy commented Nov 30, 2020

First of all, I would like to thank you for this incredible work!

I would expect the gradient of the loss to be computed w.r.t. the input image rather than the perturbation (delta in the code below) at each iteration of the PGD attack. May I ask why the gradient is instead taken w.r.t. the perturbation (i.e. delta.grad.data.sign())?

Thanks.

import numpy as np
import torch

from advertorch.utils import batch_clamp, batch_multiply, clamp

if delta_init is not None:
    delta = delta_init
else:
    delta = torch.zeros_like(xvar)

delta.requires_grad_()
for ii in range(nb_iter):
    outputs = predict(xvar + delta)
    loss = loss_fn(outputs, yvar)
    if minimize:
        loss = -loss

    loss.backward()
    if ord == np.inf:
        # Step along the sign of the gradient of the loss w.r.t. delta.
        grad_sign = delta.grad.data.sign()
        delta.data = delta.data + batch_multiply(eps_iter, grad_sign)
        # Project back into the eps-ball around x, then keep x + delta
        # inside the valid input range [clip_min, clip_max].
        delta.data = batch_clamp(eps, delta.data)
        delta.data = clamp(xvar.data + delta.data,
                           clip_min, clip_max) - xvar.data

    # Reset the gradient before the next iteration (it accumulates otherwise).
    delta.grad.data.zero_()
LLeavesG commented

I think the two approaches are equivalent: x is fixed inside the loop, so x + delta is a function of delta with identity Jacobian, and the gradient of the loss w.r.t. delta is exactly the gradient w.r.t. the perturbed input x + delta.
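
A quick way to see this is to compute both gradients at the same fixed point and compare them. A minimal sketch; the linear model, loss, and tensor shapes here are arbitrary placeholders, not advertorch code:

import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 3)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(4, 10)                 # fixed clean input
y = torch.randint(0, 3, (4,))

# Gradient w.r.t. the perturbation delta, as in the excerpt above.
delta = torch.zeros_like(x).requires_grad_()
loss_fn(model(x + delta), y).backward()
grad_wrt_delta = delta.grad.clone()

# Gradient w.r.t. the (perturbed) input itself, at the same point.
x_adv = x.clone().requires_grad_()
loss_fn(model(x_adv), y).backward()
grad_wrt_input = x_adv.grad.clone()

# Identical, because d(x + delta)/d(delta) is the identity.
print(torch.allclose(grad_wrt_delta, grad_wrt_input))  # True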
