
the target of the bim attack in the code? #1

Open
SYLL-star opened this issue Jul 14, 2021 · 3 comments

Comments

@SYLL-star

SYLL-star commented Jul 14, 2021

  1. Why, in the code, is the target y of the undercover attack in the MLP stage set to 0 and 1 instead of the predicted label? The paper mentions that the target of the undercover attack is the model's prediction.
  2. Why doesn't undercoverNet need to be switched to test mode with undercoverNet.eval()? (See the sketch below.)
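
For question 2, here is a minimal, self-contained sketch of what eval() changes, using a placeholder network and a dummy batch rather than the repository's actual undercoverNet: eval() makes Dropout and BatchNorm deterministic, but it does not disable autograd, so gradients with respect to the input are still available for a gradient-based attack.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder stand-in for undercoverNet; layer sizes are illustrative only.
undercover_net = nn.Sequential(
    nn.Linear(784, 256), nn.BatchNorm1d(256), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(256, 10),
)

undercover_net.eval()                        # deterministic Dropout / BatchNorm at test time
x = torch.randn(8, 784, requires_grad=True)  # dummy batch

logits = undercover_net(x)                   # forward pass in eval mode
y_pred = logits.argmax(dim=1)                # model prediction, the target the paper describes
loss = F.cross_entropy(logits, y_pred)
loss.backward()                              # x.grad is still available for the attack
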
@persistz

persistz commented Aug 7, 2021

I also found the problem mentioned by @SYLL-star in point 1, and I think it should be marked as a bug.
The related code is shown below:

normal_x = torch.cat(normal_samples, dim=0)                 # benign inputs
adversarial_x = torch.cat(adversarial_samples, dim=0)       # adversarial inputs
normal_y = torch.zeros(normal_x.shape[0]).long()            # label 0 for every benign sample
adversarial_y = torch.ones(adversarial_x.shape[0]).long()   # label 1 for every adversarial sample

and

x, y = x.to(device), y.to(device)
undercover_adv = undercover_gradient_attacker.fgsm(x, x, False, 1/255)

This is a critical bug, as it gives the defender a priori knowledge that a benign sample will be attacked toward label 0, while an adversarial example will be attacked toward label 1.

Although this bug is important, it is easy to fix. I can provide a pull request if you need one, but there is no guarantee that the result will be as good as the one reported in the original paper; in my own implementation I found a noticeable gap between the two results.
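
A minimal sketch of the fix direction described above, assuming a plain untargeted FGSM step x_adv = x + eps * sign(grad); the function name, signature, and clamping below are illustrative and not the repository's actual attacker API. The point is that the attack target is the undercover model's own prediction, never the 0/1 detector label:

import torch
import torch.nn.functional as F

# Sketch only (not the repository's code): untargeted FGSM whose loss target
# is the model's own prediction on x.
def undercover_fgsm(model, x, eps=1 / 255):
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y_pred = logits.argmax(dim=1).detach()        # model prediction as the attack target
    loss = F.cross_entropy(logits, y_pred)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Applied identically to benign and adversarial batches, so normal_y / adversarial_y
# are used only to train the MLP detector, never to steer the undercover attack.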

@SYLL-star
Author

I also modified this code according to the description in the paper, and the final result is also very different from the table in the paper. If possible, could I take a look at your pull request? Thank you very much!

@persistz

Sure, feel free to contact me by email. I'd be happy to provide the relevant code.
