
weird FGSM accuracy on MNIST clean data #111

Open

chhyun opened this issue Jul 12, 2023 · 3 comments

Comments

chhyun commented Jul 12, 2023

I ran an FGSM attack on the clean MNIST test set and got 49% adversarial accuracy,

which is far higher than the 6.4% reported by Madry et al. (https://arxiv.org/pdf/1706.06083.pdf).

Am I missing something?

If anyone else has run an FGSM attack against MNIST, what accuracy did you get?

@ZhangYuef

ZhangYuef commented Sep 12, 2023

Hi @chhyun, I am facing the same problem as you. In my case the adversarial accuracy came out too low for FGSM (epsilon = 0.1 and 0.3):

# attack type: GradientSignAttack
# attack kwargs: loss_fn=CrossEntropyLoss()
#                eps=0.1
#                clip_min=0.0
#                clip_max=1.0
#                targeted=False
# data: mnist_test, 10000 samples
# model: MNIST LeNet5 standard training
# accuracy: 98.89%
# adversarial accuracy: 79.96%
# attack success rate: 20.04%

# attack type: GradientSignAttack
# attack kwargs: loss_fn=CrossEntropyLoss()
#                eps=0.3
#                clip_min=0.0
#                clip_max=1.0
#                targeted=False
# data: mnist_test, 10000 samples
# model: MNIST LeNet5 standard training
# accuracy: 98.89%
# adversarial accuracy: 0.98%
# attack success rate: 99.02%

My guess is that it comes down to how epsilon is scaled. Should epsilon be normalized as epsilon/255?
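For reference, FGSM applies epsilon in the same units as the input pixels, so whether to divide by 255 depends entirely on how the images are scaled. A minimal NumPy sketch of the FGSM step (an illustration, not the library's actual GradientSignAttack implementation):

```python
import numpy as np

def fgsm(x, grad, eps, clip_min=0.0, clip_max=1.0):
    """One Fast Gradient Sign Method step: x_adv = clip(x + eps * sign(grad)).

    eps is in the SAME units as x: if x is scaled to [0, 1],
    eps=0.3 already means 30% of the full pixel range -- do NOT
    divide by 255 again. If x is raw uint8 in [0, 255], the
    equivalent perturbation would be eps = 0.3 * 255 instead.
    """
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, clip_min, clip_max)

# Toy example with inputs already in [0, 1]:
x = np.array([0.5, 0.9])
grad = np.array([1.0, 1.0])  # stand-in for the loss gradient w.r.t. x
print(fgsm(x, grad, eps=0.3))  # first pixel moves up by eps; second clips at 1.0
```

If the data loader rescales MNIST to [0, 1] (as torchvision's ToTensor does) but epsilon is then divided by 255 again, the effective perturbation is 255x too small, which could explain a suspiciously high adversarial accuracy.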

@chhyun chhyun closed this as completed Oct 1, 2023
@chhyun chhyun reopened this Oct 1, 2023
@chhyun
Author

chhyun commented Oct 1, 2023

> Hi @chhyun, I am facing the same problem as you. I got too low accuracy in my case for FGSM (epsilon=0.1, 0.3): […]

Hi @ZhangYuef. I used epsilon = 0.3 for the FGSM attack on my naturally trained MNIST model and got 49% adversarial accuracy.
It's strange that two experiments with the same epsilon value give such different results.
How many epochs did you train for, and which checkpoint did you use for the result?

@Djmcflush

Please post your full hyperparameters. The gap between your result and the expected value is far beyond the margin of error.
