
Fast gradient method doesn't work for double precision: suggestion for a fix #1224

Open
williampiat3 opened this issue Dec 17, 2021 · 1 comment

Comments

@williampiat3


Describe the bug
I used the PGD method from this folder, but my models use double precision, so I hit a dtype error while running the attack:
cleverhans/cleverhans/torch/attacks/projected_gradient_descent.py

I found a way to fix the problem. On line 74 of cleverhans/cleverhans/torch/attacks/fast_gradient_method.py, instead of:

x = x.clone().detach().to(torch.float).requires_grad_(True)

I put:

x = x.clone().detach().to(x.dtype).requires_grad_(True)

My fix then covers both double and single precision.
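
For illustration, here is a minimal sketch reproducing the mismatch and the fix (the toy linear model is my own, not CleverHans code, and the exact error message wording varies across PyTorch versions):

import torch
import torch.nn as nn

# Hypothetical double-precision model standing in for the victim model.
model = nn.Linear(4, 2).double()
x = torch.randn(1, 4, dtype=torch.float64)

# Current line 74: the hard-coded cast downcasts the float64 input to float32,
# so the forward pass fails with a dtype-mismatch RuntimeError.
x_cast = x.clone().detach().to(torch.float).requires_grad_(True)
try:
    model(x_cast)
except RuntimeError as err:
    print(err)  # e.g. "expected scalar type Double but found Float"

# Proposed fix: keep whatever dtype the caller passed in.
x_fixed = x.clone().detach().to(x.dtype).requires_grad_(True)
out = model(x_fixed)      # runs in float64
out.sum().backward()      # gradients flow in the input's own precision
print(x_fixed.grad.dtype) # torch.float64

Since x already has dtype x.dtype, the .to(x.dtype) call is effectively a no-op and could equally be dropped; keeping it makes the intent of the line explicit.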

@kylematoba

This also breaks for torch.float16.
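
Indeed, the hard-coded cast silently upcasts a half-precision input to float32 before it ever reaches a float16 model, while the dtype-preserving variant keeps it intact. A minimal check (same caveats as the sketch above):

x = torch.randn(1, 4, dtype=torch.float16)
print(x.clone().detach().to(torch.float).dtype)  # torch.float32 -- dtype silently changed
print(x.clone().detach().to(x.dtype).dtype)      # torch.float16 -- preserved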
