Adversarial Box - PyTorch Adversarial Attack and Training

Luyu Wang and Gavin Ding, Borealis AI

Motivation

CleverHans comes in handy for TensorFlow; PyTorch, however, has no comparable library at the moment. Foolbox supports multiple deep learning frameworks, but it lacks several major features (e.g., black-box attacks, the Carlini-Wagner attack, and adversarial training). We feel there is a need for an easy-to-use, versatile library to help our fellow researchers and engineers.

We have a newer, much more complete library called AdverTorch, where you can find most of the popular attacks. This repo is no longer maintained.

Usage

```python
from adversarialbox.attacks import FGSMAttack

# model: a trained PyTorch model; X_i, y_i: a batch of inputs and labels
adversary = FGSMAttack(model, epsilon=0.1)
X_adv = adversary.perturb(X_i, y_i)
```

Examples

  1. MNIST with FGSM (code)
  2. Adversarial Training on MNIST (code)
  3. MNIST using a black-box attack (code)

List of supported attacks

  1. FGSM
  2. PGD
  3. Black-box
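For reference, the core of a one-step FGSM attack like the one exposed by `FGSMAttack` can be sketched in a few lines of plain PyTorch. This is an illustrative re-implementation, not the library's actual code; the `fgsm_perturb` name, the toy linear model, and the random data below are placeholders for this sketch:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    """One-step FGSM: move each input by epsilon in the direction that
    increases the loss, i.e. x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()

# Toy example: a linear classifier on random data (placeholder, not MNIST).
torch.manual_seed(0)
model = torch.nn.Linear(4, 3)
x = torch.randn(8, 4)
y = torch.randint(0, 3, (8,))
x_adv = fgsm_perturb(model, x, y, epsilon=0.1)
```

Because the sign of the gradient is scaled by `epsilon`, every adversarial input stays within an L-infinity ball of radius `epsilon` around the original; PGD can be viewed as this step applied iteratively with projection back into that ball.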