jeromerony/augmented_lagrangian_adversarial_attacks

This repository contains the experiments for the paper "Augmented Lagrangian Adversarial Attacks" (https://arxiv.org/abs/2011.11857). It does not contain the ALMA attack proposed in the paper, which is implemented in adversarial-library.
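As a pointer, below is a minimal sketch of running ALMA through adversarial-library; the import path and the alma call are assumptions based on that library and should be checked against its documentation:

    import torch
    from torch import nn
    from adv_lib.attacks import alma  # assumed import path; check adversarial-library

    # Toy classifier standing in for the models attacked in the paper.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    inputs = torch.rand(8, 1, 28, 28)  # batch of MNIST-sized images in [0, 1]
    labels = torch.randint(0, 10, (8,))

    # Assumed signature: returns minimally perturbed adversarial examples.
    adv_inputs = alma(model, inputs, labels)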

Requirements

Additional required data

The model state dicts for MNIST, CIFAR10 and ImageNet are fetched from various locations.

To ease reproducibility, we use the robustbench library to fetch the models for CIFAR10 (no action required here). For MNIST and ImageNet, the models can be fetched from their original repositories; however, we provide them in a separate zip file to simplify the process. The zip file can be downloaded at https://zenodo.org/record/6549010, or using the direct download link https://zenodo.org/record/6549010/files/ALMA_models_data.zip.
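The archive can also be fetched directly from Python; a minimal sketch using only the standard library and the direct link above:

    import urllib.request

    url = 'https://zenodo.org/record/6549010/files/ALMA_models_data.zip'
    # Download the archive to the current directory (the root of this repository).
    urllib.request.urlretrieve(url, 'ALMA_models_data.zip')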

This zip file also contains the 1000 randomly selected images from the ImageNet validation set. These images have already been pre-processed (center-cropped to 224x224) and stored in a PyTorch tensor.

Once downloaded, the files should be extracted at the root of this repository.
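A minimal sketch of the extraction step, followed by a sanity check that the ImageNet tensor loads; the tensor file name is an assumption, check the extracted contents for the actual path:

    import zipfile
    import torch

    # Extract the archive at the root of this repository.
    with zipfile.ZipFile('ALMA_models_data.zip') as zf:
        zf.extractall('.')

    # Hypothetical file name for the 1000 pre-processed ImageNet images;
    # the loaded tensor should have shape (1000, 3, 224, 224).
    images = torch.load('imagenet_1000_images.pt')
    print(images.shape)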

Experiments

To run the experiments on MNIST, CIFAR10 and ImageNet, execute the scripts:

  • python minimal_attack_mnist.py
  • python minimal_attack_cifar10.py
  • python minimal_attack_imagenet.py

These scripts assume that the code runs on the first visible CUDA-enabled device. Changing torch.device('cuda:0') to torch.device('cpu') allows running them on the CPU, but this will be extremely slow. The scripts also assume about 16GB of available video memory on the CUDA device; for GPUs with less memory, batch_size can be reduced.
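The two changes mentioned above look like this inside the scripts (the batch_size value shown is a placeholder; see each script for its actual default):

    import torch

    # Default: run on the first visible CUDA device.
    device = torch.device('cuda:0')
    # Alternative: run on the CPU (extremely slow).
    # device = torch.device('cpu')

    # Reduce this if the GPU has less than ~16GB of video memory.
    batch_size = 128  # placeholder value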

All results will be saved in the results directory as .pt files containing Python dictionaries with information related to the attacks.
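A minimal sketch of inspecting one of these files; the file name is hypothetical, actual names depend on the dataset and attack:

    import torch

    # Hypothetical file name; actual names depend on the attack configuration.
    results = torch.load('results/mnist_alma_l2.pt')
    print(results.keys())  # dictionary with attack-related information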

Results

To extract all the results into a readable .csv file, use the compile_results.py script. This script contains a configuration of all the attacks that were run; if only some of the experiments were performed, the corresponding entries in the configuration can be commented out. The script creates one .csv file per dataset and saves them in the results directory.
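A minimal sketch of loading one of the compiled files with pandas; the file name is an assumption, one .csv file is written per dataset:

    import pandas as pd

    # Hypothetical file name; one .csv file is produced per dataset.
    df = pd.read_csv('results/mnist.csv')
    print(df.head())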

Curves

To plot the robust accuracy curves, run the scripts plot_results_mnist.py, plot_results_cifar10.py, and plot_results_imagenet.py. The resulting curves will be saved in the results/curves folder.

Citation

@InProceedings{rony2020augmented,
    author    = {Rony, J{\'e}r{\^o}me and Granger, Eric and Pedersoli, Marco and {Ben Ayed}, Ismail},
    title     = {Augmented Lagrangian Adversarial Attacks},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {7738--7747}
}
