BashivanLab/afd

Official code for Adversarial Feature Desensitization (AFD).

https://arxiv.org/abs/2006.04621

You can run the training procedure by calling afd_train.py. It currently supports the MNIST, CIFAR10, and CIFAR100 datasets. We have tested the code with ResNet18 on MNIST, CIFAR10, and CIFAR100.

Example:

python afd_train.py --dataset=cifar10 --enc_model=resnet18norm --save_path=[SAVE_PATH]
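
The same entry point should cover the other supported datasets. The flag values below follow the --dataset=cifar10 pattern above and are assumptions, so check the argument parser in afd_train.py if they differ:

python afd_train.py --dataset=mnist --enc_model=resnet18norm --save_path=[SAVE_PATH]

python afd_train.py --dataset=cifar100 --enc_model=resnet18norm --save_path=[SAVE_PATH]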

Model checkpoints:

Download the pretrained models from the links below:

MNIST-checkpoint

CIFAR10-checkpoint

CIFAR100-checkpoint

Use notebooks/test.ipynb to run attacks on the pretrained models.
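
For reference, the snippet below is a minimal standalone sketch of the kind of robustness check the notebook performs: an L-infinity PGD attack followed by accuracy on the perturbed inputs. It uses only plain PyTorch; the pgd_attack and robust_accuracy helpers are hypothetical names introduced here, and the commented-out loading lines are placeholders, since the actual model constructor and checkpoint format are defined by the repo and notebook.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L-infinity PGD: start from a random point in the eps-ball around x and
    # take signed-gradient ascent steps on the cross-entropy loss.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

def robust_accuracy(model, loader, device="cpu"):
    # Accuracy of the model on PGD-perturbed inputs drawn from `loader`.
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

# Placeholder loading (hypothetical names; see notebooks/test.ipynb for the
# actual model constructor and checkpoint keys):
# model = build_model()
# model.load_state_dict(torch.load("cifar10_checkpoint.pt"))
# print(robust_accuracy(model, test_loader))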

Reference

@inproceedings{bashivan2021adversarial,
  title={Adversarial Feature Desensitization},
  author={Bashivan, Pouya and Bayat, Reza and Ibrahim, Adam and Ahuja, Kartik and Faramarzi, Mojtaba and Laleh, Touraj and Richards, Blake and Rish, Irina},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}
