
Poisoning Attacks with Back-gradient Optimization

MATLAB code with an example of the poisoning attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization." The code includes the attack against Adaline, logistic regression, and a small multilayer perceptron (MLP) on the MNIST dataset (using digits 1 and 7).

Use

To generate the random training/validation splits, first run the script createSplits.m in the "MNIST_splits" folder. Then run the attack scripts: testAttackAdalineMNIST.m for Adaline, testAttackLRmnist.m for logistic regression, and testAttackMLPmnist.m for the MLP. A minimal sketch of this workflow is shown below.
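
As a rough illustration, the steps above can be run from the MATLAB command window as follows. This is only a sketch assuming the repository root is the current folder; the script and folder names are taken from this README, and any working-directory handling inside the scripts themselves may differ.

cd MNIST_splits
createSplits              % generate the random training/validation splits
cd ..
testAttackLRmnist         % poisoning attack against logistic regression
% testAttackAdalineMNIST  % poisoning attack against Adaline
% testAttackMLPmnist      % poisoning attack against the MLP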

Citation

Please cite this paper if you use the code in this repository as part of a published research project.

@inproceedings{munoz2017towards,
  title={{Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization}},
  author={Mu{\~n}oz-Gonz{\'a}lez, Luis and Biggio, Battista and Demontis, Ambra and Paudice, Andrea and Wongrassamee, Vasin and Lupu, Emil C and Roli, Fabio},
  booktitle={Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security},
  pages={27--38},
  year={2017}
}

Related papers

You may also be interested in some of our related papers on data poisoning:

About the authors

This research is a collaboration between the Resilient Information Systems Security (RISS) group at Imperial College London and the Pattern Recognition and Applications (PRA) Lab at the University of Cagliari.
