ML-PePR: Pentesting Privacy and Robustness (Beta)

ML-PePR stands for Machine Learning Pentesting for Privacy and Robustness and is a Python library for evaluating machine learning models. PePR is easily extensible and hackable. PePR's attack runner allows structured pentesting, and the report generator produces straightforward privacy and robustness reports (LaTeX/PDF) from the attack results.

Full documentation 📚 including a quick-start guide 🏃💨

Caution: we cannot guarantee the correctness of PePR. Always check the plausibility of your results!

Installation

We offer several installation options; follow the instructions below for the option you need. If you want the latest development version, install from the code repository of this library. The current release is only tested with Python 3.6.

Repository

  1. Clone the repository: git clone https://github.com/hallojs/ml-pepr.git
  2. Change to the project directory: cd ml-pepr
  3. Install the package: pip install .

PyPI

Install the latest release from PyPI: pip install mlpepr

Docker

To use PePR inside a Docker container, build a CPU or GPU image. Note that for the GPU image your system must be set up for GPU use; see the TensorFlow Docker requirements.

Build the docker image:

  1. Clone the repository: git clone https://github.com/hallojs/ml-pepr.git
  2. Cd to project directory: cd ml-pepr
  3. Build the docker image: docker build -t <image name> . -f Dockerfile-tf-<cpu or gpu>

Attack Catalog

PePR offers the following attacks:

| # | Attack | Type | Google Colab |
|---|--------|------|--------------|
| 1 | Membership Inference Attack (mia) | Privacy (Black Box) | nb0_ |
| 2 | Direct Generalized Membership Inference Attack (gmia) | Privacy (Black Box) | nb1_ |
| 3 | Foolbox Attacks | Robustness | nb2_ |
| 4 | Adversarial Robustness Toolbox (ART) Attacks | Robustness/Privacy | nb3_ |
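Catalog entries 3 and 4 wrap existing attack libraries. As a rough illustration of the kind of robustness evaluation these attacks perform, the sketch below trains a throwaway MNIST classifier and attacks it directly with Foolbox's own API (see reference 3); it deliberately bypasses PePR's attack runner, and the model, epsilon, and batch size are arbitrary stand-ins rather than recommendations.

```python
import tensorflow as tf
import foolbox as fb

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# A deliberately small target model, trained just long enough to be attackable.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)

# Wrap the model for Foolbox and run an L-infinity PGD attack on a test batch.
fmodel = fb.TensorFlowModel(model, bounds=(0.0, 1.0))
images = tf.convert_to_tensor(x_test[:64])
labels = tf.convert_to_tensor(y_test[:64].astype("int64"))
attack = fb.attacks.LinfPGD()
_, adversarials, is_adv = attack(fmodel, images, labels, epsilons=0.1)

print(f"PGD success rate at epsilon 0.1: {is_adv.numpy().mean():.2%}")
```

Within PePR, such a Foolbox attack would instead be configured through the attack runner, and its results would flow into the generated LaTeX/PDF report; see the full documentation for the exact attack parameters.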

License

The entire content of the repository is licensed under GPLv3. The workflow for generating the documentation was developed by Michael Altfield and was modified and extended for our purposes.

References


  1. Shokri, Reza, et al. "Membership inference attacks against machine learning models." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.

  2. Long, Yunhui, et al. "Understanding membership inferences on well-generalized learning models." arXiv preprint arXiv:1802.04889 (2018).

  3. Foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX.

  4. ART: Adversarial Robustness Toolbox, a Python library for machine learning security (evasion, poisoning, extraction, inference) for red and blue teams.