ML-PePR stands for Machine Learning Pentesting for Privacy and Robustness. It is a Python library for evaluating the privacy and robustness of machine learning models. PePR is easily extensible and hackable. Its attack runner allows structured pentesting, and its report generator produces straightforward privacy and robustness reports (LaTeX/PDF) from the attack results.
Full documentation 📚 including a quick-start guide 🏃💨
Caution: we cannot guarantee the correctness of PePR. Always check the plausibility of your results!
We offer several installation options; follow the instructions below for the desired one. If you want to install the latest development version, please use the code repository of this library. The current release is tested only with Python 3.6.
- Clone the repository:
  ```
  git clone https://github.com/hallojs/ml-pepr.git
  ```
- Change to the project directory:
  ```
  cd ml-pepr
  ```
- Run in the terminal:
  ```
  pip install .
  ```
Typical installation from PyPI:
```
pip install mlpepr
```
To use PePR inside a Docker container, build a CPU or GPU image. Note that for the GPU image your system must be set up for GPU use; see the TensorFlow Docker requirements.
Build the Docker image:
- Clone the repository:
  ```
  git clone https://github.com/hallojs/ml-pepr.git
  ```
- Change to the project directory:
  ```
  cd ml-pepr
  ```
- Build the Docker image:
  ```
  docker build -t <image name> . -f Dockerfile-tf-<cpu or gpu>
  ```
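Once the image is built, you can start a container from it. The commands below are a sketch: `<image name>` is whatever tag you chose in the build step, and mounting the current directory at `/workspace` is an assumption you should adapt to your own setup.

```shell
# Start an interactive container from the freshly built image.
# Mounting the current directory at /workspace (an assumed path)
# makes your local experiments available inside the container.
docker run --rm -it -v "$(pwd)":/workspace <image name> bash

# For the GPU image, additionally pass the GPUs through
# (requires the NVIDIA Container Toolkit on the host):
docker run --rm -it --gpus all -v "$(pwd)":/workspace <image name> bash
```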
PePR offers the following attacks:
| Attack | Type | Google Colab |
|---|---|---|
| Membership Inference Attack (mia) [1] | Privacy (Black Box) | _ |
| Direct Generalized Membership Inference Attack (gmia) [2] | Privacy (Black Box) | _ |
| Foolbox Attacks [3] | Robustness | _ |
| Adversarial Robustness Toolbox (ART) Attacks [4] | Robustness/Privacy | _ |
The entire content of the repository is licensed under GPLv3. The workflow for generating the documentation was developed by Michael Altfield and was modified and extended for our purposes.
[1] Shokri, Reza, et al. "Membership inference attacks against machine learning models." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.

[2] Long, Yunhui, et al. "Understanding membership inferences on well-generalized learning models." arXiv preprint arXiv:1802.04889 (2018).

[3] Foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX.

[4] ART: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams.