
XAIandDataPrivacy

This repository contains the experiments for the bachelor's thesis "Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy". I show that the scikit-learn implementation of Individual Conditional Expectation (ICE) can enable two privacy attacks: membership inference and training data extraction. Additionally, counterfactuals from the explainer DiCE are shown to enable training data extraction when the kd-tree method is used. The experiments use the "Adult Data Set" from the UCI Machine Learning Repository and the "Logistic regression To predict heart disease" dataset from Kaggle.
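For orientation, the sketch below shows how the two attacked explainers are typically obtained with off-the-shelf tooling: ICE curves via scikit-learn and counterfactuals via DiCE with the kd-tree method. This is not the thesis code; the file name, feature subset, income encoding, and random-forest model are illustrative assumptions.

```python
# Minimal sketch (not the thesis experiments): scikit-learn ICE curves and
# DiCE kd-tree counterfactuals on an assumed, simplified version of the Adult data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence
import dice_ml

# Assumption: a CSV of the Adult data with these numeric columns and an "income" column.
df = pd.read_csv("adult.csv")                              # hypothetical file name
features = ["age", "education-num", "hours-per-week"]      # assumed numeric subset
X = df[features]
y = (df["income"] == ">50K").astype(int)                   # assumed binary encoding
model = RandomForestClassifier(random_state=0).fit(X, y)

# Individual Conditional Expectation: one curve per training instance.
ice = partial_dependence(model, X, features=["age"], kind="individual")
print(ice["individual"].shape)                             # (n_outputs, n_instances, n_grid_points)

# DiCE counterfactuals with method="kdtree", which looks up nearby points from
# the supplied data -- the behaviour the training data extraction attack targets.
data = dice_ml.Data(dataframe=X.assign(income=y),
                    continuous_features=features,
                    outcome_name="income")
m = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, m, method="kdtree")
cfs = explainer.generate_counterfactuals(X.iloc[:1], total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe()
```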

Execute with Docker

Run `docker build -t xaidataprivacy .` in the directory containing the Dockerfile.

Then run the image with your working directory mounted: `docker run -p 8888:8888 -v WORKING_DIRECTORY:/home/jovyan/work xaidataprivacy`. Replace `WORKING_DIRECTORY` with the path to this repository.

Open the Jupyter notebook at the URL printed in the terminal output.
