Membership_Inference_Attack_DP

Acknowledgement and Reference: This code builds on the research work described at https://www.biorxiv.org/content/10.1101/2020.08.03.235416v1.full and its GitHub repository https://github.com/work-hard-play-harder/DP-MIA

This repository adds an extra feature: testing privacy-related ML attacks (membership inference) against multi-class classification models trained with differential privacy.

This repository contains two models, a CNN and an LSTM, trained on a multi-class classification problem with differential privacy (using SparseCategoricalCrossentropy as the loss function).
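For orientation, the sketch below shows a minimal LSTM multi-class classifier of the kind described above, compiled with SparseCategoricalCrossentropy. It is not the repository's exact code; the layer sizes and shapes are illustrative assumptions.

```python
# Minimal sketch of an LSTM multi-class classifier (illustrative, not the repo's exact code).
import tensorflow as tf

NUM_FEATURES = 14   # assumption: features per record, treated as a length-14 sequence
NUM_CLASSES = 2     # assumption: number of target classes

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(NUM_FEATURES, 1)),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])

model.compile(optimizer='sgd',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])
```

The differentially private variant of this setup, using DPGradientDescentGaussianOptimizer, is sketched at the end of this README.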

Requirements

Python 3.5 or higher
TensorFlow 1.14 or 1.15

pip install tensorflow-privacy

Steps to execute

  1. After downloading the code, set path = '' to your current directory path (see the sketch after this list).
  2. The dataset is already included in the cloned repository and does not need to be downloaded. It uses data from http://archive.ics.uci.edu/ml/datasets/Adult
  3. LSTM and CNN (Conv1D) models are used to train the shadow and attack models.
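As a rough illustration of steps 1 and 2, the snippet below fills in path and reads the Adult data with pandas. It is a sketch rather than the repository's exact loading code, and the file name 'adult.data' is an assumption; adjust it to the copy shipped with the repository.

```python
import os
import pandas as pd

path = ''  # step 1: fill in with your current directory path

# Standard column names for the UCI Adult dataset.
columns = [
    'age', 'workclass', 'fnlwgt', 'education', 'education-num',
    'marital-status', 'occupation', 'relationship', 'race', 'sex',
    'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
    'income',
]

# 'adult.data' is an assumed file name; use the file included in the repository.
data = pd.read_csv(os.path.join(path, 'adult.data'),
                   names=columns, skipinitialspace=True)
print(data.shape)
```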


  1. The L1 kernel-regularization parameter can be tuned to reduce overfitting and lower the attack accuracy.
  2. Both models are trained with differential privacy using DPGradientDescentGaussianOptimizer (see the sketch after this list).
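The sketch below illustrates both points: a Conv1D classifier with an L1 kernel regularizer, compiled with DPGradientDescentGaussianOptimizer from tensorflow-privacy and a per-example SparseCategoricalCrossentropy loss. It is not the repository's exact code; all hyperparameter values are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer import (
    DPGradientDescentGaussianOptimizer,
)

NUM_FEATURES = 14   # assumption: number of input features after preprocessing
NUM_CLASSES = 2     # assumption: number of target classes
L1_STRENGTH = 0.01  # the L1 kernel-regularization parameter noted above (illustrative value)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(
        filters=32, kernel_size=3, activation='relu',
        kernel_regularizer=tf.keras.regularizers.l1(L1_STRENGTH),
        input_shape=(NUM_FEATURES, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])

# DP-SGD: gradients are clipped per microbatch and Gaussian noise is added
# before each update. Values below are illustrative.
optimizer = DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=1.1,
    num_microbatches=32,   # must evenly divide the batch size
    learning_rate=0.15)

# The loss must be computed per example (reduction=NONE) so the optimizer
# can split it into microbatch gradients.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
```

When fitting such a model, choose a batch size that is a multiple of num_microbatches so the per-example losses can be split evenly into microbatches.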