Differentially-Private-Deep-Learning

Update 11/18/2021

Added the results of fine-tuning RoBERTa-large with a large batch size and full precision.

Update 09/01/2021

Our code for fine-tuning BERT models with differential privacy now supports loading official RoBERTa checkpoints.


This repo provides some example code to help you get started with differentially private deep learning.

Our implementation uses PyTorch. We cover several algorithms, including Differentially Private SGD [1], Gradient Embedding Perturbation [2], and Reparametrized Gradient Perturbation [3].
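For orientation, the core of DP-SGD [1] is per-example gradient clipping followed by calibrated Gaussian noise. Below is a minimal, self-contained sketch of a single step; the function name, the slow microbatch loop, and the hyperparameters (`max_grad_norm`, `noise_multiplier`) are illustrative placeholders, not this repo's actual API:

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                max_grad_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step [1]: clip each per-example gradient to
    max_grad_norm, sum, add Gaussian noise, then average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-example gradients via a microbatch loop (simple but slow;
    # practical implementations vectorize this computation).
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip the whole per-example gradient to norm max_grad_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        clip_coef = (max_grad_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * clip_coef)

    batch_size = len(batch_x)
    for p, s in zip(params, summed):
        # Gaussian noise calibrated to the clipping norm, then average.
        noise = torch.randn_like(s) * (noise_multiplier * max_grad_norm)
        p.grad = (s + noise) / batch_size
    optimizer.step()
```

GEP [2] and RGP [3] improve the privacy-utility trade-off by reducing the dimension of the perturbed gradients: GEP projects per-sample gradients onto a low-dimensional subspace, while RGP reparametrizes the weights themselves into low-rank factors.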

In the vision folder, we implement the algorithms in [1,2,3] to train deep ResNets on benchmark vision datasets.

In the language folder, we implement the algorithm in [3] to fine-tune BERT models on four tasks from the GLUE benchmark.
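For intuition on why [3] scales to BERT-sized models: each large weight matrix is replaced by a low-rank factorization, so the gradients that must be clipped and noised live in a much smaller space. The class below is a heavily simplified, hypothetical sketch of this idea; the actual method in [3] additionally keeps a residual weight and chooses the factors by power iteration on historical updates:

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Toy low-rank linear layer: the weight is represented as L @ R
    with rank r << min(d_out, d_in), so DP noise is added to
    r * (d_in + d_out) parameters instead of d_in * d_out."""
    def __init__(self, d_in, d_out, r):
        super().__init__()
        self.L = nn.Parameter(torch.randn(d_out, r) / r ** 0.5)
        self.R = nn.Parameter(torch.randn(r, d_in) / d_in ** 0.5)

    def forward(self, x):
        # x: (batch, d_in) -> (batch, d_out), equivalent to x @ (L @ R).T
        return x @ self.R.t() @ self.L.t()
```

In this simplified view, per-example gradients are computed, clipped, and perturbed only for L and R, which is what keeps the noise dimension small relative to the full weight matrix.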

References

[1]: Deep learning with differential privacy. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. In ACM SIGSAC Conference on Computer and Communications Security (CCS), 2016.

[2]: Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning. Da Yu, Huishuai Zhang, Wei Chen, and Tie-Yan Liu. In International Conference on Learning Representations (ICLR), 2021.

[3]: Large Scale Private Learning via Low-rank Reparametrization. Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. In International Conference on Machine Learning (ICML), 2021.
