A repository for reproducing the methods and experiments presented in this paper for understanding adversarial robustness based on a notion of label uncertainty. Created by Xiao Zhang.
The code was developed using Python 3 on Anaconda.
- Install PyTorch 1.6.0:
conda update -n base conda && conda install pytorch=1.6.0 torchvision -c pytorch -y
- Install other dependencies:
pip install waitGPU && conda install -c conda-forge cleanlab imageio
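A quick way to confirm the environment resolved is to import the main dependencies and check the PyTorch version (a minimal sketch; only the PyTorch version is pinned by the instructions above, the other imports just need to resolve):

```python
# Sanity check for the Anaconda environment (minimal sketch).
import torch
import torchvision
import cleanlab
import imageio
import waitGPU  # small utility for waiting until a GPU is free

print("torch:", torch.__version__)              # expect 1.6.0
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```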
00_data
folder contains the CIFAR-10H dataset
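For reference, CIFAR-10H provides human soft labels for the 10,000 CIFAR-10 test images. Below is a hedged sketch of turning them into a per-example label-uncertainty score; the file name cifar10h-probs.npy and the particular uncertainty score are assumptions here, see the paper and the scripts in 01_label_uncertainty for the definitions actually used:

```python
import numpy as np

# CIFAR-10H: human annotations for the 10,000 CIFAR-10 test images.
# The public release ships a 10000 x 10 matrix of per-class label frequencies;
# the file name below is an assumption about how 00_data stores it.
probs = np.load("00_data/cifar10h-probs.npy")   # shape (10000, 10), rows sum to 1

# One natural per-example uncertainty score: the probability mass annotators
# place off the majority label (the paper defines the score it actually uses).
majority = probs.argmax(axis=1)
uncertainty = 1.0 - probs[np.arange(len(probs)), majority]
print("mean label uncertainty:", uncertainty.mean())
```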
01_label_uncertainty
folder contains the code for visualizing label uncertainty (Figures 2 and 6) and the error region label uncertainty of classification models (Figure 3)
- visualize label uncertainty on CIFAR-10
python visualize.py
- pretrain CIFAR-10 classifiers
python train_cifar10.py --attack none && python train_cifar10.py --attack pgd
- compute error region statistics
python err_stats.py
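Conceptually, the error-region statistics compare label uncertainty on the examples a classifier gets wrong against the dataset average. A minimal sketch of that comparison, using hypothetical precomputed arrays in place of the outputs of the scripts above:

```python
import numpy as np

# Hypothetical precomputed inputs (the real err_stats.py derives these from the
# pretrained classifiers and the CIFAR-10H soft labels).
uncertainty = np.load("uncertainty.npy")   # (10000,) per-example label uncertainty
preds = np.load("preds.npy")               # (10000,) model predictions on the test set
labels = np.load("labels.npy")             # (10000,) dataset labels

error_region = preds != labels             # examples the classifier misclassifies

print("error rate:", error_region.mean())
print("avg label uncertainty (all examples):", uncertainty.mean())
print("avg label uncertainty (error region):", uncertainty[error_region].mean())
```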
02_concentration_estimation
folder contains the code for obtaining our intrinsic robustness estimates (Figure 4 and Table 1)
python concentration_lu_ball.py
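Roughly, the estimate comes from finding error regions with small adversarial expansion. The sketch below only illustrates the underlying quantity for one naive candidate region under an l_inf metric; the file names, the choice of region, and the metric are assumptions, and the actual script searches over better-shaped regions and enforces the label-uncertainty constraint:

```python
import numpy as np

# Take a candidate "error region" A (here simply the alpha fraction of test
# images with the highest label uncertainty) and empirically measure its
# eps-expansion, i.e. the fraction of images within l_inf distance eps of A.
# A classifier whose errors were exactly A could be robust on at most the
# complement of that expansion. File names below are hypothetical placeholders.
images = np.load("images.npy").astype(np.float32).reshape(10000, -1) / 255.0
uncertainty = np.load("uncertainty.npy")          # assumed precomputed scores

alpha, eps = 0.05, 8.0 / 255.0
region = uncertainty >= np.quantile(uncertainty, 1.0 - alpha)

# Distance from every image to its nearest neighbour inside the region.
dists = np.full(len(images), np.inf)
for a in images[region]:
    dists = np.minimum(dists, np.abs(images - a).max(axis=1))

expansion = (dists <= eps).mean()
print(f"measure(A) ~ {region.mean():.3f}")
print(f"measure(eps-expansion of A) ~ {expansion:.3f}")
print(f"implied robustness upper bound ~ {1.0 - expansion:.3f}")
```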
03_abstaining_classifier
folder contains the code for the experiments on the abstaining classifier (Figure 5)
python eval.py && python plot.py
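The abstaining classifier is allowed to reject inputs instead of predicting. A minimal sketch of the evaluation logic, assuming abstention is triggered by thresholding a per-example score; the real eval.py defines the abstention rule and metrics used in the paper:

```python
import numpy as np

# Hypothetical inputs: model predictions, dataset labels, and a per-example
# score used to decide abstention (e.g., a label-uncertainty estimate).
preds = np.load("preds.npy")
labels = np.load("labels.npy")
score = np.load("uncertainty.npy")

def evaluate_with_abstention(threshold):
    """Abstain on examples whose score exceeds the threshold; report the
    abstention rate and the accuracy on the remaining (answered) examples."""
    answered = score <= threshold
    abstain_rate = 1.0 - answered.mean()
    acc_on_answered = (preds[answered] == labels[answered]).mean()
    return abstain_rate, acc_on_answered

for t in (0.1, 0.2, 0.3):
    r, a = evaluate_with_abstention(t)
    print(f"threshold={t:.1f}: abstain {r:.1%}, accuracy on answered {a:.1%}")
```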
04_confident_learning
folder contains the code (adapted from cleanlab) for the experiments on estimating label error sets using confident learning (Figures 7 and 8)
- prepare training and testing datasets
cd data && bash prepare_dataset.bash
- pretrain CIFAR-10 classifier
python cifar10_train_crossval.py
- estimate label error sets using confident learning and visualize the difference with CIFAR-10H
python estimate_label_errors_test.py
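At its core, confident learning flags likely label errors from out-of-sample (cross-validated) predicted probabilities. A minimal sketch calling cleanlab directly, with hypothetical input files standing in for the cross-validation outputs produced above (find_label_issues is the cleanlab >= 2.0 entry point; on cleanlab 1.x the analogous call is cleanlab.pruning.get_noise_indices):

```python
import numpy as np
from cleanlab.filter import find_label_issues   # cleanlab >= 2.0 entry point

# Hypothetical inputs: given labels and out-of-sample (cross-validated)
# predicted probabilities for CIFAR-10 training images; in the real pipeline
# these come from cifar10_train_crossval.py.
labels = np.load("labels.npy")          # shape (N,), integer class labels 0..9
pred_probs = np.load("pred_probs.npy")  # shape (N, 10), rows sum to 1

# Boolean mask flagging examples whose given label is likely wrong.
issue_mask = find_label_issues(labels=labels, pred_probs=pred_probs)
print("estimated number of label errors:", issue_mask.sum())
```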