This is a TensorFlow 2 implementation of our WACV 2022 DNOW Workshop paper "Reconstructive Training for Real-World Robustness in Image Classification": https://openaccess.thecvf.com/content/WACV2022W/DNOW/html/Patrick_Reconstructive_Training_for_Real-World_Robustness_in_Image_Classification_WACVW_2022_paper.html
Reconstructive training runs inside a Docker container. We provide a prebuilt image at https://hub.docker.com/repository/docker/utsavisionailab/reconstructivetraining, as well as the Dockerfile if you prefer to build the image yourself:
cd ReconstructiveTraining
docker build -t <youruser>/reconstructivetraining .
Once the image is built (or pulled), start a container. Below is an example command:
docker run --gpus all --rm --shm-size 64G -it -u $(id -u):$(id -g) -v "$(pwd)":/app utsavisionailab/reconstructivetraining:latest
To train the defense, first warm up the generator using the following command:
python3 generator.py \
--original_input_dir PATH_TO_ORIGINAL_IMAGES_DIRECTORY \
--attacked_input_dir PATH_TO_ATTACKED_IMAGES_DIRECTORY \
--weights_output_dir PATH_TO_STORE_MODEL_WEIGHTS \
--logs_output_dir PATH_TO_STORE_TENSORBOARD_LOGS \
--image_size HEIGHT WIDTH COLOR \
--batch_size BATCH_SIZE \
--epochs NUM_OF_EPOCHS_TO_TRAIN
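The warm-up stage trains the generator on pairs of clean and attacked images. Assuming each attacked image shares its filename with its clean counterpart (a plausible pairing convention, not confirmed by the repo), a quick sanity check before launching a long run might look like:

```python
import os

def paired_files(original_dir, attacked_dir):
    """Return filenames present in both directories.

    Assumed convention (hypothetical, verify against the repo): each file in
    attacked_input_dir has an identically named clean counterpart in
    original_input_dir.
    """
    original = set(os.listdir(original_dir))
    attacked = set(os.listdir(attacked_dir))
    unpaired = original ^ attacked  # files missing a counterpart
    if unpaired:
        raise ValueError(f"unpaired files: {sorted(unpaired)[:5]}")
    return sorted(original)
```

Running this on your two input directories before training can catch missing or misnamed files early.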
Before we can finish training, we need to set up a discriminator config file. Below is an example config file for VGG19:
{
    "module": "vgg19",
    "model": "VGG19",
    "weights": "imagenet",
    "classes": 1000,
    "image_width": 224,
    "image_height": 224,
    "image_channels": 3,
    "clip_min": [
        -103.939,
        -116.779,
        -123.68
    ],
    "clip_max": [
        151.061,
        138.22101,
        131.32
    ],
    "mode": "caffe",
    "verbose": 1,
    "workers": 36
}
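The clip_min/clip_max values above follow from Keras' "caffe" preprocessing mode, which converts RGB to BGR and subtracts the ImageNet channel means [103.939, 116.779, 123.68]. A valid pixel in [0, 255] therefore lands in [-mean, 255 - mean] per channel, which a quick check confirms:

```python
# ImageNet BGR channel means used by Keras "caffe"-mode preprocessing.
IMAGENET_BGR_MEANS = [103.939, 116.779, 123.68]

# After mean subtraction, a 0..255 pixel lands in [-mean, 255 - mean]
# per channel, matching clip_min/clip_max in the example config.
clip_min = [-m for m in IMAGENET_BGR_MEANS]
clip_max = [255.0 - m for m in IMAGENET_BGR_MEANS]
```

If you target a model with a different preprocessing mode, recompute these bounds accordingly.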
After warming up the generator and creating the discriminator config file, we are ready to attach the target model and fully train the defense:
python3 gan.py \
--original_input_dir PATH_TO_ORIGINAL_IMAGES_DIRECTORY \
--attacked_input_dir PATH_TO_ATTACKED_IMAGES_DIRECTORY \
--generator_input_dir PATH_WHERE_GENERATOR_MODEL_WEIGHTS_WERE_STORED \
--weights_output_dir PATH_TO_STORE_MODEL_WEIGHTS \
--logs_output_dir PATH_TO_STORE_TENSORBOARD_LOGS \
--discriminator_config_file PATH_TO_DISCRIMINATOR_CONFIG \
--labels_file PATH_TO_LABELS_FILE \
--num_classes NUM_OF_UNIQUE_CLASSES \
--image_size HEIGHT WIDTH COLOR \
--batch_size BATCH_SIZE \
--epochs NUM_OF_EPOCHS_TO_TRAIN
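Because gan.py and eval.py both read --discriminator_config_file, a small validation pass can catch typos before a long training run. The required keys below are taken from the VGG19 example config above, not from the repo's actual parser, so treat this as a sketch:

```python
import json

# Keys present in the example VGG19 config (assumed schema).
REQUIRED_KEYS = {
    "module", "model", "weights", "classes",
    "image_width", "image_height", "image_channels",
    "clip_min", "clip_max", "mode", "verbose", "workers",
}

def validate_discriminator_config(path):
    """Load a discriminator config and check it against the example
    config's keys (hypothetical schema, verify against the repo)."""
    with open(path) as f:
        cfg = json.load(f)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise KeyError(f"missing config keys: {sorted(missing)}")
    # Clip bounds should supply one value per image channel.
    for key in ("clip_min", "clip_max"):
        if len(cfg[key]) != cfg["image_channels"]:
            raise ValueError(f"{key} must have image_channels entries")
    return cfg
```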
To evaluate the defense, run the following command:
python3 eval.py \
--input_dir PATH_TO_IMAGES_DIRECTORY \
--labels_file PATH_TO_LABELS_FILE \
--discriminator_config_file PATH_TO_DISCRIMINATOR_CONFIG \
--weights_dir PATH_WHERE_MODEL_WEIGHTS_WERE_STORED \
--image_size HEIGHT WIDTH COLOR \
--num_classes NUM_OF_UNIQUE_CLASSES \
--defense
If you use our code in a publication, please use the citation below:
@InProceedings{Patrick_2022_WACV,
    author    = {Patrick, David and Geyer, Michael and Tran, Richard and Fernandez, Amanda},
    title     = {Reconstructive Training for Real-World Robustness in Image Classification},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2022},
    pages     = {251-260}
}