
Are Data-driven Explanations Robust against Out-of-distribution Data?

[Paper] [Code] [Video] [Deep-REAL Lab]

This repository holds the PyTorch implementation of Distributionally Robust Explanations (DRE) in Are Data-driven Explanations Robust against Out-of-distribution Data? by Tang Li, Fengchun Qiao, Mengmeng Ma, and Xi Peng. If you find our code useful in your research, please consider citing:

@inproceedings{li2023dre,
 title={Are Data-driven Explanations Robust against Out-of-distribution Data?},
 author={Li, Tang and Qiao, Fengchun and Ma, Mengmeng and Peng, Xi},
 booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
 year={2023}
}

Introduction

We study the out-of-distribution (OOD) robustness of data-driven explanations. Our evaluations show that data-driven explanations are susceptible to distributional shifts. However, acquiring ground-truth explanations for all samples, or a one-to-one mapping between samples from different distributions, is prohibitively expensive or even impossible in practice. To this end, we propose Distributionally Robust Explanations (DRE), which, inspired by self-supervised learning, leverages mixed explanations to provide supervisory signals for the learning of explanations.
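To make the idea concrete, here is a minimal, hedged sketch of a mixed-explanation objective. This is an illustration only: the toy model, the input-gradient explainer, and names such as explain are stand-ins we introduce, not the repository's actual code (which visualizes explanations with Grad-CAM).

import torch
import torch.nn.functional as F

# Toy classifier standing in for the real network (assumption).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

def explain(x, y):
    # Differentiable input-gradient saliency as a stand-in explainer.
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y[:, None]).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs()

x1 = torch.randn(4, 3, 32, 32)
x2 = torch.randn(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
lam = 0.5

# The mix of the per-sample explanations supervises the explanation of
# the mixed input, so no ground-truth explanations are needed.
x_mix = lam * x1 + (1 - lam) * x2
e_target = (lam * explain(x1, y) + (1 - lam) * explain(x2, y)).detach()
expl_loss = F.mse_loss(explain(x_mix, y), e_target)
expl_loss.backward()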

[Figure: overview of the proposed DRE method]

Pretrained Weights

DRE models:
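Before wiring a downloaded checkpoint into a model, it can help to inspect its layout first; a small sketch (the path ckpts/best_model.pth mirrors the fidelity command below and is an assumption):

import torch

# Hypothetical checkpoint path; adjust to where you saved the weights.
state = torch.load("ckpts/best_model.pth", map_location="cpu")
# The file may hold a bare state_dict or a wrapper dict; inspect first.
keys = list(state)[:5] if isinstance(state, dict) else state
print(type(state), keys)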

Quick Start

This repository reproduces our results on Terra Incognita and VLCS. It is built on Python 3, PyTorch v1.12.1, and CUDA v10.2 on Ubuntu 18.04. Please install all required packages by running:

pip install -r requirements.txt

Data Download

To download the datasets, please run:

python download.py --data_dir=./

Please note that some URLs may not work due to various factors. You can copy the URLs and download them manually.

Prediction Results

The results for explanation quality and prediction accuracy: [Figure: quantitative results]

To reproduce the results of our DRE method, please run:

python -m dre.train \
      --dataset terra_incognita \
      --model DRE  

To reproduce the results of the baseline ERM method, please run:

python -m dre.train \
      --dataset terra_incognita \
      --model ERM  

For other baselines, such as IRM, GroupDRO, and Mixup, please run the following (you can specify the baseline method via the --algorithm flag):

python3 -m domainbed.scripts.train \
       --data_dir=./data/ \
       --algorithm IRM \
       --dataset terra_incognita \
       --test_env 0
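To hold out each environment in turn, which is the usual DomainBed protocol, a simple shell loop over the command above works; the four environment indices for Terra Incognita are an assumption here:

for env in 0 1 2 3; do
    python3 -m domainbed.scripts.train \
        --data_dir=./data/ \
        --algorithm IRM \
        --dataset terra_incognita \
        --test_env $env
done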

Explanation Visualization

The explanations using Grad-CAM: [Figure: qualitative Grad-CAM comparisons]

To reproduce the explanation comparisons between DRE and the baseline methods, please run the notebooks in "./dre/explanations/visualizations/". For example, for the Grad-CAM comparison between DRE and ERM, open:

./dre/explanations/visualizations/grad_cam_erm.ipynb
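For reference, the core of Grad-CAM can be sketched in a few lines of PyTorch. This is a generic illustration: the ResNet-18 backbone, layer4 as the target layer, and the hook-based capture are our assumptions, not necessarily what the notebooks do.

import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
target_layer = model.layer4  # assumed last conv stage

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)
model(x)[0].max().backward()  # backprop the top-class score

weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # GAP of gradients
cam = torch.relu((weights * acts["a"]).sum(dim=1))    # channel-weighted sum
cam = torch.nn.functional.interpolate(
    cam.unsqueeze(0), size=x.shape[-2:], mode="bilinear")[0]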

Explanation Fidelity

To reproduce the explanation fidelity results, please run:

python -m dre.explanations.fidelity.evaluate_auc \
      --ckpt-path ../../ckpts/best_model.pth \
      --root ../../../data/terra_incognita/location_38
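Fidelity is typically measured with an insertion/deletion-style AUC: remove inputs in order of saliency and integrate the model's confidence curve. Below is a minimal deletion-AUC sketch under those assumptions; the repository's evaluate_auc may compute the metric differently, and saliency is assumed to have the same shape as x.

import torch

@torch.no_grad()
def deletion_auc(model, x, saliency, y, steps=20):
    # Zero out the most salient inputs first; a faster confidence
    # drop means a more faithful explanation (lower AUC).
    order = saliency.flatten().argsort(descending=True)
    n = order.numel()
    scores = []
    for i in range(steps + 1):
        k = n * i // steps
        masked = x.flatten().clone()
        masked[order[:k]] = 0.0
        prob = model(masked.view_as(x).unsqueeze(0)).softmax(-1)[0, y]
        scores.append(prob.item())
    return torch.trapz(torch.tensor(scores), dx=1.0 / steps).item()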

TODO

  • Training code
  • Evaluation code
  • Terra Incognita
  • VLCS
  • Urban Land

Acknowledgement

Part of our code is borrowed from the following repositories.

We thank the authors for releasing their code. Please also consider citing their works.
