
D2ADA: Dynamic Density-aware Active Domain Adaptation for Semantic Segmentation

Official PyTorch implementation of "D2ADA: Dynamic Density-aware Active Domain Adaptation for Semantic Segmentation" (Wu et al., ECCV 2022).

In this work, we present D2ADA, a general active domain adaptation framework for domain adaptive semantic segmentation. Here is a brief introduction video of our work (remember to turn on the sound 😀).

90sec_intro.mov

Environment Setup

  • OS: Ubuntu 20.04
  • CUDA: 11.3
  • Installation
    conda env create -f environment.yml
    

A. Data Preparation

  • Download Cityscapes, GTA5, and SYNTHIA datasets.

  • Region division (For training only): run the following preprocessing code for the three datasets.

    python3 data_preprocessing/superpixel_gen.py \
        --dataset {cityscapes,GTA5,SYNTHIA} --datadir <DATASETDIR>
    
    • Note: For SYNTHIA dataset, the --datadir argument should be <SYNTHIA-ROOT>/RAND_CITYSCAPES/.
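To illustrate what the region-division step produces, here is a minimal sketch assuming a simplified setup: the repo's superpixel_gen.py computes actual superpixels, while this toy stand-in just tiles each image into a fixed grid of rectangular regions and assigns every pixel a region ID. The function name divide_into_regions is illustrative, not from the repo.

```python
def divide_into_regions(height, width, grid=4):
    """Return a height x width map assigning each pixel a region ID.

    A grid x grid tiling is a crude stand-in for superpixel segmentation;
    the real preprocessing produces irregular, content-aware regions.
    """
    region_map = [[0] * width for _ in range(height)]
    cell_h = (height + grid - 1) // grid  # ceil division for edge cells
    cell_w = (width + grid - 1) // grid
    for y in range(height):
        for x in range(width):
            region_map[y][x] = (y // cell_h) * grid + (x // cell_w)
    return region_map

# A tiny 8x8 "image" split into a 2x2 grid -> region IDs 0..3
regions = divide_into_regions(8, 8, grid=2)
```

Active selection then operates at the granularity of these region IDs rather than whole images, which is what makes region-level annotation budgets (1%, 2%, ...) possible.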

B. Training

Step0: Model Warm-up

For simplicity, you can download our pretrained warm-up models directly. Alternatively, you can run the following scripts for UDA warm-up (or supervised warm-up) yourself.

| Model | Benchmark | mIoU | Download |
| --- | --- | --- | --- |
| DeepLabV2-ResNet101 | GTA5 | 44.61 | Link |
| DeepLabV3Plus-ResNet101 | GTA5 | 45.51 | Link |
| DeepLabV2-ResNet101 | SYNTHIA | 39.95 | Link |
| DeepLabV3Plus-ResNet101 | SYNTHIA | 43.04 | Link |
How to run our warm-up script

# Model Warm-up 
CUDA_VISIBLE_DEVICES=X python3 warmup.py -p <exp-path> [--warmup {uda_warmup, sup_warmup}]
  • Default arguments are configured in utils/common.py. You can override them via command-line arguments.
    • -m: Choose model backbone, like deeplabv2_resnet101 or deeplabv3plus_resnet101.
    • --src_dataset: Choose GTA5 or SYNTHIA dataset.
    • --src_data_dir, --trg_data_dir, --val_data_dir: Set dataset path.
    • --src_datalist: Use either GTA5 datalist or SYNTHIA datalist.

Step1: Run our D2ADA framework

CUDA_VISIBLE_DEVICES=X python3 train_ADA.py -p <exp-path> --init_checkpoint <checkpoint path> \
    --save_feat_dir <feature directory> [--datalist_path PREVIOUS_DATALIST_PATH] 
  • Default arguments are configured in utils/common.py. You can override them via command-line arguments.
    • --init_checkpoint: Path of the initial model. Use our warm-up model at iteration #0, or use exp-path/checkpoint0X.tar to resume the experiment at iteration #X.
    • --save_feat_dir: Directory for saving intermediate information (GMM models, region features, ...) during density-aware selection.
    • --datalist_path: Load a previous datalist to continue the experiment.
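The core idea behind density-aware selection can be sketched as follows. This is a toy illustration only: the repo fits GMMs on deep features, whereas this sketch models each domain with a single 1-D Gaussian (via the standard library's statistics.NormalDist) and scores a region by the log-density ratio between target and source. All names and feature values here are hypothetical.

```python
import math
from statistics import NormalDist, mean, stdev

# Hypothetical 1-D features from each domain (real code uses deep features).
src_feats = [0.1, 0.2, 0.15, 0.05, 0.25]
trg_feats = [0.9, 1.0, 1.1, 0.95, 1.05]

# Single Gaussians stand in for the per-domain GMM density estimators.
p_src = NormalDist(mean(src_feats), stdev(src_feats))
p_trg = NormalDist(mean(trg_feats), stdev(trg_feats))

def density_ratio_score(feat):
    # High score: dense in the target domain but rare in the source
    # domain, i.e. a region the model has likely not learned well,
    # making it a good candidate to annotate.
    return math.log(p_trg.pdf(feat) + 1e-12) - math.log(p_src.pdf(feat) + 1e-12)

candidates = {"region_a": 1.0, "region_b": 0.15}
best = max(candidates, key=lambda r: density_ratio_score(candidates[r]))
# region_a lies in the target-domain mode, so it wins the selection
```

Regions with the highest scores are sent for annotation each round, which is why the framework alternates between fitting density estimators and updating the labeled datalist.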
Other Notes

  • In addition to our D2ADA active learning method, we provide unpolished implementations of several active learning baselines. If you are interested in the topic, feel free to modify this code for further research.

  • Known Issue: We found that the program occasionally hangs while constructing the density estimators (GMMs). Specifically, the subprocess forked to fit the density estimator sometimes never terminates, so the main process waits for it indefinitely. As a workaround in our experiments, when this happens we press CTRL-C, then reload the checkpoint and datalist from the previous round to continue. If you know how to fix this, please let us know or send a pull request.
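One possible programmatic workaround (a sketch, not tested against this repo) is to run the fitting routine in a subprocess with a join timeout, terminating it if it hangs instead of waiting forever. The worker below is a trivial stand-in for the actual GMM-fitting code; fit_with_timeout and its return convention are assumptions for illustration.

```python
import multiprocessing as mp

def _fit_density_estimator(queue, data):
    # Stand-in for the real GMM-fitting routine; here it just returns the mean.
    queue.put(sum(data) / len(data))

def fit_with_timeout(data, timeout=10.0):
    """Run the fitting routine in a subprocess; kill it if it hangs."""
    queue = mp.Queue()
    proc = mp.Process(target=_fit_density_estimator, args=(queue, data))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():      # subprocess hung: terminate instead of waiting
        proc.terminate()
        proc.join()
        return None          # caller can retry or fall back to a checkpoint
    return queue.get()

result = fit_with_timeout([1.0, 2.0, 3.0])
```

On a timeout the caller could then reload the previous round's checkpoint and datalist automatically, mimicking the manual CTRL-C workaround described above.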

C. Testing (Demo) and Evaluation

Step0: Download Pretrained Models

Here we provide a number of pretrained models along with selected region lists.

  • checkpoint00.tar: Initial model (our warm-up model)
  • checkpoint01.tar ~ checkpoint05.tar: ADA models with 1% to 5% target annotations, respectively.
| Model | Benchmark | Download |
| --- | --- | --- |
| DeepLabV2-ResNet101 | GTA5 | Link |
| DeepLabV3Plus-ResNet101 | GTA5 | Link |
| DeepLabV2-ResNet101 | SYNTHIA | Link |
| DeepLabV3Plus-ResNet101 | SYNTHIA | Link |
Ways to analyze selected regions

datalist_0X.pkl contains information about the current labeled training set (the original GTA5/SYNTHIA dataset plus the incrementally selected Cityscapes regions). You can use the following example script to view or analyze our selected regions for further investigation or future research.

import pickle
fname = "datalist_01.pkl"
with open(fname, "rb") as f:
    data = pickle.load(f)
# dict_keys(['src_label_im_idx', 'trg_label_im_idx', 'trg_pool_im_idx', 'trg_label_suppix', 'trg_pool_suppix'])
# The data structure is the same as "dataloader/region_active_dataset.py"
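Building on the loading snippet above, here is a small analysis sketch over a toy dictionary that mimics the pickled structure. The keys match the comment above, but the exact value layout assumed here (image ID mapped to a list of region IDs) is an illustration; consult dataloader/region_active_dataset.py for the authoritative format.

```python
from collections import Counter

# Toy datalist mimicking datalist_0X.pkl; the real file is produced by
# train_ADA.py. Image IDs and region lists below are made up.
data = {
    "src_label_im_idx": ["gta5_0001", "gta5_0002"],
    "trg_label_im_idx": ["cs_0007"],
    "trg_pool_im_idx": ["cs_0008", "cs_0009"],
    "trg_label_suppix": {"cs_0007": [3, 5, 11]},              # labeled regions
    "trg_pool_suppix": {"cs_0008": [0, 1], "cs_0009": [2]},   # unlabeled pool
}

# How many target regions have been selected for labeling so far?
n_selected = sum(len(v) for v in data["trg_label_suppix"].values())

# Which target images contribute the most labeled regions?
per_image = Counter({k: len(v) for k, v in data["trg_label_suppix"].items()})
```

Swapping the toy dictionary for the real pickle load makes it easy to track, round by round, where the selection budget is being spent.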

Step1: Predict Semantic Labeling

python3 inference.py --trained_model_path <trained_model_path> [--save_dir INFERENCE_RESULT_DIR]

Step2: Compute mIoU

python3 evaluation.py --root_dir <Cityscapes data root> --pred_dir <saved predicted directory>
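For reference, the mIoU metric computed by evaluation.py works roughly as sketched below. This is a minimal illustration of the metric itself, not the repo's implementation: per-class IoU is intersection over union of predicted and ground-truth pixels, averaged over the classes that appear.

```python
def mean_iou(preds, labels, num_classes):
    """preds/labels: flat, equal-length lists of class IDs per pixel."""
    inter = [0] * num_classes
    union = [0] * num_classes
    for p, g in zip(preds, labels):
        if p == g:
            inter[g] += 1  # correct pixel counts toward both sets once
            union[g] += 1
        else:
            union[p] += 1  # wrong pixel enlarges both classes' unions
            union[g] += 1
    ious = [inter[c] / union[c] for c in range(num_classes) if union[c] > 0]
    return sum(ious) / len(ious)

# class 0: inter 1 / union 2 = 0.5; class 1: inter 2 / union 3 = 2/3
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```

The real evaluation additionally handles ignored labels and Cityscapes' 19-class mapping, so use evaluation.py for reported numbers.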

Citation

@article{wu2022d2ada,
  title={D2ADA: Dynamic Density-aware Active Domain Adaptation for Semantic Segmentation},
  author={Wu, Tsung-Han and Liou, Yi-Syuan and Yuan, Shao-Ji and Lee, Hsin-Ying and Chen, Tung-I and Huang, Kuan-Chih and Hsu, Winston H},
  journal={arXiv preprint arXiv:2202.06484},
  year={2022}
}
