Multiple Fusion Adaptation: A Strong Framework for Unsupervised Semantic Segmentation Adaptation (BMVC 2021, official Pytorch implementation)
[Teaser figure]

Kai Zhang, Yifan Sun, Rui Wang, Haichang Li and Xiaohui Hu.

Updates

17/2/2022

  1. Fixed a bug where the learning rate did not decay as expected after the warmup stage. Thanks to Pan for spotting the problem and suggesting a fix.

Abstract

This paper addresses the cross-domain semantic segmentation task, aiming to improve segmentation accuracy on an unlabeled target domain without incurring additional annotation cost. Building on the pseudo-label-based unsupervised domain adaptation (UDA) pipeline, we propose a novel and effective Multiple Fusion Adaptation (MFA) method. MFA considers three parallel information fusion strategies, i.e., cross-model fusion, temporal fusion, and a novel online-offline pseudo-label fusion. Specifically, the online-offline pseudo-label fusion encourages the adaptive training to pay additional attention to difficult regions that are easily ignored by offline pseudo labels, thereby retaining more informative details. While the other two fusion strategies may look standard, MFA makes significant efforts to raise the efficiency and effectiveness of their integration, and succeeds in injecting all three strategies into a unified framework. Experiments on two widely used benchmarks, i.e., GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes, show that our method significantly improves semantic segmentation adaptation and sets a new state of the art (58.2% and 62.5% mIoU, respectively).
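As a rough illustration of the online-offline pseudo-label fusion idea: offline pseudo labels typically mark low-confidence pixels with an ignore index, and confident online predictions from the current model can fill those regions. The sketch below is our own minimal interpretation, not the repository's actual implementation; `fuse_pseudo_labels` and the threshold value are assumptions:

```python
import torch

IGNORE_INDEX = 255  # conventional ignore label in Cityscapes-style datasets


def fuse_pseudo_labels(offline_label, online_logits, threshold=0.9):
    """Fill regions ignored by the offline pseudo label with confident
    online predictions (illustrative sketch, not the official MFA code).

    offline_label: (B, H, W) long tensor with IGNORE_INDEX at unlabeled pixels
    online_logits: (B, C, H, W) raw class scores from the current model
    """
    prob, online_label = torch.softmax(online_logits, dim=1).max(dim=1)
    fused = offline_label.clone()
    # Only overwrite pixels the offline label ignored, and only when the
    # online prediction is confident enough.
    fill = (offline_label == IGNORE_INDEX) & (prob >= threshold)
    fused[fill] = online_label[fill]
    return fused
```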

Installation

Install dependencies:

pip install -r requirements.txt

Data Preparation

Download Cityscapes, GTA5 and SYNTHIA-RAND-CITYSCAPES.
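Based on the paths mentioned elsewhere in this README, a plausible on-disk layout is sketched below (the exact locations of the GTA5 and SYNTHIA images are set in `configs/*.yml`, so check the configs against your setup):

```
./data/Cityscapes/       # Cityscapes data plus the downloaded pseudo labels
../pretrain/FDA/         # warmup model A
../pretrain/SIM/         # warmup model B
../cache/mfa_result/     # pretrained model for inference
```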

Inference Using Pretrained Model

1) GTA5 -> Cityscapes

Download the pretrained model (55.7 mIoU) and save it in ../cache/mfa_result. Then run the command

python test.py --config_file configs/mfa.yml

2) SYNTHIA -> Cityscapes

Download the pretrained model (58.7 mIoU for 13 categories) and save it in ../cache/mfa_syn_result. Then run the command

python test.py --config_file configs/mfa_syn.yml
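`test.py` reports mIoU; for reference, the standard mIoU metric over a per-class confusion matrix can be sketched as follows (a generic illustration, not code from this repository; `mean_iou` is a hypothetical helper):

```python
import numpy as np


def mean_iou(conf):
    """Mean IoU from a confusion matrix conf[gt_class, pred_class].

    IoU per class = TP / (TP + FP + FN); classes absent from both
    ground truth and predictions are excluded from the mean.
    """
    inter = np.diag(conf).astype(float)                      # true positives
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter      # TP + FP + FN
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return np.nanmean(iou)
```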

Training

To reproduce the reported performance, you need 2 GPUs with at least 15 GB of memory each.

1) GTA5 -> Cityscapes
  • SSL. Download warmup model A (trained with FDA) and save it in ../pretrain/FDA. Download warmup model B (trained with SIM) and save it in ../pretrain/SIM.
  • Download the pseudo labels and save them in ./data/Cityscapes/.
  • Train stage:
    python train.py --config_file configs/mfa.yml -g 2
2) SYNTHIA -> Cityscapes
  • SSL. Download warmup model A (trained with FDA) and save it in ../pretrain/FDA_synthia. Download warmup model B (trained with SIM) and save it in ../pretrain/SIM_synthia.
  • Download the pseudo labels and save them in ./data/Cityscapes/.
  • Train stage:
    python train.py --config_file configs/mfa_syn.yml -g 2
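Temporal fusion in pseudo-label UDA pipelines is commonly realized as an exponential moving average (EMA) of model weights across training steps; the sketch below shows that generic mechanism under our own assumptions (it is not guaranteed to match MFA's exact implementation, and `ema_update` is a hypothetical helper):

```python
import torch


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Parameter-wise EMA: teacher <- momentum * teacher + (1 - momentum) * student.

    The teacher accumulates a temporally smoothed copy of the student's
    weights and is typically used to generate more stable pseudo labels.
    """
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1 - momentum)
```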

Citation

If you find our work useful and use the code or models in your research, please cite it as follows.

@article{zhang2021multiple,
  title={Multiple Fusion Adaptation: A Strong Framework for Unsupervised Semantic Segmentation Adaptation},
  author={Zhang, Kai and Sun, Yifan and Wang, Rui and Li, Haichang and Hu, Xiaohui},
  journal={arXiv preprint arXiv:2112.00295},
  year={2021}
}

License

The code and the pretrained models in this repository are under the MIT license, as specified in the LICENSE file.

Acknowledgments

This code is adapted from image_seg.
We also thank the authors of FDA, SIM, and ProDa.
