
🔥 RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors (CVPR2023)

Python 3.8 | PyTorch 1.12.0

This is the official PyTorch code for the paper:

RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors
Ruiqi Wu, Zhengpeng Duan, Chunle Guo*, Zhi Chai, Chongyi Li (* indicates corresponding author)
The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023

(Framework overview figure)

[arXiv Paper] [Chinese Version (TBD)] [Website Page] [Dataset (pwd:qqqo)]

🚀 Highlights:

  • SOTA performance: The proposed RIDCP outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
  • Adjustable: RIDCP allows the degree of enhancement to be adjusted manually at inference time.

📄 Todo-list

  • Release code for VQGAN pre-training and CHM weight acquisition
  • Website page
  • Colab demo

Demo

Video examples

Dependencies and Installation

  • Ubuntu >= 18.04
  • CUDA >= 11.0
  • Other required packages in requirements.txt
# git clone this repository
git clone https://github.com/RQ-Wu/RIDCP.git
cd RIDCP

# create new anaconda env
conda create -n ridcp python=3.8
conda activate ridcp 

# install python dependencies
pip install -r requirements.txt
BASICSR_EXT=True python setup.py develop
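
After installation, a quick sanity check can confirm that PyTorch sees your GPU and that the basicsr package was set up correctly. This is an optional, illustrative check and not part of the official setup:

# optional sanity check: confirm the CUDA build of PyTorch and the basicsr install
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import basicsr; print('basicsr imported successfully')"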

Get Started

Prepare pretrained models & dataset

  1. Downloading pretrained checkpoints

| Model | Description | 🔗 Download Links |
| ----- | ----------- | ----------------- |
| HQPs  | VQGAN pretrained on high-quality data. | [Google Drive (TBD)] [Baidu Disk (pwd: huea)] |
| RIDCP | Dehazing network trained on data generated by our pipeline. | |
| CHM   | Weight for adjusting controllable HQPs matching. | |
  2. Preparing data for training

| Dataset | Description | 🔗 Download Links |
| ------- | ----------- | ----------------- |
| rgb_500 | 500 clear RGB images used as the input of our phenomenological degradation pipeline. | [Google Drive (TBD)] [Baidu Disk (pwd: qqqo)] |
| depth_500 | Corresponding depth maps generated by RA-Depth (https://github.com/hmhemu/RA-Depth). | |
| Flickr2K, DIV2K | High-quality data for VQGAN pre-training. | - |
  3. The final directory structure will be arranged as:
datasets
    |- clear_images_no_haze_no_dark_500
        |- xxx.jpg
        |- ...
    |- depth_500
        |- xxx.npy
        |- ...
    |- Flickr2K
    |- DIV2K

pretrained_models
    |- pretrained_HQPs.pth
    |- pretrained_RIDCP.pth
    |- weight_for_matching_dehazing_Flickr.pth
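
Before training or inference, it may help to confirm that the files landed in the locations listed above. The commands below are only an optional sanity check of that layout:

# optional sanity check of the directory layout shown above
ls datasets/clear_images_no_haze_no_dark_500 | head -n 3   # expect .jpg images
ls datasets/depth_500 | head -n 3                          # expect .npy depth maps
ls pretrained_models                                       # expect the three .pth checkpoints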

Quick demo

Run the demo to process the images in ./examples/ with the following command:

python inference_ridcp.py -i examples -w pretrained_models/pretrained_RIDCP.pth -o results --use_weight --alpha -21.25
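
The --alpha flag controls the degree of enhancement (see the "Adjustable" highlight above). One way to explore it is to sweep a few values and compare the outputs; the alpha values and output directories below are purely illustrative, while the other flags mirror the demo command above:

# illustrative sweep over the enhancement strength alpha
for a in -25.0 -21.25 -18.0; do
    python inference_ridcp.py -i examples -w pretrained_models/pretrained_RIDCP.pth -o results_alpha_${a} --use_weight --alpha ${a}
done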

Train RIDCP

Step 1: Pretrain a VQGAN on high-quality dataset

TBD

Step 2: Train our RIDCP

CUDA_VISIBLE_DEVICES=X,X,X,X python basicsr/train.py --opt options/RIDCP.yml
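
The command above uses the BasicSR training entry point. If you want an explicit distributed launch across the selected GPUs, BasicSR-style repositories usually accept a --launcher flag together with torch.distributed; the invocation below is an assumption based on standard BasicSR usage, so verify the exact arguments against basicsr/train.py before relying on it:

# assumed BasicSR-style distributed launch; verify flags against basicsr/train.py
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 \
    basicsr/train.py --opt options/RIDCP.yml --launcher pytorch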

Step 3: Adjust our RIDCP

TBD

Citation

If you find our repo useful for your research, please cite us:

@inproceedings{wu2023ridcp,
    title={RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors},
    author={Wu, Ruiqi and Duan, Zhengpeng and Guo, Chunle and Chai, Zhi and Li, Chongyi},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2023}
}

License

This project is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, for non-commercial use only. Any commercial use requires formal permission first.

Acknowledgement

This repository is maintained by Ruiqi Wu.