RONet

The official implementation of "Rank-One Network: An Effective Framework for Image Restoration" via TensorFlow

RONet logo

Content

  • Dependencies
  • Quick test
  • How to train RONet
  • Citation

Dependencies

We implemented the method on Ubuntu 16.04 with Python 3.6. Before training and testing, you need to create an environment via Anaconda (assuming it is already installed on your computer) and install CUDA 8.0 and cuDNN 7.1.3, as follows:

conda create -n RONet python=3.6
source activate RONet
conda install cudnn==7.1.3 # CUDA 8.0 will be installed as well.

In addition, install the following packages via pip install -r requirements.txt:

  • numpy==1.15.4
  • tensorflow==1.10.0
  • opencv-python==4.4.0
  • tqdm==4.48.2
  • scikit-image==0.17.1
  • Pillow==7.2.0

Finally, clone our code from the repository and start a quick test:

git clone https://github.com/shangqigao/RONet.git
cd RONet
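
Before running anything, it may help to check that this TensorFlow build can see the GPU. A minimal sketch using standard TF 1.x and NVIDIA tools (not part of the repository):

python -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"
nvidia-smi   # confirm the driver and GPU are visible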

Quick test

We tested our models on widely used benchmark datasets. You can download the collection from BaiduPan (extraction code: 6c1k) or OneDrive, and unzip it to the folder data/Test.
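
For example, assuming the downloaded archive is named benchmark.zip (the actual file name may differ), it could be unpacked like this:

mkdir -p data/Test
unzip benchmark.zip -d data/Test   # the test commands below expect data/Test/benchmark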

Noise-free image super-resolution

Following EDSR, we tested our models on four widely used benchmark datasets.

We provide the pre-trained model RONet-NF; please unzip it to the folder models.

Model     Upscale  Parameters  BaiduPan           OneDrive
RONet-NF  x4       5.0M        link (code: 0m12)  link

We also provide the script demo.sh for testing. Please uncomment the following command in src/demo.sh:

python RONet_test.py --dataset Set5 --input_data_dir ../data/Test/benchmark --task BiSR --upscale 4 --net_type net_sr --depth_RODec 3 --depth_RecROs 3 --depth_RecRes 6 --depth_RecFus 3 --out_channel 3 --RONet_checkpoint ../models/RONet-NF/model --save_dir ../results --GPU_ids 0

Then test RONet-NF on Set5 as follows:

cd src
sh demo.sh
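
To run the same model on the other benchmarks, you can vary --dataset in the command above. A sketch, assuming the EDSR benchmarks are stored under data/Test/benchmark with the folder names Set5, Set14, B100, and Urban100 (these names are an assumption):

cd src
# Hypothetical loop over four benchmark folders; names must match those in data/Test/benchmark.
for DATASET in Set5 Set14 B100 Urban100; do
    python RONet_test.py --dataset $DATASET --input_data_dir ../data/Test/benchmark \
        --task BiSR --upscale 4 --net_type net_sr --depth_RODec 3 --depth_RecROs 3 \
        --depth_RecRes 6 --depth_RecFus 3 --out_channel 3 \
        --RONet_checkpoint ../models/RONet-NF/model --save_dir ../results --GPU_ids 0
done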

Realistic image super-resolution

Following Track 2 (realistic image SR) of NTIRE 2018, we used DIV2K_valid_LR_mild to test our models. We provide the pre-trained model RONet-R; please unzip it to the folder models.

Model    Upscale  Parameters  BaiduPan           OneDrive
RONet-R  x4       5.0M        link (code: r5d9)  link

We also provide the script demo.sh for testing. Please uncomment the following command in src/demo.sh:

python RONet_test.py --dataset DIV2K_mild --input_data_dir ../data/Test/benchmark --task ReSR --upscale 4 --net_type net_sr --depth_RODec 3 --depth_RecROs 3 --depth_RecRes 6 --depth_RecFus 3 --out_channel 3 --RONet_checkpoint ../models/RONet-R/model --save_dir ../results --ensemble --GPU_ids 0

Then test RONet-R on DIV2K_valid_LR_mild as follows:

cd src
sh demo.sh
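
The command above enables --ensemble. To gauge its effect, you could run the same checkpoint once more without the flag, writing to a separate folder; a sketch (the output folder name is hypothetical):

cd src
# Same model without --ensemble, saved separately for comparison.
python RONet_test.py --dataset DIV2K_mild --input_data_dir ../data/Test/benchmark \
    --task ReSR --upscale 4 --net_type net_sr --depth_RODec 3 --depth_RecROs 3 \
    --depth_RecRes 6 --depth_RecFus 3 --out_channel 3 \
    --RONet_checkpoint ../models/RONet-R/model --save_dir ../results_no_ensemble --GPU_ids 0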

Gray-scale image denoising

We tested our models on three widely used datasets.

We provide the pre-trained models RONet-G_sigmaxx, where xx denotes the noise level; please unzip them to the folder models.

Model            Noise level  Parameters  BaiduPan           OneDrive
RONet-G_sigma15  15           2.01M       link (code: t4i7)  link
RONet-G_sigma25  25           2.01M       link (code: l1bf)  link
RONet-G_sigma35  35           2.01M       link (code: iw2w)  link
RONet-G_sigma50  50           2.01M       link (code: xw55)  link

We also provide the script demo.sh for testing. Please uncomment the following command in src/demo.sh:

python RONet_test.py --dataset RNI6 --input_data_dir ../data/Test/benchmark --task DEN --net_type net_den --deep_scale 48 --depth_RODec 1 --depth_RecROs 3 --depth_RecRes 6 --depth_RecFus 3 --out_channel 1 --RONet_checkpoint ../models/RONet-G_sigma50/model --save_dir ../results --sigma 50 --GPU_ids 0

Then test RONet-G_sigma50 on RNI6 with a noise level of 50, as follows:

cd src
sh demo.sh
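
Each RONet-G checkpoint in the table is trained for a single noise level, so the checkpoint and --sigma should change together. A sketch that loops over all four provided models (the per-level output folders are hypothetical):

cd src
for SIGMA in 15 25 35 50; do
    python RONet_test.py --dataset RNI6 --input_data_dir ../data/Test/benchmark --task DEN \
        --net_type net_den --deep_scale 48 --depth_RODec 1 --depth_RecROs 3 --depth_RecRes 6 \
        --depth_RecFus 3 --out_channel 1 --RONet_checkpoint ../models/RONet-G_sigma$SIGMA/model \
        --save_dir ../results_sigma$SIGMA --sigma $SIGMA --GPU_ids 0
done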

Color image denoising

We tested our models on three widely used datasets.

We provide the pre-trained model RONet-C; please unzip it to the folder models.

Model    Noise level  Parameters  BaiduPan           OneDrive
RONet-C  [0, 75]      2.03M       link (code: 45iv)  link

We also provide the script demo.sh for testing. Please uncomment the following command in src/demo.sh:

python RONet_test.py --dataset CBSD68 --input_data_dir ../data/Test/benchmark --task DEN --net_type net_den --deep_scale 16 --depth_RODec 1 --depth_RecROs 3 --depth_RecRes 6 --depth_RecFus 3 --out_channel 3 --RONet_checkpoint ../models/RONet-C/model --save_dir ../results --sigma 50 --GPU_ids 0

Then test RONet-C on CBSD68 with a noise level of 50, as follows:

cd src
sh demo.sh
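
Since RONet-C covers noise levels in [0, 75] with a single checkpoint, the same model can be reused at different --sigma values; a sketch (the output folders are hypothetical):

cd src
for SIGMA in 15 25 50 75; do
    python RONet_test.py --dataset CBSD68 --input_data_dir ../data/Test/benchmark --task DEN \
        --net_type net_den --deep_scale 16 --depth_RODec 1 --depth_RecROs 3 --depth_RecRes 6 \
        --depth_RecFus 3 --out_channel 3 --RONet_checkpoint ../models/RONet-C/model \
        --save_dir ../results_sigma$SIGMA --sigma $SIGMA --GPU_ids 0
done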

How to train RONet

We adopted a two-step strategy to train RONet: (1) pre-train the RO decomposition network (RODec), and (2) fix RODec and train the RO reconstruction network (RORec). You can download the training datasets from the official website and put them in the folder data/Train. Please also download the VGG19 checkpoint and put vgg_19.ckpt in the folder models.
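
For reference, a sketch of one way to arrange the downloads; the archive names below are assumptions based on the standard DIV2K and TF-Slim releases, not files shipped with this repository:

mkdir -p data/Train models
# DIV2K training sets (archive names assumed from the official release)
unzip DIV2K_train_HR.zip -d data/Train
unzip DIV2K_train_LR_bicubic_X4.zip -d data/Train
unzip DIV2K_train_LR_mild.zip -d data/Train
# VGG19 checkpoint from the TF-Slim model zoo (archive name assumed)
tar -xzf vgg_19_2016_08_28.tar.gz -C models   # should yield models/vgg_19.ckpt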

Pre-train RODec

  • Dataset: DIV2K_train_HR
  • Strategy: unsupervised training
  • Demo: please uncomment the following code in the provided script (src/demo.sh).
nohup python -u RODec_train.py --train_mode unsupervised --out_channel 3 --input_data_dir ../data/Train --augment --log_dir ../logs/UROD-C --GPU_ids 0 >out &

Then train RODec in color mode as follows:

cd src
sh demo.sh
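
The command above trains the color model UROD-C. A grayscale model (UROD-G, referenced by the grayscale denoising training below) could presumably be trained by switching the output channels and log directory; a sketch with assumed flag values:

nohup python -u RODec_train.py --train_mode unsupervised --out_channel 1 --input_data_dir ../data/Train --augment --log_dir ../logs/UROD-G --GPU_ids 0 >out &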

We also provide pre-trained RODec models for grayscale and color images:

Model   Mode       Strategy      BaiduPan           OneDrive
UROD-G  Grayscale  Unsupervised  link (code: pa2z)  link
UROD-C  RGB        Unsupervised  link (code: ghp7)  link

Train RORec

  • Noise-free image super-resolution
    • Dataset: DIV2K_train_LR_bicubic_X4
    • Strategy: supervised learning
    • Demo: please uncomment the following code in the provided script (src/demo.sh).
nohup python -u RONet_train.py --input_data_dir ../data/Train --augment --task BiSR --upscale 4 --net_type net_sr --depth_RODec 3 --depth_RecROs 3 --depth_RecRes 6 --depth_RecFus 3 --out_channel 3 --vgg_checkpoint ../models/vgg_19.ckpt --RODec_checkpoint ../models/UROD-C/model --log_dir ../logs/RONet-NF --GPU_ids 0 >out &

Then, you can train a RONet-NF model using the following commands:

cd src
sh demo.sh

You can check logging information via vim out.
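
You can also follow the log as it grows and, if the training script writes TensorFlow summaries to --log_dir (an assumption, not verified here), inspect them with TensorBoard:

tail -f out                             # follow the training log
tensorboard --logdir ../logs/RONet-NF   # only useful if summaries are written there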

  • Realistic image super-resolution
    • Dataset: DIV2K_train_LR_mild
    • Strategy: supervised learning
    • Demo: please uncomment the following code in the provided script (src/demo.sh).
nohup python -u RONet_train.py --input_data_dir ../data/Train --augment --task ReSR --upscale 4 --net_type net_sr --depth_RODec 3 --depth_RecROs 3 --depth_RecRes 6 --depth_RecFus 3 --out_channel 3 --vgg_checkpoint ../models/vgg_19.ckpt --RODec_checkpoint ../models/UROD-C/model --log_dir ../logs/RONet-R --GPU_ids 0 >out &

Then, you can train a RONet-R model using the following commands:

cd src
sh demo.sh
  • Gray-scale image denoising
    • Dataset: DIV2K_train_HR
    • Strategy: supervised learning
    • Demo: please uncomment the following code in the provided script (src/demo.sh).
nohup python -u RONet_train.py --input_data_dir ../data/Train --augment --task DEN --net_type net_den --deep_scale 48 --depth_RODec 1 --depth_RecROs 3 --depth_RecRes 6 --depth_RecFus 3 --out_channel 1 --RODec_checkpoint ../models/UROD-G/model --sigma 50 --log_dir ../logs/RONet-G_sigma50 --GPU_ids 0 >out &

Then, you can train a RONet-G model for a noise level of 50 using the following commands:

cd src
sh demo.sh
  • Color image denoising
    • Dataset: DIV2K_train_HR
    • Strategy: supervised learning
    • Demo: please uncomment the following code in the provided script (src/demo.sh).
nohup python -u RONet_train.py --input_data_dir ../data/Train --augment --task DEN --net_type net_den --deep_scale 16 --depth_RODec 1 --depth_RecROs 3 --depth_RecRes 6 --depth_RecFus 3 --out_channel 3 --sigma 75 --range --RODec_checkpoint ../models/UROD-C/model --log_dir ../models/RONet-C --GPU_ids 0 >out &

Then, you can train a RONet-C model for noise levels in [0, 75] using the following commands:

cd src
sh demo.sh

Citation

If our work is useful in your research or publication, please cite the work:

[1] Shangqi Gao, and Xiahai Zhuang, "Rank-One Network: An Effective Framework for Image Restoration", TPAMI, 2020. [arXiv] [TPAMI]

@Article{ronet/tpami/2020,
    title={Rank-One Network: An Effective Framework for Image Restoration},
    author={S. {Gao} and X. {Zhuang}},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
    doi={10.1109/TPAMI.2020.3046476},
    number={xxx},
    volume={xxx},
    year={2020},
    pages={xx--xx}
}

Please don't hesitate to contact us via shqgao@163.com or zxh@fudan.edu.cn if you have any questions.
