RFormer: Transformer-based Generative Adversarial Network for Real Fundus Image Restoration on A New Clinical Benchmark (J-BHI 2022)

Zhuo Deng, Yuanhao Cai, Lu Chen, Zheng Gong, Qiqi Bao, Xue Yao, Dong Fang, Wenming Yang, Shaochong Zhang, Lan Ma

The first two authors contributed equally to this work.

News

  • 2022.06.28: Data, code, and models have been released. 🐌
  • 2022.06.22: Our paper has been accepted by J-BHI 2022. 🐌

Abstract: Ophthalmologists use fundus images to screen for and diagnose eye diseases. However, differences in equipment and ophthalmologists introduce large variations in the quality of fundus images. Low-quality (LQ) degraded fundus images cause uncertainty in clinical screening and generally increase the risk of misdiagnosis. Real fundus image restoration is therefore worth studying. Unfortunately, no real clinical benchmark has been established for this task so far. In this paper, we investigate the real clinical fundus image restoration problem. First, we establish a clinical dataset, Real Fundus (RF), comprising 120 low-quality (LQ) and high-quality (HQ) image pairs. We then propose a novel Transformer-based Generative Adversarial Network (RFormer) to restore the real degradation of clinical fundus images. The key component of our network is the Window-based Self-Attention Block (WSAB), which captures non-local self-similarity and long-range dependencies. To produce more visually pleasing results, a Transformer-based discriminator is introduced. Extensive experiments on our clinical benchmark show that the proposed RFormer significantly outperforms state-of-the-art (SOTA) methods. In addition, experiments on downstream tasks such as vessel segmentation and optic disc/cup detection demonstrate that RFormer benefits clinical fundus image analysis and applications.


Real Fundus


Real Fundus consists of 120 LQ and HQ clinical fundus image pairs with a spatial size of 2560 $\times$ 2560.
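For reference, here is a minimal snippet for inspecting an image pair after unzipping the dataset (see step 2 below). The directory layout and file patterns are assumptions, since the exact structure depends on the released archive; adjust them to match Real_Fundus.zip:

```python
from pathlib import Path
from PIL import Image

# Assumed layout: LQ/HQ subfolders under ./datasets/Real_Fundus/.
# The glob patterns below are hypothetical; adjust to the unzipped archive.
root = Path("./datasets/Real_Fundus")
lq_paths = sorted(root.glob("**/lq/*.png"))
hq_paths = sorted(root.glob("**/hq/*.png"))

lq, hq = Image.open(lq_paths[0]), Image.open(hq_paths[0])
print(lq.size, hq.size)  # expected: (2560, 2560) for both
```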

Network Architecture

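The core component of the generator is the Window-based Self-Attention Block (WSAB), which restricts multi-head self-attention to non-overlapping local windows. As an illustration of that general mechanism (not the paper's exact block), here is a minimal PyTorch sketch; the class name, shapes, and hyper-parameters are ours, and details such as relative position bias and the feed-forward part are omitted:

```python
import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    """Multi-head self-attention inside non-overlapping windows (sketch)."""
    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C); H and W divisible by window_size
        B, H, W, C = x.shape
        s = self.window_size
        # Partition the feature map into (B * num_windows, s*s, C) token sequences.
        x = x.view(B, H // s, s, W // s, s, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, s * s, C)
        # Attention is computed only among tokens within the same window.
        x, _ = self.attn(x, x, x)
        # Reverse the partition back to (B, H, W, C).
        x = x.view(B, H // s, W // s, s, s, C)
        return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

# Example: a 128 x 128 feature map with 32 channels.
feat = torch.randn(1, 128, 128, 32)
out = WindowSelfAttention(dim=32, window_size=8, num_heads=4)(feat)
print(out.shape)  # torch.Size([1, 128, 128, 32])
```

Windowing keeps the attention cost linear in image size while still modeling long-range dependencies within each window, which is what makes it practical for large fundus images.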

Comparison with State-of-the-art Methods

This repo serves as a baseline and toolbox containing 8 algorithms for real fundus image restoration.

We are going to enlarge our model zoo in the future.

Quantitative Comparison on Real Fundus

| Method | Params (M) | FLOPS (G) | PSNR | SSIM | Model Zoo |
| :--- | ---: | ---: | ---: | ---: | :--- |
| Cofe-Net | 39.31 | 22.48 | 17.26 | 0.789 | |
| GLCAE | --- | --- | 21.37 | 0.570 | |
| I-SECRET | 10.85 | 14.21 | 24.57 | 0.854 | |
| Bicubic+RL | --- | --- | 25.34 | 0.824 | |
| ESRGAN | 15.95 | 18.41 | 26.73 | 0.823 | |
| RealSR | 15.92 | 29.42 | 27.99 | 0.850 | |
| MST | 3.48 | 3.59 | 28.13 | 0.854 | |
| RFormer | 21.11 | 11.36 | 28.32 | 0.873 | Baidu Disk |

FLOPS are measured with an input size of 128 $\times$ 128. For GAN-based methods, only the Params of the generators are reported.

Note: access code for Baidu Disk is fd11

1. Create Environment

  • Python 3 (Anaconda is recommended)
  • NVIDIA GPU + CUDA
  • Python packages:
cd /Real-Fundus/
pip install -r requirements.txt
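The released checkpoint (rformer.pth) is a PyTorch .pth file, so assuming requirements.txt installs PyTorch, you can quickly confirm that the GPU is visible:

```python
import torch

print(torch.__version__)
print(torch.cuda.is_available())  # should print True on a working CUDA setup
```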

2. Prepare Dataset

  • Download Real Fundus and unzip Real_Fundus.zip into ./datasets/Real_Fundus/
  • Randomly divide Real Fundus into training, validation, and testing images. The default split is training : validation : testing = 81 : 9 : 30.
cd /Real-Fundus/datasets/
python3 generate_dataset.py
  • Crop the training and validation images into 128 $\times$ 128 patches, generating train_dataset in ./datasets/train_dataset/ and val_dataset in ./datasets/val_dataset/ (a sketch of both preprocessing steps follows this list).
python3 generate_patches.py
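For intuition, here is a minimal sketch of what the two preprocessing steps amount to: a random 81 : 9 : 30 split of the image pairs, followed by non-overlapping 128 $\times$ 128 cropping. This only illustrates the idea; generate_dataset.py and generate_patches.py remain the reference implementations, and the pair/array handling below is our assumption:

```python
import random

def split_pairs(pairs, n_train=81, n_val=9, seed=0):
    """Randomly split LQ/HQ pairs into train/val/test (81 : 9 : 30 by default)."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])

def crop_patches(img, patch=128):
    """Yield non-overlapping patch x patch crops from an (H, W, C) array."""
    h, w = img.shape[:2]
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            yield img[top:top + patch, left:left + patch]
```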

3. Training

To train a model, run

cd /Real-Fundus/
python3 ./train_code/train.py

Please note that hyper-parameters, such as the paths of the training and validation data, can be changed in ./train_code/train.yml.
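For orientation, a config of roughly this shape is what you would expect to find there; the key names and values below are illustrative guesses, not the actual schema of train.yml, so check the shipped file for the real names:

```yaml
# Illustrative only -- these keys and values are hypothetical, not the repo's schema.
train_dir: ./datasets/train_dataset/
val_dir: ./datasets/val_dataset/
batch_size: 8          # hypothetical value
learning_rate: 2.0e-4  # hypothetical value
```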

4. Testing

To test a trained model, run

cd /Real-Fundus/
python3 ./test_code/test.py

5. Evaluation on the Test Set

(1) Download the pretrained model (Baidu Disk, access code: fd11) and place it in /Real-Fundus/test_code/model_zoo/.

(2) To test the pretrained model, run

cd /Real-Fundus/
python3 ./test_code/test.py --weights ./test_code/model_zoo/rformer.pth
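If you want to recompute PSNR/SSIM for restored outputs yourself (the repo's test script may already report them), here is a minimal sketch using scikit-image; pairing restored images with their HQ references is left to you:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored, reference):
    """PSNR/SSIM between two uint8 RGB images of the same size."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, data_range=255,
                                 channel_axis=-1)
    return psnr, ssim

# Example with random arrays standing in for real image pairs.
a = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
print(evaluate_pair(a, a))  # identical images: PSNR = inf, SSIM = 1.0
```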

6. Citation

If this repo helps you, please consider citing our work:

@article{deng2022rformer,
  title={Rformer: Transformer-based generative adversarial network for real fundus image restoration on a new clinical benchmark},
  author={Deng, Zhuo and Cai, Yuanhao and Chen, Lu and Gong, Zheng and Bao, Qiqi and Yao, Xue and Fang, Dong and Yang, Wenming and Zhang, Shaochong and Ma, Lan},
  journal={IEEE Journal of Biomedical and Health Informatics},
  year={2022},
  publisher={IEEE}
}

If you have any questions, please contact me at dz20@mails.tsinghua.edu.cn.
