Reproduce the SIDD dataset results

1. Data Preparation

Download the train set and place it in ./datasets/SIDD/Data:
  • Google Drive or Baidu Netdisk
  • Run python scripts/data_preparation/sidd.py to crop the training image pairs into 512x512 patches and convert the data to LMDB format.
Download the evaluation data (in LMDB format) and place it in ./datasets/SIDD/val/:
  • Google Drive or Baidu Netdisk
  • It should look like ./datasets/SIDD/val/input_crops.lmdb and ./datasets/SIDD/val/gt_crops.lmdb (a quick sanity check is sketched after this list).
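
If you want to verify the prepared evaluation data before training, here is a minimal sketch, assuming the lmdb Python package is installed and the default paths above are used:

    # Sanity check for the prepared LMDB data (a sketch; adjust the paths if you
    # changed the defaults in this guide).
    import lmdb

    for path in ('./datasets/SIDD/val/input_crops.lmdb',
                 './datasets/SIDD/val/gt_crops.lmdb'):
        env = lmdb.open(path, readonly=True, lock=False)  # open without a write lock
        with env.begin() as txn:
            print(path, '->', txn.stat()['entries'], 'entries')  # number of stored crops
        env.close()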

2. Training

  • NAFNet-SIDD-width32:

    python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/NAFNet-width32.yml --launcher pytorch
    
  • NAFNet-SIDD-width64:

    python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/NAFNet-width64.yml --launcher pytorch
    
  • Baseline-SIDD-width32:

    python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/Baseline-width32.yml --launcher pytorch
    
  • Baseline-SIDD-width64:

    python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/Baseline-width64.yml --launcher pytorch
    
  • Training uses 8 GPUs by default. Set --nproc_per_node to the number of available GPUs for distributed training.
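
Before launching a long run, it can help to peek at the chosen option file. Below is a minimal sketch, assuming PyYAML is installed; the printed keys are only examples and may differ from the actual config:

    # Peek at a training option file before launching (a sketch; assumes PyYAML).
    import yaml

    with open('options/train/SIDD/NAFNet-width32.yml') as f:
        opt = yaml.safe_load(f)

    # Confirm the top-level sections and GPU count before committing to a run
    # (the exact key names depend on the config file).
    print(list(opt.keys()))
    print('num_gpu:', opt.get('num_gpu'))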

3. Evaluation

Download the pretrained models and place them in ./experiments/pretrained_models/.
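
To confirm a downloaded checkpoint is readable before testing, here is a minimal sketch, assuming PyTorch is installed; the file name is only an example and the checkpoint's key layout may vary:

    # Quick load check for a downloaded checkpoint (a sketch; replace the file
    # name with the model you actually downloaded).
    import torch

    ckpt = torch.load('./experiments/pretrained_models/NAFNet-SIDD-width32.pth',
                      map_location='cpu')
    print(type(ckpt))
    if isinstance(ckpt, dict):
        print(list(ckpt.keys())[:5])  # key names vary between checkpoints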
Testing on the SIDD dataset:
  • NAFNet-SIDD-width32:

    python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/NAFNet-width32.yml --launcher pytorch

  • NAFNet-SIDD-width64:

    python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/NAFNet-width64.yml --launcher pytorch

  • Baseline-SIDD-width32:

    python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/Baseline-width32.yml --launcher pytorch

  • Baseline-SIDD-width64:

    python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/Baseline-width64.yml --launcher pytorch
  • Testing uses a single GPU by default. Set --nproc_per_node to the number of GPUs for distributed evaluation.
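
To compare a restored image with its ground truth outside the test script, here is a minimal sketch, assuming scikit-image is installed; the file paths are placeholders, not outputs produced by this repository:

    # Compute PSNR/SSIM for one 8-bit image pair (a sketch; the paths are placeholders).
    from skimage.io import imread
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    gt = imread('gt.png')              # clean reference image (placeholder path)
    restored = imread('restored.png')  # denoised output (placeholder path)

    psnr = peak_signal_noise_ratio(gt, restored, data_range=255)
    # channel_axis=-1 needs scikit-image >= 0.19; older versions use multichannel=True.
    ssim = structural_similarity(gt, restored, channel_axis=-1, data_range=255)
    print(f'PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}')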