
CURL: Neural Curve Layers for Global Image Enhancement (ICPR 2020)

Sean Moran, Steven McDonagh, Greg Slabaugh

Huawei Noah's Ark Lab

Repository links for the paper CURL: Neural Curve Layers for Global Image Enhancement. In this repository you will find a link to the code and information on the datasets. Please raise a Github issue if you need assistance or have any questions on the research.

BATCH SIZE: Note this code is designed for a batch size of 1 and needs to be re-engineered to support higher batch sizes; larger batch sizes are not currently supported. To replicate our reported results, please use a batch size of 1 only. If you do have a patch for CURL that supports higher batch sizes, please raise a pull request on this repo and we will integrate it.

UPDATE 30th May 2022: Github user mahdip72 has kindly provided a refactored version of CURL. See Issue 31. A copy can also be found in CURL_refactored.gz. Note the authors of the paper have not tested this version of CURL.

UPDATE 19th April 2022: Github user barbodpj has kindly provided a batch > 1 version of CURL. See Issue 27. A copy can also be found in CURL_large_batch.tar.gz. Note the authors of the paper have not tested this version of CURL.

Example results (left to right): Input | Label | Ours (CURL)

Requirements

requirements.txt contains the Python packages used by the code.

How to train CURL and use the model for inference

Training CURL

Instructions:

To get this code working on your system/problem, you will need to edit the data loading functions as follows:

  1. main.py, change the paths for the data directories to point to your data directory
  2. data.py, lines 248, 256, change the folder names of the data input and output directories to point to your folder names
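
For orientation only, these edits amount to pointing a few directory-path strings at your own folders. The sketch below is purely hypothetical; the actual variable names (and the line numbers in data.py) may differ in the repository, so treat it as an illustration of the kind of change:

```python
# Hypothetical illustration only -- the real variable names in main.py / data.py may differ.
# The idea is simply to point the data loading code at your own folders.
training_data_dirpath = "/path/to/your/dataset/"   # data directory path edited in main.py
input_folder_name = "input"      # folder holding the input images (edited in data.py)
output_folder_name = "output"    # folder holding the groundtruth images (edited in data.py)
```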

To train, run the command:

python3 main.py

Inference - Using Pre-trained Models for Prediction

The directory pretrained_models contains a CURL model pre-trained on the Adobe5K_DPE dataset. The model with the highest validation dataset PSNR (23.07 dB; 23.58 dB on the test dataset) is from epoch 510:

  • curl_validpsnr_23.073045286204017_validloss_0.0701291635632515_testpsnr_23.584083321292365_testloss_0.061363041400909424_epoch_510_model.pt

This pre-trained CURL model obtains 23.58dB on the test dataset for Adobe DPE.
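
If you just want to sanity-check the checkpoint file outside of main.py, a minimal sketch is below. It assumes only that the .pt file can be opened with torch.load; it is not how the repository itself restores the model:

```python
import torch

# Hypothetical sanity check of the pretrained checkpoint; not the repository's own loading code.
ckpt_path = ("./pretrained_models/curl_validpsnr_23.073045286204017_validloss_"
             "0.0701291635632515_testpsnr_23.584083321292365_testloss_"
             "0.061363041400909424_epoch_510_model.pt")

checkpoint = torch.load(ckpt_path, map_location="cpu")
if isinstance(checkpoint, dict):
    # A state dict (or a dict wrapping one): list a few of the stored entries.
    print(f"dict checkpoint with {len(checkpoint)} entries, e.g. {list(checkpoint)[:5]}")
else:
    # A pickled model object: report its type.
    print(f"model object of type {type(checkpoint)}")
```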

To use this model for inference:

  1. Place the images you wish to infer on in a directory, e.g. ./adobe5k_dpe/curl_example_test_input/. Make sure the word "input" appears somewhere in the directory path.
  2. Place the images you wish to use as groundtruth in a directory, e.g. ./adobe5k_dpe/curl_example_test_output/. Make sure the word "output" appears somewhere in the directory path.
  3. Place the names of the images (without extension) in a text file located in the directory above the image directories, i.e. in ./adobe5k_dpe/, e.g. ./adobe5k_dpe/images_inference.txt
  4. Run the following command; the results will appear in a timestamped directory alongside main.py:
python3 main.py --inference_img_dirpath=./adobe5k_dpe/ --checkpoint_filepath=./pretrained_models/curl_validpsnr_23.073045286204017_validloss_0.0701291635632515_testpsnr_23.584083321292365_testloss_0.061363041400909424_epoch_510_model.pt
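
A minimal sketch of how the inference directories and image list described in steps 1-3 could be prepared is below; the directory names are the examples from this README, and the file extensions are assumptions, so adapt as needed:

```python
import os

# Hypothetical helper that prepares the layout described in steps 1-3 above.
base_dir = "./adobe5k_dpe"
input_dir = os.path.join(base_dir, "curl_example_test_input")    # path must contain "input"
output_dir = os.path.join(base_dir, "curl_example_test_output")  # path must contain "output"

# Copy your test / groundtruth images into input_dir and output_dir, then write
# the image names (without extensions) to images_inference.txt, one per line,
# in the directory above the image directories.
names = sorted(os.path.splitext(f)[0] for f in os.listdir(input_dir)
               if f.lower().endswith((".png", ".jpg", ".tif")))
with open(os.path.join(base_dir, "images_inference.txt"), "w") as fh:
    fh.write("\n".join(names) + "\n")
```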

CURL for RGB images

  • rgb_ted.py contains the TED model for RGB images

CURL for RAW images

  • raw_ted.py contains the TED model for RAW images

Github user contributions

CURL_for_RGB_images.zip is a contribution (RGB model and pre-trained weights) courtesy of Github user hermosayhl

Bibtex

If you use ideas from the paper in your research, please kindly consider citing as below:

@INPROCEEDINGS{moran2020curl,
  author={Moran, Sean and McDonagh, Steven and Slabaugh, Gregory},
  booktitle={2020 25th International Conference on Pattern Recognition (ICPR)}, 
  title={CURL: Neural Curve Layers for Global Image Enhancement}, 
  year={2021},
  volume={},
  number={},
  pages={9796-9803},
  doi={10.1109/ICPR48806.2021.9412677}}

Datasets

  • Samsung S7 (110 images, RAW, RGB pairs): this dataset can be downloaded here. The validation and testing images are listed below; the remaining images serve as our training dataset. For all results in the paper we use random crops of patch size 512x512 pixels during training (see the paired random-crop sketch after this dataset list).

    • Validation Dataset Images

      • S7-ISP-Dataset-20161110_125321
      • S7-ISP-Dataset-20161109_131627
      • S7-ISP-Dataset-20161109_225318
      • S7-ISP-Dataset-20161110_124727
      • S7-ISP-Dataset-20161109_130903
      • S7-ISP-Dataset-20161109_222408
      • S7-ISP-Dataset-20161107_234316
      • S7-ISP-Dataset-20161109_132214
      • S7-ISP-Dataset-20161109_161410
      • S7-ISP-Dataset-20161109_140043
    • Test Dataset Images

      • S7-ISP-Dataset-20161110_130812
      • S7-ISP-Dataset-20161110_120803
      • S7-ISP-Dataset-20161109_224347
      • S7-ISP-Dataset-20161109_155348
      • S7-ISP-Dataset-20161110_122918
      • S7-ISP-Dataset-20161109_183259
      • S7-ISP-Dataset-20161109_184304
      • S7-ISP-Dataset-20161109_131033
      • S7-ISP-Dataset-20161110_130117
      • S7-ISP-Dataset-20161109_134017
  • Adobe-DPE (5000 images, RGB, RGB pairs): this dataset can be downloaded here. After downloading this dataset you will need to use Lightroom to pre-process the images according to the procedure outlined in the DeepPhotoEnhancer (DPE) paper. Please see the issue here for instructions. Artist C retouching is used as the groundtruth/target. Note that the images should be extracted in sRGB space. Feel free to raise a Github issue if you need assistance with this (or indeed the Adobe-UPE dataset below). You can also find the training, validation and testing dataset splits for Adobe-DPE in the following file.

  • Adobe-UPE (5000 images, RGB, RGB pairs): this dataset can be downloaded here. As above, you will need to use Lightroom to pre-process the images according to the procedure outlined in the Underexposed Photo Enhancement Using Deep Illumination Estimation (DeepUPE) paper and detailed in the issue here. Artist C retouching is used as the groundtruth/target. You can find the test images for the Adobe-UPE dataset at this link.
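
For reference, the random 512x512 training crops mentioned for the Samsung S7 dataset can be illustrated with a minimal paired-crop sketch. This is not the repository's data loader (data.py is the reference), just an illustration of taking the same random patch from an input/groundtruth pair:

```python
import random

def paired_random_crop(img, gt, patch=512):
    """Take the same random patch x patch crop from an input/groundtruth pair.

    img and gt are assumed to be HxWxC arrays (e.g. numpy) with identical
    spatial size of at least patch x patch; illustrative only.
    """
    h, w = img.shape[:2]
    top = random.randint(0, h - patch)
    left = random.randint(0, w - patch)
    return (img[top:top + patch, left:left + patch],
            gt[top:top + patch, left:left + patch])
```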

License

BSD-3-Clause License

Contributions

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR, because we might be taking the core in a different direction than you might be aware of.