
ARCNN-keras


A tf-keras implementation of ARCNN, as described in "Deep Convolution Networks for Compression Artifacts Reduction".

ARCNN-keras network notebook (Open in Colab)

Requirements:

A Dockerfile with all required libraries is included in the repository.

Dataset:

The scripts can train on any folder of images, as long as:

  1. All images are of the same dimensions
  2. All images are of the same file format

Example: a folder where all images are PNGs at 720p (1280x720). A quick consistency check is sketched at the end of this section.

DIV2K is recommended; use the HR or LR set, whichever is the closer match to your target inference domain.
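
A quick way to verify both rules before training. This is a hypothetical helper, not part of the repo, and it assumes Pillow is available:

```python
from pathlib import Path
from PIL import Image  # assumes Pillow is installed

def check_folder(folder, file_format="png"):
    """Verify every image in `folder` shares one format and one resolution."""
    paths = sorted(Path(folder).glob(f"*.{file_format}"))
    assert paths, f"No .{file_format} images found in {folder}"
    sizes = {Image.open(p).size for p in paths}
    assert len(sizes) == 1, f"Mixed dimensions found: {sizes}"
    print(f"OK: {len(paths)} .{file_format} images, all {sizes.pop()}")

check_folder("./dataset/frames/", "png")  # hypothetical path
```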

Models

There are three separate models available for training: ARCNN, Faster ARCNN, and ARCNN Lite (a faster ARCNN with dilated convolutions).

A comparison of parameter counts is given below:

| Model        | Parameters |
|--------------|------------|
| ARCNN        | 108k       |
| Faster ARCNN | 64k        |
| ARCNN Lite   | 32k        |
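
For reference, here is a minimal tf.keras sketch of the original ARCNN topology (9-7-1-5 kernels with 64-32-16 filters, as in the paper); the exact configuration of the models in this repo may differ:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_arcnn(channels=1):
    """Original ARCNN: feature extraction (9x9), feature enhancement (7x7),
    mapping (1x1), and reconstruction (5x5)."""
    inputs = layers.Input(shape=(None, None, channels))
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 7, padding="same", activation="relu")(x)
    x = layers.Conv2D(16, 1, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(channels, 5, padding="same")(x)
    return models.Model(inputs, outputs)

build_arcnn().summary()  # roughly 107k parameters for single-channel input
```

The Faster and Lite variants shrink this further, the latter using dilated convolutions to keep the receptive field while cutting parameters.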

Sample Results

Ground truth vs. inference outputs on JPEG quality 20, 15, and 10 inputs (sample images in the repository). All outputs are from the dilated model.

Usage

Docker

The Docker container includes all the packages plus JupyterLab for ease of use. Remember to pass the flag "--ip 0.0.0.0" to jupyter lab.

Docker usage is as follows:

Step 1: Enter repo folder

cd /location/of/repository

Step 2: Build image from Dockerfile

docker build ./ -t arcnnkeras  

Step 3: Start and enter container

docker run -it --gpus all -v $PWD:/app -p 8888:8888 arcnnkeras bash

Notes:

  • Use the "--gpus" flag only if the NVIDIA container runtime is set up
  • Additional -v flags can be added to mount any other data folders you need
  • The port 8888 is passed for jupyter usage. It isn't needed for inference
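
Once inside the container, JupyterLab can be started with something like the following (an illustrative invocation; --allow-root is usually needed when the container runs as root):

```
jupyter lab --ip 0.0.0.0 --port 8888 --no-browser --allow-root
```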

Inference

Inside the Docker container (or in your own environment), use the infer.py script to run inference on a folder of images.

The folder should follow the same rules as the dataset:

  1. All images are of the same dimensions
  2. All images are of the same file format

usage: infer.py [-h] -p FOLDER_PATH -m MODEL_PATH -o OUTPUT_PATH

optional arguments:
  -h, --help            show this help message and exit
  -p FOLDER_PATH, --folder_path FOLDER_PATH
                        Path to folder of frames
  -m MODEL_PATH, --model_path MODEL_PATH
                        Path to weights file
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        Path to output folder

Example

python infer.py -m ./models/Model2_dialted_epoch/ -p ./test/Inputs/ -o ./test/
  • Pre-trained models are found in the ./models/ folder; pass one of them to the -m flag. Model2_dialted_epoch is recommended.
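
For programmatic use, a sketch along these lines should work. The SavedModel format and the [0, 1] normalization are assumptions here (infer.py is the authoritative reference), and the file names are illustrative:

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Assumption: the model directory is a TF SavedModel and the network
# expects inputs normalized to [0, 1]; check infer.py for the real pipeline.
model = tf.keras.models.load_model("./models/Model2_dialted_epoch/")
img = np.asarray(Image.open("./test/Inputs/frame.png").convert("RGB"),
                 dtype=np.float32) / 255.0
out = model.predict(img[None, ...])[0]  # add batch dim, then strip it
out = np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)
Image.fromarray(out).save("./test/frame_restored.png")
```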

Training

Use the train_ARCNN.py script to train any of the three models; an example invocation follows the usage text below.

usage: train_ARCNN.py [-h] -m MODEL_SAVE_PATH -c CHECKPOINT_PATH -l
                               LOG_PATH -d DATASET_PATH [-f FILE_FORMAT] -v
                               {1,2,3} [-e EPOCHS] [--batch_size BATCH_SIZE]
                               [--patch_size PATCH_SIZE]
                               [--stride_size STRIDE_SIZE]
                               [--jpq_upper JPQ_UPPER] [--jpq_lower JPQ_LOWER]

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL_SAVE_PATH, --model_save_path MODEL_SAVE_PATH
                        Path to Saved_model
  -c CHECKPOINT_PATH, --checkpoint_path CHECKPOINT_PATH
                        Path to checkpoints
  -l LOG_PATH, --log_path LOG_PATH
                        Path to logdir
  -d DATASET_PATH, --dataset_path DATASET_PATH
                        Path to Folder of images
  -f FILE_FORMAT, --file_format FILE_FORMAT
                        Format of images
  -v {1,2,3}, --version {1,2,3}
                        ARCNN version to train 1: Original | 2: Fast ARCNN |
                        3: Dilated
  -e EPOCHS, --epochs EPOCHS
                        Number of epochs
  --batch_size BATCH_SIZE
                        Batch size
  --patch_size PATCH_SIZE
                        Patch size for training
  --stride_size STRIDE_SIZE
                        Stride of patches
  --jpq_upper JPQ_UPPER
                        Highest JPEG quality for compression
  --jpq_lower JPQ_LOWER
                        Lowest JPEG quality for compression
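
A hypothetical training invocation (all paths and hyperparameter values are illustrative, not repo defaults):

```
python train_ARCNN.py -m ./saved/arcnn_lite/ -c ./checkpoints/ -l ./logs/ \
    -d ./dataset/frames/ -f png -v 3 -e 100 --batch_size 32 \
    --patch_size 64 --stride_size 32 --jpq_lower 10 --jpq_upper 20
```

The --jpq_lower/--jpq_upper flags bound the JPEG quality used to synthesize compressed training inputs. A minimal sketch of what that degradation step presumably does (an assumption based on the flag help; the repo's data pipeline is authoritative):

```python
import io
import random
from PIL import Image

def jpeg_degrade(clean: Image.Image, jpq_lower=10, jpq_upper=20):
    """Re-encode `clean` at a random JPEG quality in [jpq_lower, jpq_upper],
    returning the (compressed, clean) pair the network trains on."""
    q = random.randint(jpq_lower, jpq_upper)
    buf = io.BytesIO()
    clean.convert("RGB").save(buf, format="JPEG", quality=q)
    buf.seek(0)
    return Image.open(buf).convert("RGB"), clean
```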
