Official repository of 3DSRnet (ICIP2019)
We provide the training and test code, the trained weights, and the dataset (train + test) used for 3DSRnet. The result videos on the Vid4 benchmark are provided here (results of Bicubic, VSRnet, VESPCN, 3DSRnet and the ground truth).
If you find this repository useful, please consider citing our paper.
Reference:
Soo Ye Kim, Jeongyeon Lim, Taeyoung Na, and Munchurl Kim, "Video Super-Resolution Based on 3D-CNNs with Consideration of Scene Change," IEEE International Conference on Image Processing, 2019. Electronic Poster
Extended paper on arXiv:
Soo Ye Kim, Jeongyeon Lim, Taeyoung Na, and Munchurl Kim, "3DSRnet: Video Super-resolution using 3D Convolutional Neural Networks," arXiv: 1812.09079, 2018.
Bibtex:
@inproceedings{kim2019video,
title = {Video Super-Resolution Based on 3D-CNNs with Consideration of Scene Change},
author = {Kim, Soo Ye and Lim, Jeongyeon and Na, Taeyoung and Kim, Munchurl},
booktitle = {Proceedings of the IEEE International Conference on Image Processing},
year = {2019}
}
Our code is implemented using MatConvNet (MATLAB required).
An appropriate installation of MatConvNet via the official website is necessary.
Detailed instructions on installing MatConvNet can be found here.
The 3D convolution layer is implemented based on pengsun's mex implementation on GitHub.
MexConv3D must be installed prior to executing any of the provided source code.
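As background, a 3D convolution slides its kernel along the temporal axis as well as the two spatial axes, which is what lets the network aggregate information across neighbouring frames. A minimal NumPy sketch of a "valid" 3D convolution (this is an illustration only, not the repository's mex implementation; like most deep-learning layers it is really a cross-correlation):

```python
import numpy as np

def conv3d_valid(x, k):
    """Naive 'valid' 3D convolution of a (D, H, W) volume with a
    (d, h, w) kernel, returning a (D-d+1, H-h+1, W-w+1) volume."""
    D, H, W = x.shape
    d, h, w = k.shape
    out = np.empty((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                # multiply-accumulate over the local 3D neighbourhood
                out[i, j, l] = np.sum(x[i:i+d, j:j+h, l:l+w] * k)
    return out

x = np.ones((5, 4, 4))   # e.g. 5 frames of a 4x4 clip
k = np.ones((3, 3, 3))
print(conv3d_valid(x, k).shape)  # → (3, 2, 2)
```

Note how the temporal depth shrinks with each valid 3D convolution, which is why the network takes several consecutive frames as input.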
The code was tested under the following setting:
- MATLAB 2017a
- MatConvNet 1.0-beta25
- CUDA 9.0, 10.0
- cuDNN 7.1.4
- NVIDIA TITAN Xp GPU
- Download the source code into a directory of your choice: `<source_path>`
- Download the test dataset (Vid4) from this link and place the 'test' folder in `<source_path>/data`
- Copy the files in `<source_path>/+dagnn/` to `<MatConvNet>/matlab/+dagnn`
- Run `test.m`
We provide the pre-trained weights for the x2, x3 and x4 models in `<source_path>/net`.
The test dataset (Vid4) can be downloaded from here.
With `test.m`, the pre-trained models can be evaluated on the Vid4 benchmark.
Remarks
- You can change the SR scale factor (2, 3 or 4) by modifying the `scale` parameter in the initial settings.
- You can change the video sequence by modifying the `sequence_name` parameter in the initial settings.
- When you run this code, PSNR evaluation will be performed and the .png prediction files will be saved in `<source_path>/pred/`.
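For reference, the PSNR used in the evaluation can be sketched as follows (a NumPy sketch assuming 8-bit frames; the repository itself performs the evaluation in MATLAB):

```python
import numpy as np

def psnr(pred, gt, peak=255.0):
    """Peak signal-to-noise ratio in dB between a predicted frame
    and its ground truth, assuming a given peak pixel value."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy example: a single wrong pixel in a 4x4 frame
gt = np.zeros((4, 4), dtype=np.uint8)
pred = gt.copy()
pred[0, 0] = 16
print(round(psnr(pred, gt), 2))  # → 36.09
```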
- Download the source code into a directory of your choice: `<source_path>`
- Copy the files in `<source_path>/+dagnn/` to `<MatConvNet>/matlab/+dagnn`
- Run `test_SF_subnet.m` or `test_SF_SR.m`
- The pre-trained weights of the SF subnet are given in `<source_path>/net`.
- Four data samples containing a scene boundary after frame 1, 2, 3 and 4, plus one sample containing no scene change, are provided in `<source_path>/data/SF_subnet`.
- With `test_SF_subnet.m`, you can test the scene boundary detection of the SF subnet on the given sample data.
- `test_SF_SR.m` implements the whole pipeline: detecting the scene boundary, replacing the frames from the different scene, and finally running inference with the video SR network. When you run this code, the .png prediction files will be saved in `<source_path>/pred/SF_SR`. You can change the SR scale factor (2, 3 or 4) by modifying the `scale` parameter in the initial settings.
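The frame-replacement step in the pipeline above can be sketched as follows (a hedged sketch assuming a 5-frame window centered on the target frame; the function name and window layout are illustrative, and the paper describes the exact scheme):

```python
def replace_cross_scene_frames(frames, boundary_after, center=2):
    """Replace frames lying on the other side of a scene boundary from
    the center (target) frame with the nearest frame of the target's
    scene. boundary_after = k means the scene changes between
    frames[k] and frames[k+1] (0-indexed)."""
    frames = list(frames)
    if boundary_after < center:
        # leading frames belong to the previous scene:
        # duplicate the first frame of the target's scene
        for i in range(boundary_after + 1):
            frames[i] = frames[boundary_after + 1]
    else:
        # trailing frames belong to the next scene:
        # duplicate the last frame of the target's scene
        for i in range(boundary_after + 1, len(frames)):
            frames[i] = frames[boundary_after]
    return frames

# scene change between frames 1 and 2: the two leading frames are replaced
print(replace_cross_scene_frames(['a1', 'a2', 'b1', 'b2', 'b3'], 1))
# → ['b1', 'b1', 'b1', 'b2', 'b3']
```

This keeps the input volume temporally consistent with the target frame, so the 3D convolutions never mix content from two different scenes.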
- Download the source code into a directory of your choice: `<source_path>`
- Download the train dataset from here and place the 'train' folder in `<source_path>/data`
- Copy the files in `<source_path>/+dagnn/` to `<MatConvNet>/matlab/+dagnn`
- Run `train.m`
This code (`train.m`) trains the video SR subnet. The 3D-CNN model of the video SR subnet is specified in `net.m`.
The train dataset can be downloaded from here.
Remarks
- You can change the SR scale factor (2, 3 or 4) by modifying the `scale` parameter.
- The trained weights will be saved in `<source_path>/net/net_x[scale]`.
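The video SR subnet consumes a short temporal window of consecutive low-resolution frames as a single 3D input volume. How such windows can be sliced out of a training clip is sketched below (NumPy; the window length of 5 and the (T, H, W) layout are assumptions for illustration, not the repository's exact data pipeline):

```python
import numpy as np

def make_windows(video, t=5):
    """Slice a (T, H, W) clip into overlapping (t, H, W) temporal
    windows, one per frame that has t//2 neighbours on both sides."""
    r = t // 2
    T = video.shape[0]
    return np.stack([video[i - r:i + r + 1] for i in range(r, T - r)])

clip = np.zeros((8, 32, 32), dtype=np.float32)  # 8-frame toy clip
print(make_windows(clip).shape)  # → (4, 5, 32, 32)
```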
Please contact me via email (sooyekim@kaist.ac.kr) for any problems regarding the released code.