
Video Frame Interpolation Transformer

This repo is the official implementation of 'Video Frame Interpolation Transformer', CVPR 2022.

Paper, Video, Video without compression

Packages

The following packages are required to run the code (a quick version check follows the list):

  • python==3.7.6
  • pytorch==1.5.1
  • cudatoolkit==10.1
  • torchvision==0.6.1
  • cupy==7.5.0
  • pillow==8.2.0
  • einops==0.3.0
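
As a quick sanity check, the short script below imports the pinned packages and prints the installed versions; it is an illustrative helper, not part of the repository.

import sys
import torch, torchvision, cupy, PIL, einops

print("python     ", sys.version.split()[0])
print("pytorch    ", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision", torchvision.__version__)
print("cupy       ", cupy.__version__)
print("pillow     ", PIL.__version__)
print("einops     ", einops.__version__)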

Train

  • Download the Vimeo-90K septuplets dataset.
  • Then train VFIT-B using default training configurations:
python main.py --model VFIT_B --dataset vimeo90K_septuplet --data_root <dataset_path>

Training VFIT-S is similar; simply change --model to VFIT_S.
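
For reference, the sketch below shows how a single septuplet is typically consumed for interpolation: four frames as input and the middle frame as ground truth. The frame indices (im1/im3/im5/im7 as inputs, im4 as the target) and the <dataset_path>/sequences/... layout follow the standard Vimeo-90K septuplet convention and are assumptions about this repo's data loader, not a copy of it.

from pathlib import Path
from PIL import Image
import torchvision.transforms.functional as TF

def load_septuplet(seq_dir):
    # e.g. seq_dir = "<dataset_path>/sequences/00001/0001"
    seq_dir = Path(seq_dir)
    # Four frames surrounding the one to be interpolated (assumed indices).
    inputs = [TF.to_tensor(Image.open(seq_dir / f"im{i}.png")) for i in (1, 3, 5, 7)]
    # The center frame, used as the ground truth.
    target = TF.to_tensor(Image.open(seq_dir / "im4.png"))
    return inputs, target  # four [3, H, W] tensors and one [3, H, W] tensor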

Test

After training, you can evaluate the model with the following command:

python test.py --model VFIT_B --dataset vimeo90K_septuplet --data_root <dataset_path> --load_from checkpoints/model_best.pth

You can also evaluate VFIT using our pretrained weights here.
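
Results on Vimeo-90K are typically reported as PSNR/SSIM. The helper below shows how PSNR between a predicted frame and its ground truth can be computed; it is an illustrative sketch, not the repository's evaluation code.

import torch

def psnr(pred, gt, eps=1e-8):
    # pred, gt: float tensors in [0, 1] with identical shape, e.g. [3, H, W].
    mse = torch.mean((pred - gt) ** 2)
    return 10.0 * torch.log10(1.0 / (mse + eps))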

More datasets for evaluation:

Please consider citing this paper if you find the code and data useful in your research:

@inproceedings{shi2022video,
  title={Video Frame Interpolation Transformer},
  author={Shi, Zhihao and Xu, Xiangyu and Liu, Xiaohong and Chen, Jun and Yang, Ming-Hsuan},
  booktitle={CVPR},
  year={2022}
}

References

Some other great video frame interpolation resources that we benefited from:

  • FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation, arXiv 2021 Code
  • QVI: Quadratic Video Interpolation, NeurIPS 2019 Code
  • AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation, CVPR 2020 Code
