🎬 Video Captioning: ICCV '15 paper implementation


# Video-Captioning


## Performance

| method | BLEU@1 score |
| ------ | ------------ |
| seq2seq* | 0.28 |

\*seq2seq is a reproduction of the paper's model.
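For reference, BLEU@1 is the clipped unigram precision of a generated caption against its reference, scaled by a brevity penalty. A minimal sketch of the metric (not the scoring script used by this repo):

```python
import math
from collections import Counter

def bleu_1(candidate, reference):
    """BLEU@1: clipped unigram precision times a brevity penalty."""
    cand = candidate.split()
    ref = reference.split()
    ref_counts = Counter(ref)
    # Each candidate token counts at most as often as it appears in the reference.
    clipped = sum(min(n, ref_counts[tok]) for tok, n in Counter(cand).items())
    precision = clipped / len(cand)
    # Penalize captions shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(bleu_1("a man is playing guitar", "a man plays a guitar"))  # 0.6
```

Here 3 of the 5 candidate unigrams ("a", "man", "guitar") appear in the reference, the lengths match, so the score is 3/5 = 0.6.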

## Run the code

```shell
pip install -r requirements.txt
./run.sh data/testing_id.txt data/test_features
```

In detail, `run.sh` takes two parameters:

```shell
./run.sh <video_id_file> <path_to_video_features>
```

- `video_id_file`: a text file listing video ids; you can use `data/testing_id.txt` for convenience
- `path_to_video_features`: a directory containing the video features, one `*.npy` file per video; take a look at `data/test_features`, which you can also use for convenience
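Each `*.npy` file holds one video's feature array, named after its video id. As a hypothetical illustration (the 80×4096 frames-by-features shape is an assumption for the example, not taken from this repo):

```python
import os
import tempfile
import numpy as np

tmpdir = tempfile.mkdtemp()
video_id = "vid001"  # hypothetical video id

# Assumed shape: 80 sampled frames, each with a 4096-dim CNN feature vector.
feat = np.random.rand(80, 4096).astype("float32")
np.save(os.path.join(tmpdir, video_id + ".npy"), feat)

# The captioning scripts can then look up a video's features by its id:
loaded = np.load(os.path.join(tmpdir, video_id + ".npy"))
print(loaded.shape)  # (80, 4096)
```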

## Train the code

```shell
pip install -r requirements.txt
./train.sh
```

## Test the code

```shell
./test.sh <path_to_model>
```

- `path_to_model`: the path to a trained model; pass `models/model-2380` to use the pre-trained model
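`models/model-2380` looks like a TensorFlow-style checkpoint prefix, i.e. a stem for sidecar files such as `model-2380.index` or `model-2380.meta` (an assumption, not confirmed by this README). A small helper to sanity-check that such files exist before invoking `test.sh`:

```python
import glob
import os
import tempfile

def checkpoint_exists(prefix):
    """Return True if any file matching the checkpoint prefix is present.

    Assumes a TF-style layout where a prefix such as "models/model-2380"
    names sidecar files like model-2380.index and model-2380.meta.
    """
    return len(glob.glob(prefix + ".*")) > 0

# Demo with hypothetical files in a temporary directory:
tmpdir = tempfile.mkdtemp()
prefix = os.path.join(tmpdir, "model-2380")
open(prefix + ".index", "w").close()
print(checkpoint_exists(prefix))                              # True
print(checkpoint_exists(os.path.join(tmpdir, "model-9999")))  # False
```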

## Environment

- OS: CentOS Linux release 7.3.1611 (Core)
- CPU: Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz
- GPU: GeForce GTX 1070 8GB
- Memory: 16GB DDR3
- Python 3 (for `data_parser.py`) & Python 2.7 (for everything else)

## Author

Po-Chih Huang / @pochih