Knowledge Graph Extraction from Videos

Code for the paper Knowledge Graph Extraction from Videos.

Steps to reproduce

  • Prepare video tensors
  1. Download the video files for MSVD and MSRVTT.
  2. Preprocess each dataset with preprocess_.py to obtain video tensors of the right shape and to match each video with its correct set of captions via a numerical video id.
  3. Run the VGG and I3D networks (make_vgg_vecs.py and make_i3d_vecs.py) to get feature vectors for the videos; a sketch of the per-frame VGG feature extraction is shown after this list.
  • Prepare logical caption datasets
  1. Download the word2vec vectors and place them at ../data/w2v_vecs.bin.
  2. Run, in order, semantic_parser.py, w2v_wn_links.py and make_new_dset.py. These scripts, respectively, convert the natural-language captions to logical captions, link the components of the logical captions to WordNet, and build a new dataset from the linked logical captions (i.e. format the dataset properly and exclude predicates and individuals that appear fewer than 50 times). Sketches of the linking and filtering steps are given after this list.
  • Train and validate the model using main.py.
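
For reference, here is a minimal sketch of per-frame VGG feature extraction. It is not the repository's make_vgg_vecs.py; it only illustrates the kind of computation that step performs, assuming the video tensor is a stack of ImageNet-normalised 224x224 frames and that torchvision's pretrained VGG-19 is used.

```python
# Minimal sketch (assumptions: torchvision VGG-19, frames already resized to
# 224x224 and ImageNet-normalised). Not the repository's make_vgg_vecs.py.
import torch
from torchvision import models

vgg = models.vgg19(pretrained=True).features.eval()  # convolutional trunk only

def vgg_frame_features(video):
    """video: float tensor of shape (num_frames, 3, 224, 224)."""
    with torch.no_grad():
        maps = vgg(video)           # (num_frames, 512, 7, 7) feature maps
    return maps.mean(dim=(2, 3))    # global average pool -> (num_frames, 512)
```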
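
The linking step is not reproduced here either; the sketch below only shows, under assumed tooling (NLTK for WordNet, gensim for the word2vec vectors), how a word from a logical caption could be looked up in both resources. The repository's w2v_wn_links.py may work differently.

```python
# Minimal sketch of linking a logical-caption component to WordNet and word2vec.
# Assumptions: NLTK's WordNet corpus is installed and gensim is available.
from nltk.corpus import wordnet as wn          # requires nltk.download('wordnet')
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format('../data/w2v_vecs.bin', binary=True)

def link(word, pos=wn.VERB):
    synsets = wn.synsets(word, pos=pos)          # candidate WordNet senses
    vector = w2v[word] if word in w2v else None  # word2vec embedding, if present
    return synsets, vector
```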
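
Finally, the 50-occurrence cutoff can be pictured as a simple frequency filter. The representation of a logical caption below (a list of predicate/argument atoms) is hypothetical and chosen only for illustration; make_new_dset.py is assumed to apply something similar.

```python
# Minimal sketch of the frequency filter. A caption is represented here,
# hypothetically, as a list of (predicate, arguments) atoms.
from collections import Counter

MIN_COUNT = 50

def filter_captions(captions):
    counts = Counter()
    for cap in captions:
        for pred, args in cap:
            counts[pred] += 1
            counts.update(args)
    # keep only atoms whose predicate and individuals all occur >= MIN_COUNT times
    return [[(p, a) for p, a in cap
             if counts[p] >= MIN_COUNT and all(counts[x] >= MIN_COUNT for x in a)]
            for cap in captions]
```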
