Download the preprocessed COCO captions from link from Karpathy's homepage. Extract dataset_coco.json from the zip file and copy it into data/. This file provides the preprocessed captions and the standard train-val-test splits.
Then do:
$ python scripts/prepro_labels.py --input_json data/dataset_coco.json --output_json data/cocotalk.json --output_h5 data/cocotalk
prepro_labels.py will map all words that occur <= 5 times to a special UNK token, and create a vocabulary for all the remaining words. The image information and vocabulary are dumped into data/cocotalk.json, and the discretized caption data are dumped into data/cocotalk_label.h5.
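As a quick sanity check, you can inspect the outputs; the field names below assume the conventions of Ruotian Luo's prepro_labels.py.

import json
import h5py

# Field names assume the outputs of Ruotian Luo's prepro_labels.py.
info = json.load(open('data/cocotalk.json'))
print(len(info['ix_to_word']), 'words in vocabulary')
print(len(info['images']), 'images')

with h5py.File('data/cocotalk_label.h5', 'r') as h5:
    print(h5['labels'].shape)        # (num_captions, max_length) word indices
    print(h5['label_start_ix'][:5])  # 1-indexed caption ranges per image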
Download the pre-extracted features from link. You can download either the adaptive or the fixed version. For example:
mkdir data/bu_data; cd data/bu_data
wget https://storage.googleapis.com/bottom-up-attention/trainval.zip
unzip trainval.zip
Then:
python scripts/make_bu_data.py --output_dir data/cocobu
This will create data/cocobu_fc, data/cocobu_att, and data/cocobu_box. If you want to use the bottom-up features, just follow the same steps as above and replace all occurrences of cocotalk with cocobu.
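To verify the conversion, you can load the features for a single image. The image id below is a placeholder, and the file layout (one file per image id, region features stored under the feat key) assumes the conventions of make_bu_data.py.

import numpy as np

image_id = '123456'  # placeholder; substitute a real COCO image id
fc = np.load('data/cocobu_fc/%s.npy' % image_id)            # mean-pooled feature, e.g. (2048,)
att = np.load('data/cocobu_att/%s.npz' % image_id)['feat']  # region features, e.g. (num_boxes, 2048)
box = np.load('data/cocobu_box/%s.npy' % image_id)          # box coordinates, (num_boxes, 4)
print(fc.shape, att.shape, box.shape)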
The rest of the files needed for COCO and Flickr30k training and evaluation, along with all other data files, are available for download at link, provided by Ruotian Luo.
The process for Flickr30k is similar:
python scripts/prepro_labels.py --input_json data/dataset_flickr30k.json --output_json data/f30ktalk.json --output_h5 data/f30ktalk
python scripts/prepro_ngrams.py --input_json data/dataset_flickr30k.json --dict_json data/f30ktalk.json --output_pkl data/f30k-train --split train
Here prepro_ngrams.py precomputes n-gram statistics over the training split, which are used for CIDEr reward computation during self-critical training.
This generates the COCO-like annotation file used for evaluation with coco-caption:
python scripts/prepro_reference_json.py --input_json data/dataset_flickr30k.json --output_json data/f30k_captions4eval.json
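A minimal sketch of how this file plugs into coco-caption for scoring; results.json is a hypothetical predictions file in the standard [{"image_id": ..., "caption": ...}] format.

from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

coco = COCO('data/f30k_captions4eval.json')
coco_res = coco.loadRes('results.json')  # hypothetical model predictions
coco_eval = COCOEvalCap(coco, coco_res)
coco_eval.evaluate()
print(coco_eval.eval)  # BLEU, METEOR, ROUGE_L, CIDEr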
For the bottom-up features, you can download them from link:
wget https://scanproject.blob.core.windows.net/scan-data/data.zip
and then convert them to a pth file using the following script:
import numpy as np
import torch
from tqdm import tqdm

out = {}

def transform(id_file, feat_file):
    # Read the image ids (one per line) and the matching feature array.
    with open(id_file, 'r') as f:
        ids = [line.strip() for line in f]
    feats = np.load(feat_file)
    assert feats.shape[0] == len(ids)
    # Map each image id to its feature matrix.
    for _id, _feat in tqdm(zip(ids, feats), total=len(ids)):
        out[str(_id)] = _feat

transform('dev_ids.txt', 'dev_ims.npy')
transform('train_ids.txt', 'train_ims.npy')
transform('test_ids.txt', 'test_ims.npy')

torch.save(out, 'f30kbu_att.pth')
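To check the result, load the dictionary back and look up an image (the id below is a placeholder):

import torch

feats = torch.load('f30kbu_att.pth')
print(len(feats), 'images')
print(feats['123456'].shape)  # (num_regions, feat_dim); placeholder id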