

# Run DenseCap

Download DenseCap

```
git clone https://github.com/jcjohnson/densecap.git
```
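
The scripts below use paths relative to the repository root, so the remaining commands assume you have changed into the cloned directory first:

```
# move into the cloned repository before running the scripts below
cd densecap
```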

Download a pretrained model. (Note: the kaixhin/cuda-torch container doesn't include wget, so you may need to run apt-get install wget first; a sketch follows the command below.)

```
sh scripts/download_pretrained_model.sh
```
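
If wget is missing inside the container, a quick install might look like the following (this assumes root access inside the kaixhin/cuda-torch container; the apt-get update step is an assumption about a fresh image):

```
# install wget inside the container before running the download script
apt-get update && apt-get install -y wget
```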

Run the model on the provided elephant.jpg image

```
# GPU mode
th run_model.lua -input_image imgs/elephant.jpg

# CPU mode
th run_model.lua -input_image imgs/elephant.jpg -gpu -1
```

This command will write results into the folder vis/data. We have provided a web-based visualizer to view these results; to use it, change to the vis directory and start a local HTTP server:

```
cd vis
python -m SimpleHTTPServer 8181
```
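
If your system has Python 3 rather than Python 2, the SimpleHTTPServer module was renamed to http.server; the equivalent command is:

```
# Python 3 equivalent of the SimpleHTTPServer command above
python3 -m http.server 8181
```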

Then point your web browser to http://localhost:8181/view_results.html.

If you have an entire directory of images on which you want to run the model, use the -input_dir flag instead:

```
th run_model.lua -input_dir /path/to/my/image/folder
```

This runs the model on all files in the folder /path/to/my/image/folder/ whose filenames do not start with `.`.

The web-based visualizer is the preferred way to view results, but if you don't want to use it, you can instead render images with the detection boxes and captions "baked in"; add the -output_dir flag to specify a directory where the output images should be written:

```
th run_model.lua -input_dir /path/to/my/image/folder -output_dir /path/to/output/folder/
```
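
The flags shown above can be combined; for example, a CPU-only run over a whole folder with baked-in output images (the paths here are placeholders) might look like:

```
# CPU mode over a folder of images, writing annotated images to the output folder
th run_model.lua -input_dir /path/to/my/image/folder -output_dir /path/to/output/folder/ -gpu -1
```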

The run_model.lua script has several other flags; see run_model.lua itself for details on each.