
STL

In this package, we provide the Python code for the following paper:

Self-taught Object Localization using Deep Networks. L. Bazzani, A. Bergamo, D. Anguelov, and L. Torresani. CoRR, 2014.

We include:

  • A demo that shows how STL can be used to extract objectness bounding boxes from an image
  • The scripts to generate the plots of our paper

Dependencies

Getting started

Demo

  • Download the model used in our experiments from here
  • Open the file main_extract_stl.py
  • Edit line 19 to point to the path where you downloaded the model
  • Select the "cpu" or "gpu" option at line 21 (a sketch of these two settings is shown after this list)
  • Run: python main_extract_stl.py
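
For reference, the two edits above amount to something like the following sketch; the variable names are assumptions, since the actual code in main_extract_stl.py may use different ones:

```python
# Hypothetical sketch of the configurable lines in main_extract_stl.py.
# The variable names are assumptions; check the actual file.
model_path = '/path/to/downloaded/model'  # line 19: path to the downloaded model
device = 'cpu'                            # line 21: 'cpu' or 'gpu'
```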

To experiment with the parameters of STL, open the file stl_params.py and look at the arguments that can be passed to the initialization function.

By default the code runs the unsupervised version of STL, but it can be switched to the supervised version by setting use_fullimg_GT_label=True. Note that the label(s) should be provided along with the image at line 24. See the file prototxt/synset_words.txt for the list of labels. For the example in the demo, use gt_labels = ["n01744401"].
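
As a rough illustration only, the supervised switch might look like the sketch below; the import and class name are assumptions, while use_fullimg_GT_label and gt_labels come from this README (check stl_params.py and main_extract_stl.py for the real API):

```python
# Hypothetical sketch of enabling the supervised variant of STL.
from stl_params import StlParams  # assumed import; the actual class name may differ

params = StlParams(use_fullimg_GT_label=True)  # enable the supervised version
gt_labels = ["n01744401"]  # synset id for the demo image, from prototxt/synset_words.txt
```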

Generating the Plots of the Paper

Open Matlab and run the script generate_figures.m. New curves can be added by modifying the file utils_generate_figures_paper/config.m.

Additional Info

The list of 200 classes randomly selected for ILSVRC2012-(val,train)-200-RND can be found in the file list_classes_ILSVRC2012-200-RND.txt.
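
For convenience, the class ids can be read in Python as sketched below, assuming the file lists one class id per line (the exact file format is an assumption):

```python
# Minimal sketch: load the 200 randomly selected ILSVRC2012 class ids.
# Assumes one class id per line in the file shipped with the repository.
with open('list_classes_ILSVRC2012-200-RND.txt') as f:
    class_ids = [line.strip() for line in f if line.strip()]
print(len(class_ids))  # expected: 200
```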

L. Bazzani and A. Bergamo contributed equally to the project. For the license and usage, have a look at the file LICENSE.
