
Image Features for Visual Teach-and-Repeat Navigation in Changing Environments

This project addresses visual-based navigation of mobile robots in outdoor environments. In particular, we address the robustness of image features to seasonal changes. First, we provide a simple framework for benchmarking feature extractors - so far, our benchmark was used by Peer Neubert, who showed that Superpixel Grids (SpG) and Convolutional Neural Networks (CNN) outperform other image features in terms of robustness to seasonal changes. However, the CNN-based features are computationally expensive, so this project also provides an evolutionary algorithm that trains the BRIEF feature to be robust to environmental changes. We call the resulting feature GRIEF (Generated BRIEF). While GRIEF is slightly less robust than SpG/CNN, it is much faster to calculate. The GRIEF feature and its evaluation are described in detail in a paper published in the Journal of Robotics and Autonomous Systems [1], and it was also presented at the European Conference on Mobile Robots [2].


Dependencies

The project depends on OpenCV, and it uses the OpenCV non-free packages. To install the OpenCV non-free packages, type this in a terminal:

  • sudo add-apt-repository --yes ppa:xqms/opencv-nonfree
  • sudo apt-get update
  • sudo apt-get install libopencv-nonfree-dev
  • sudo apt-get install libopencv-dev

Moreover, the project uses the gnuplot and transfig packages to draw the results. You can install these with:

  • sudo apt-get install gnuplot xfig transfig
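Before running the plotting scripts, it may be worth checking that the required binaries are actually on the PATH. A minimal sketch - the check_tools helper is hypothetical, not part of the repository (transfig provides the fig2dev binary used for figure conversion):

```shell
# Hypothetical helper: report any required tool that is not on PATH.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

# The result-drawing scripts rely on these binaries
# (fig2dev ships with the transfig package).
check_tools gnuplot fig2dev
```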

Datasets

The datasets we used for evaluation are available for download from my Google Drive and from the L-CAS OwnCloud.

Feature evaluation

Testing the main program

  1. Go to tools and compile the match_all utility: cd tools;make;cd ..,
  2. Run ./tools/match_all DETECTOR DESCRIPTOR DATASET to evaluate a single detector/descriptor combination (e.g. ./tools/match_all star brief GRIEF-dataset/michigan),
  3. After the test finishes, look in the dataset's results/ directory for a detector_descriptor.histogram file (e.g. GRIEF-datasets/michigan/results/up-surf_brief.histogram),
  4. Run the benchmark on the provided dataset: ./scripts/match_all.sh DATASET.
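The result file in step 3 follows a fixed naming scheme, DATASET/results/DETECTOR_DESCRIPTOR.histogram. A minimal sketch that composes this path from a detector/descriptor/dataset triple - the histogram_path helper is hypothetical, not part of the repository:

```shell
# Hypothetical helper: compose the result-file path from step 3,
# assuming the DATASET/results/DETECTOR_DESCRIPTOR.histogram scheme.
histogram_path() {
  detector="$1"; descriptor="$2"; dataset="$3"
  printf '%s/results/%s_%s.histogram\n' "$dataset" "$detector" "$descriptor"
}

histogram_path up-surf brief GRIEF-datasets/michigan
```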

Running benchmarks

  1. The first lines of the detectors and descriptors files in the settings folder list the detectors and descriptors that will be used in the benchmark. You can select them by editing these files. Try modifying the first line of settings/detectors so that it contains star up-sift, and the first line of settings/descriptors so that it contains brief root-sift.
  2. To run a benchmark of all detector/descriptor combinations, run ./scripts/match_all.sh DATASET. For example, running ./scripts/match_all.sh GRIEF-datasets/michigan with the files set according to the previous point will test four image features: star+brief, star+root-sift, up-sift+brief and up-sift+root-sift on the GRIEF-datasets/michigan dataset.
  3. To run a benchmark that tests the detector/descriptor pairs successively (the i-th detector with the i-th descriptor), run ./scripts/match.sh DATASET. That is, running ./scripts/match.sh GRIEF-datasets/michigan with the settings/detectors and settings/descriptors files set according to point 1 will test only the star+brief and up-sift+root-sift image features.
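The difference between the two benchmark modes can be sketched as follows; the two functions are hypothetical illustrations of the pairing logic, not code from the repository's scripts:

```shell
# match_all.sh style: every detector is paired with every descriptor.
cross_pairs() {
  for det in $1; do
    for des in $2; do
      echo "$det+$des"
    done
  done
}

# match.sh style: the i-th detector is paired with the i-th descriptor.
successive_pairs() {
  dets="$1"
  set -- $2
  for det in $dets; do
    echo "$det+$1"
    shift
  done
}

cross_pairs 'star up-sift' 'brief root-sift'       # four combinations
successive_pairs 'star up-sift' 'brief root-sift'  # two combinations
```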

Evaluation of results

  1. The evaluation scripts process the results obtained by running the benchmarks; like the benchmarks themselves, they evaluate the detectors and descriptors listed on the first lines of the files in the settings folder.
  2. Running ./scripts/benchmark_evolution.sh DATASET evaluates every iteration of the GRIEF algorithm stored in the grief_history on a given DATASET.
  3. Running ./scripts/benchmark_precision.sh DATASET creates a latex-formatted table that contains the error rates of the detector/descriptor combinations.
  4. Running ./scripts/draw.sh DATASET draws the dependence of the heading estimation error on the number of features extracted and stores the results in rates.fig and rates.pdf files.

GRIEF training

  1. To initiate the training, you need to set the initial comparisons of the GRIEF feature. Either reset the GRIEF to be the same as BRIEF by ./scripts/resetGrief.sh or switch to the GRIEF that was used in [1] by running ./scripts/restoreGrief.sh.
  2. Running ./scripts/evolveGrief.sh DATASET NUMBER will evolve NUMBER generations of GRIEF on DATASET, e.g. ./scripts/evolveGrief.sh GRIEF-dataset/michigan 100.
  3. Training can be sped up by restricting the number of images, i.e. by creating a smaller dataset just for training.
  4. To switch to an arbitrary GRIEF generated during the training, run ./scripts/restoreGrief.sh [grief_file]. The grief files are stored in the grief_history directory, which contains the comparisons and fitness of the individual GRIEF generations.
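Step 3 can be done by copying a subset of the images into a separate directory. A minimal sketch, assuming the dataset is simply a directory of image files (the actual dataset layout may differ) - make_training_subset is a hypothetical helper, not part of the repository:

```shell
# Hypothetical helper: copy the first N files of SRC into DST to form a
# smaller training dataset. Assumes a flat directory of images.
make_training_subset() {
  src="$1"; dst="$2"; n="$3"
  mkdir -p "$dst"
  for f in $(ls "$src" | head -n "$n"); do
    cp "$src/$f" "$dst/"
  done
}

# Example (hypothetical target directory):
# make_training_subset GRIEF-dataset/michigan GRIEF-dataset/michigan-small 100
```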

References

  1. T. Krajnik, P. Cristoforis, K. Kusumam, P. Neubert, T. Duckett: Image features for Visual Teach-and-Repeat Navigation in Changing Environments. Journal of Robotics and Autonomous Systems, 2016.
  2. T. Krajnik, P. Cristoforis, M. Nitsche, K. Kusumam, T. Duckett: Image features and seasons revisited. ECMR 2015.

Acknowledgements

This research is currently supported by the Czech Science Foundation project 17-27006Y STRoLL. It was also funded by the EU ICT project 600623 STRANDS.
