
adjust likelihood using z-score #1272

Merged
merged 1 commit into from May 20, 2024

Conversation

borongyuan
Contributor

I constructed a new method to adjust the likelihood, as a solution to #1105 (comment). It should be compatible with the previous similarity evaluation methods, as well as VLAD. The idea is to compute the z-score for values greater than μ + σ. This eliminates the effect of different distributions. For the new location likelihood, the method evaluates how significant the maximum value is.
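A minimal sketch of the idea described above (illustrative names only, not the actual RTAB-Map implementation; the exact handling of values below μ + σ is an assumption):

```python
import numpy as np

def adjust_likelihood(raw):
    """Rescale a raw likelihood vector using z-scores above mu + sigma.

    Values at or below mu + sigma are treated as insignificant (set to 1),
    while significant values become their z-score, making the output
    comparable across different raw-likelihood distributions.
    Illustrative sketch only, not RTAB-Map's actual code.
    """
    raw = np.asarray(raw, dtype=float)
    mu, sigma = raw.mean(), raw.std()
    if sigma == 0.0:
        # Flat distribution: nothing stands out.
        return np.ones_like(raw)
    adjusted = np.ones_like(raw)
    mask = raw > mu + sigma
    adjusted[mask] = (raw[mask] - mu) / sigma  # z-score, always > 1 here
    return adjusted
```

Because the output is expressed in units of σ, the curve shape no longer depends on how the raw similarity scores were computed.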

@matlabbe
Member

matlabbe commented Apr 29, 2024

Thanks for the suggestion, though as it touches a core feature of RTAB-Map, I would prefer to have a parameter/option to switch this new approach on/off (with default "off") until I can compare against the original Precision/Recall results (with all those datasets https://github.com/introlab/rtabmap/wiki/Benchmark) to see whether the results are similar, better, or worse. To do so, I'll check if I can make a Dockerfile to automate testing against at least the main loop closure datasets.

@borongyuan
Contributor Author

Yes, this change needs more testing. I've only done some preliminary testing so far, and found that it produces quite pleasing likelihood curves. In particular, no matter which method is used to compute the raw likelihood, the resulting likelihood curves are very similar. However, the PDF curves have become a bit flat, so we may need to further tune the parameters of the Bayesian filter.

The reason is that the z-score does not tend to produce particularly large values: a significant loop event generally scores only 3 to 6 σ, and the new location likelihood is limited to between 1 and 2. Since these distribution characteristics are stable, the Bayesian filter should be able to handle them well.

@matlabbe matlabbe merged commit c034a96 into introlab:master May 20, 2024
6 checks passed
@matlabbe
Member

matlabbe commented May 20, 2024

I added a new parameter to select this approach: set Rtabmap/VirtualPlaceLikelihoodRatio=1 to use the approach of this PR.

Param: Rtabmap/VirtualPlaceLikelihoodRatio = "0"           [Likelihood ratio for virtual place (for no loop closure hypothesis): 0=Mean / StdDev, 1=StdDev / (Max-Mean)]
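The two ratio modes described by the parameter above can be sketched as follows (illustrative Python, not RTAB-Map's actual C++ code; the edge-case handling when the spread is zero is an assumption):

```python
import statistics

def virtual_place_likelihood(likelihood, ratio_mode):
    """Likelihood ratio assigned to the virtual place, i.e. the
    no-loop-closure hypothesis, following the two modes of
    Rtabmap/VirtualPlaceLikelihoodRatio. Illustrative sketch only.
    """
    mu = statistics.mean(likelihood)
    sigma = statistics.pstdev(likelihood)
    if ratio_mode == 0:
        # Original approach: Mean / StdDev
        return mu / sigma if sigma > 0 else 1.0
    else:
        # Proposed approach: StdDev / (Max - Mean)
        spread = max(likelihood) - mu
        return sigma / spread if spread > 0 else 1.0
```

With mode 1, a likelihood vector whose maximum stands far above the mean yields a small virtual-place ratio, favoring the loop closure hypothesis, which is why a different Rtabmap/LoopThr is needed for equivalence.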

Here is a comparison of the resulting hypotheses of the Bayes filter based on the old (original) and proposed approaches. Note that for the proposed approach, we should set Rtabmap/LoopThr to 0.5 to be somewhat equivalent to 0.11 (the default) using the original approach. On the right is the Precision/Recall curve (Recall on the x-axis, Precision on the y-axis).

NewCollege dataset:

  • Rtabmap/VirtualPlaceLikelihoodRatio=0 (original approach): [image: NewCollegeOld]
  • Rtabmap/VirtualPlaceLikelihoodRatio=1 (proposed approach): [image: NewCollege]

CityCentre dataset:

  • Rtabmap/VirtualPlaceLikelihoodRatio=0 (original approach): [image: CityCentreOld]
  • Rtabmap/VirtualPlaceLikelihoodRatio=1 (proposed approach): [image: CityCentre]

UdeS_1Hz dataset:

  • Rtabmap/VirtualPlaceLikelihoodRatio=0 (original approach): [image: UdeS_1HzOld]
  • Rtabmap/VirtualPlaceLikelihoodRatio=1 (proposed approach): [image: UdeS_1Hz]

In conclusion, the resulting Precision/Recall curves are similar with either approach for BOW. However, Rtabmap/LoopThr seems to vary more between datasets, which is why I didn't make the proposed approach the new default. In the coming weeks I'll try to do the same comparison with the global descriptor NETVLAD, with which I expect to get better Precision/Recall using Rtabmap/VirtualPlaceLikelihoodRatio=1.

To reproduce the results, see https://github.com/introlab/rtabmap/tree/master/archive/2010-LoopClosure#readme

@borongyuan
Contributor Author

Great to see your test results. I've been doing real-time testing with the OAK camera recently, so I haven't tested it on these datasets yet. I'm a little surprised that, from your PR curves, the AUC appears higher even when still using BOW.
I'm still having some difficulty getting a more significant posterior from the Bayesian filtering. Intuitively, it is clear that a longer stretch of consistently high likelihood is needed for the filter to produce a significant result. NETVLAD does have better recall than BOW at some locations, but the current parameters don't seem to unleash its full potential.
