
Reduce the used memory for big map #1217

Open
hellovuong opened this issue Feb 12, 2024 · 6 comments

Comments

@hellovuong
Contributor

hellovuong commented Feb 12, 2024

Hello @matlabbe,
We created a pretty good map of a large area (multi-session mapping: 5 sessions). It ended up with >4 million words in the vocabulary, which leads rtabmap to crash every time it runs in localization mode for a few minutes, or when I call the backup service explicitly. I am aware that you already opened issue #1201; however, I am wondering if there is any intermediate step to reduce the number of words?
Some parameters that may be relevant for you to check:

Kp/MaxFeatures 1500
Vis/MaxFeatures 1000
FeatureType: GFTT/BRIEF

Let me know if I can provide more information to help you with support.

Thank you for your contributions, and I hope to receive your reply soon.

@alexk1976

alexk1976 commented Feb 15, 2024

Also waiting for this critical feature. We cannot load the full vocabulary due to the huge amount of memory it needs to allocate, and we don't manage to get the same accuracy when activating WM/LTM.

@hellovuong
Contributor Author

hellovuong commented Feb 15, 2024

@alexk1976 You can try setting Kp/NNStrategy to 0 or 2. With those strategies the binary descriptors don't need to be uncompressed to float to build the vocabulary, which saves about 50% of memory usage, at the cost of reduced nearest-neighbor search performance, of course. However, this feature is still needed.
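For intuition, here is a rough back-of-envelope sketch of the per-descriptor footprint, assuming 32-byte BRIEF descriptors (matching the GFTT/BRIEF setup above). The numbers are illustrative only, not measured from rtabmap; the actual savings depend on whether the binary copy is kept alongside the float one and on the search-index overhead, hence the ~50% figure mentioned here.

```python
# Rough memory estimate for a visual vocabulary's descriptor storage,
# assuming 32-byte binary BRIEF descriptors (illustrative, not measured).
NUM_WORDS = 4_000_000   # vocabulary size reported in this issue
BRIEF_BYTES = 32        # one BRIEF-256 descriptor = 256 bits = 32 bytes
FLOAT_BYTES = 4         # each byte expanded to a 32-bit float for kd-tree search

binary_mib = NUM_WORDS * BRIEF_BYTES / 1024**2
float_mib = NUM_WORDS * BRIEF_BYTES * FLOAT_BYTES / 1024**2

print(f"binary descriptors: {binary_mib:.0f} MiB")   # -> 122 MiB
print(f"expanded to float:  {float_mib:.0f} MiB")    # -> 488 MiB
```

Keeping the descriptors binary (NNStrategy 0/2 use matching on the binary form) avoids allocating the 4x larger float copy.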

@matlabbe
Member

Here some options:

  • Kp/MaxFeatures: 1500 is high, default is 500. You can fix your current database with:

    rtabmap-reprocess --Kp/MaxFeatures 500 input.db output.db
    
  • You can try Kp/ByteToFloat: true to save some RAM.

  • At the other extreme, we can use a fixed dictionary with fewer words (<1M): Using Precomputed Dictionary #942 (comment)
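To see roughly why the first and third options help, here is a hedged sketch of how descriptor storage scales with word count. It assumes 32-byte descriptors expanded to floats for search, and that the word count shrinks roughly in proportion to Kp/MaxFeatures; both are assumptions for illustration, and graph and index overhead are ignored.

```python
# Hedged estimate of vocabulary descriptor storage vs. word count,
# assuming 32-byte descriptors stored as 32-bit floats (illustrative only).
BYTES_PER_WORD = 32 * 4  # 32-byte descriptor expanded to 32 floats

def vocab_mib(num_words: int) -> float:
    """Approximate descriptor storage in MiB for a vocabulary of num_words."""
    return num_words * BYTES_PER_WORD / 1024**2

for label, words in [
    ("current (~4M words)", 4_000_000),
    ("fixed dictionary (<1M words)", 1_000_000),
    ("Kp/MaxFeatures 1500 -> 500 (~1/3 words, assumed)", 4_000_000 // 3),
]:
    print(f"{label}: ~{vocab_mib(words):.0f} MiB")
```

The point is only that descriptor memory is linear in the word count, so any option that caps or shrinks the vocabulary bounds that term.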

@hellovuong
Contributor Author

Thank you!

@alexk1976

alexk1976 commented Feb 18, 2024

I don't think that's a real solution. With an area a bit bigger than ours, it's impossible to load the dictionary even when we set MaxFeatures=500. If we don't use the FLANN kd-tree, we have performance issues. A fixed dictionary gives worse accuracy. We need a way to load the full graph and only part of the dictionary.

@hellovuong hellovuong reopened this Feb 18, 2024
@matlabbe
Member

@alexk1976 Agreed, it is kinda included in that other issue #1201 .
