
Performance advice #89

Open
ducha-aiki opened this issue Nov 16, 2023 · 3 comments

Comments

@ducha-aiki (Contributor) commented Nov 16, 2023

Although the adaptive stuff is very cool for per-image-pair evaluation, I have found that batching pairs in groups of 32 gives an order-of-magnitude speed-up. So if you do 3D reconstruction, just write a small batching script and enjoy the speed-up.
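
For concreteness, here is a minimal sketch of what such a batching script could look like, assuming the features have already been extracted with the same (padded) number of keypoints per image (how to get there is discussed further down the thread). It uses the repo's usual matcher({'image0': feats0, 'image1': feats1}) call; feats and pairs are placeholders for your own data structures, and the batch size of 32 just mirrors the number above.

```python
import torch
from lightglue import LightGlue

# Disable the adaptive depth/width mechanisms, which do not play well with
# batching (the README documents -1 as the "off" value for these flags).
matcher = LightGlue(features="superpoint",
                    depth_confidence=-1,
                    width_confidence=-1).eval().cuda()

def stack_feats(feat_list):
    # Concatenate per-image feature dicts along the batch dimension.
    # Assumes every image has the same (padded) number of keypoints.
    return {k: torch.cat([f[k] for f in feat_list], dim=0) for k in feat_list[0]}

def match_in_batches(feats, pairs, batch_size=32):
    # feats: {image_name: feature dict from the extractor, batch dim = 1}
    # pairs: list of (name0, name1) tuples to match
    results = []
    with torch.inference_mode():
        for i in range(0, len(pairs), batch_size):
            chunk = pairs[i:i + batch_size]
            data = {
                "image0": stack_feats([feats[a] for a, _ in chunk]),
                "image1": stack_feats([feats[b] for _, b in chunk]),
            }
            results.append(matcher(data))
    return results
```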

@sarlinpe (Member)

The adaptive mechanisms indeed don't yet support batching well. We could, however, exit conservatively once all pairs in the batch are ready, and prune only down to the largest number of keypoints retained across the batch.
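
As a rough, standalone illustration of that idea (not the actual LightGlue internals; the per-pair confidence tensor and keep masks below are stand-ins for the repo's adaptive-depth and pruning state):

```python
import torch

def batch_ready_to_exit(per_pair_confidence: torch.Tensor, threshold: float) -> bool:
    # Conservative early exit: stop iterating over layers only once *every*
    # pair in the batch has crossed the stopping threshold.
    return bool((per_pair_confidence > threshold).all())

def prune_indices_to_batch_max(keep_masks: torch.Tensor) -> torch.Tensor:
    # keep_masks: [B, N] boolean masks of the keypoints each pair would keep.
    # Prune only down to the largest count kept across the batch so the batch
    # stays rectangular; rows with fewer kept points are topped up with some
    # of their discarded keypoints.
    n_keep = int(keep_masks.sum(dim=1).max())
    order = torch.argsort(keep_masks.float(), dim=1, descending=True)
    return order[:, :n_keep]
```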

@udit7395

@ducha-aiki Sorry to ask this, but would it be possible to share a snippet of the batching code? If not, could you share some intuition on how one would go about it? I am using SuperPoint + LightGlue, and these are the things I have tried so far:

  • SuperPoint batching (failed): tried updating ImagePreprocessor and Extractor in lightglue/utils.py, but towards the end torch.stack fails inside SuperPoint because the keypoint tensors have different sizes
  • Ran SuperPoint on each image of the batch individually in a for loop and tried building a dictionary {image0: dict, image1: dict}, but this failed again because the keypoint tensors have different sizes

@ducha-aiki (Contributor, Author)

@udit7395 your second approach is correct:

"Ran SuperPoint on each image of the batch individually in a for loop and tried building a dictionary"

What you should also do is lower the detection threshold to zero and reduce the NMS radius, so that every image yields at least the target number of keypoints.
Finally, consider padding with random descriptors for the images that still end up with fewer keypoints than needed.
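
A rough sketch of that recipe, assuming the SuperPoint wrapper from this repo (the constructor arguments max_num_keypoints, detection_threshold, nms_radius and the extract() output keys are written from memory, so double-check them against lightglue/superpoint.py); the target of 2048 keypoints and the NMS radius are illustrative values, not recommendations:

```python
import torch
from lightglue import SuperPoint

NUM_KPTS = 2048  # fixed keypoint budget per image (illustrative value)

# Detection threshold at zero and a small NMS radius so that (almost) every
# image actually reaches NUM_KPTS detections.
extractor = SuperPoint(
    max_num_keypoints=NUM_KPTS,
    detection_threshold=0.0,
    nms_radius=2,
).eval().cuda()

def extract_fixed(image: torch.Tensor) -> dict:
    # image: [C, H, W] tensor with values in [0, 1]
    feats = extractor.extract(image.cuda())
    kpts = feats["keypoints"]          # [1, N, 2]
    desc = feats["descriptors"]        # [1, N, D]
    scores = feats["keypoint_scores"]  # [1, N]
    n = kpts.shape[1]
    if n < NUM_KPTS:
        # Pad the remaining slots with random keypoints and normalized random
        # descriptors so that every image ends up with exactly NUM_KPTS entries
        # and the per-image dicts can be stacked into a batch.
        pad = NUM_KPTS - n
        extent = feats["image_size"].to(kpts)  # [1, 2]; check (w, h) vs (h, w) ordering
        rand_kpts = torch.rand(1, pad, 2, device=kpts.device) * extent
        rand_desc = torch.nn.functional.normalize(
            torch.randn(1, pad, desc.shape[-1], device=desc.device), dim=-1)
        kpts = torch.cat([kpts, rand_kpts], dim=1)
        desc = torch.cat([desc, rand_desc], dim=1)
        scores = torch.cat([scores, scores.new_zeros(1, pad)], dim=1)
    return {"keypoints": kpts, "keypoint_scores": scores,
            "descriptors": desc, "image_size": feats["image_size"]}
```

With every image padded to the same NUM_KPTS, the per-image dicts can be concatenated along the batch dimension and matched in chunks, as in the sketch earlier in the thread.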
