
Question About Reproducing LOST Evaluation Metrics for Registers Paper #381

Open
katieluo88 opened this issue Feb 28, 2024 · 2 comments
katieluo88 commented Feb 28, 2024

Hi! Thank you for the amazing work on registers and for including their checkpoints. I was trying to reproduce the results from Table 3 of the "Vision Transformers Need Registers" paper: LOST unsupervised object discovery using the ViT's features. For some reason, I'm unable to reproduce the numbers for DINOv2+reg on any of the datasets. We get ~35.94 on the VOC12 dataset and ~23.39 on the COCO dataset using the official LOST implementation and the official DINOv2+reg checkpoints (from this GitHub codebase).

We suspect it may be due to the distillation process; could the authors confirm whether this is the case? Could they possibly share the evaluation settings used for the LOST object discovery results?
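For context, the seed-selection step of LOST (the part that consumes the ViT patch features) can be sketched as below. This is a paraphrase of the algorithm described in the LOST paper, not the authors' evaluation code; which features are fed in (e.g. the keys of the last attention layer versus the normalized patch tokens) is exactly the kind of setting that may differ between setups.

```python
import numpy as np

def lost_seed(feats: np.ndarray):
    """Pick the LOST foreground seed from an (N, D) matrix of patch features.

    LOST selects the patch whose feature is positively correlated with the
    fewest other patches, on the assumption that foreground patches are the
    minority in the image.
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = f @ f.T                    # (N, N) patch-to-patch cosine similarity
    degree = (sims >= 0).sum(axis=1)  # positive-correlation count per patch
    seed = int(degree.argmin())       # least-connected patch is the seed
    mask = sims[seed] >= 0            # patches positively correlated with it
    return seed, mask
```

The returned mask is then grown into a box around the seed in the full algorithm; the discrepancy reported above could plausibly come from the feature-extraction stage rather than from this selection logic.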

Many thanks!

@nguyenthekhoig7

Hi, same issue here. I ran LOST and there seems to be no difference between DINOv2 and DINOv2+reg. I'm wondering what I should change to reproduce the results.

sceddd commented May 17, 2024

Same here. Are there any papers on this topic that I can take a look at?
