
Accuracy is always zero #20

Open
LXYTSOS opened this issue Jan 9, 2019 · 4 comments

Comments

@LXYTSOS

LXYTSOS commented Jan 9, 2019

The accuracy is always zero. Any idea what might have gone wrong?

@bnu-wangxun
Owner

The accuracy reported during training is meaningless, so it doesn't matter; if you read the loss function, you will see why. It is a leftover from old code, and I will fix it in the near future.

You will find that the test results are OK.
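To illustrate the point about the loss function: a lifted-structure-style loss is computed purely from pairwise embedding distances and labels, with no class logits involved, so a softmax-style accuracy has nothing meaningful to measure. A minimal sketch of such a loss (a simplified reading of the paper, not this repository's exact implementation):

```python
import numpy as np

def lifted_structure_loss(embeddings, labels, margin=1.0):
    """Simplified lifted structured loss (Song et al., CVPR 2016).

    Operates only on pairwise distances between embeddings;
    there are no class logits, so "accuracy" is undefined here.
    """
    n = len(labels)
    # Pairwise Euclidean distance matrix.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = labels[:, None] == labels[None, :]
    total, num_pos = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if not same[i, j]:
                continue
            # Log-sum-exp over the negatives of both anchors.
            neg_i = np.exp(margin - dist[i, ~same[i]])
            neg_j = np.exp(margin - dist[j, ~same[j]])
            J = np.log(neg_i.sum() + neg_j.sum()) + dist[i, j]
            total += max(J, 0.0) ** 2
            num_pos += 1
    return total / (2 * num_pos)

# Tiny example: 4 points in 2 well-separated classes.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
lab = np.array([0, 0, 1, 1])
print(lifted_structure_loss(emb, lab))
```

With well-separated clusters the hinge term is inactive and the loss is zero; pull the classes together and it becomes positive, which is the only signal this objective provides during training.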

@LXYTSOS
Author

LXYTSOS commented Jan 9, 2019

Thank you very much, I'll try it.

@LXYTSOS
Author

LXYTSOS commented Jan 9, 2019

I don't quite understand this code: gallery_feature, gallery_labels = query_feature, query_labels = features, labels, after which you calculate the similarity between query_feature and gallery_feature.
Also, can you explain the Recall_at_ks() function in evaluations/recall_at_k.py, especially what ks_dict stands for?

@bnu-wangxun
Owner

Please refer to the paper below:
[3] H. Oh Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, 2016.
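For context: assigning gallery_feature, gallery_labels = query_feature, query_labels = features, labels just means the same test set is used as both query and gallery, which is the standard retrieval protocol in that paper; ks_dict presumably maps each dataset name to its list of K values (e.g. 1, 2, 4, 8 for CUB). A minimal Recall@K sketch under those assumptions, with self-matches excluded (not the repository's exact code):

```python
import numpy as np

def recall_at_ks(features, labels, ks=(1, 2, 4, 8)):
    """Recall@K with the same set used as query and gallery.

    A query counts as a hit at K if any of its K nearest
    neighbours (itself excluded) shares its label.
    """
    # Cosine similarity via L2-normalised features.
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-matches
    # Neighbour indices sorted by decreasing similarity.
    order = np.argsort(-sim, axis=1)
    hits = labels[order] == labels[:, None]
    return {k: float(hits[:, :k].any(axis=1).mean()) for k in ks}

feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labs = np.array([0, 0, 1, 1])
print(recall_at_ks(feats, labs, ks=(1, 2)))
```

Masking the diagonal with -inf before ranking is what prevents each query from trivially retrieving itself, which would otherwise make Recall@1 always 100%.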
