
About the Binomial Deviance Loss #30

Open
Jeff-Zilence opened this issue Apr 11, 2019 · 3 comments

@Jeff-Zilence

Thank you for sharing your code. Nice work! I have a question about the Binomial Deviance Loss. In your previous paper, Binomial without mining achieves a Recall@1 of 64% on CUB200, but I cannot reproduce that result with your code. Could you provide more detail about your implementation, for example the values of alpha and beta?

@bnu-wangxun (Owner) commented Apr 11, 2019

alpha is 40 and beta is 2. The batch size is 60-80, with Adam at a 1e-5 learning rate.

However, performance on CUB is not stable (2% higher or lower is not surprising). On the other three datasets it is much easier to match the performance reported in the paper.
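
For reference, a minimal PyTorch sketch of a binomial deviance loss without mining, using the hyperparameters quoted above (alpha = 40, beta = 2). The margin of 0.5, the function name, and the masking details are assumptions for illustration, not the repository's exact implementation; the `normalize_terms` flag toggles the 1/alpha and 1/beta factors discussed later in this thread.

```python
import torch
import torch.nn.functional as F

def binomial_deviance_loss(embeddings, labels, alpha=40.0, beta=2.0,
                           margin=0.5, normalize_terms=False):
    # Unit-normalize embeddings and build the pairwise cosine-similarity matrix.
    emb = F.normalize(embeddings, dim=1)
    sim = emb @ emb.t()

    # Positive pairs share a label (self-pairs excluded); negative pairs do not.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye
    neg_mask = ~same

    # Binomial deviance terms: softplus(x) = log(1 + exp(x)).
    pos_loss = F.softplus(-alpha * (sim[pos_mask] - margin))
    neg_loss = F.softplus(beta * (sim[neg_mask] - margin))

    # Optional 1/alpha and 1/beta normalization factors.
    if normalize_terms:
        pos_loss = pos_loss / alpha
        neg_loss = neg_loss / beta
    return pos_loss.mean() + neg_loss.mean()
```

Training with a batch size of 60-80 and Adam at a 1e-5 learning rate matches the setup described above.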

@Jeff-Zilence (Author)

Thank you very much. I reproduced the result of Binomial without mining. I think the problem was the normalization terms (1/alpha and 1/beta): they are removed in your non-mining version, and the accuracy starts dropping when I add them back. But you do use these normalization terms in the mining version. Is that designed to balance the gradients of positive and negative pairs?
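
As an illustration of the gradient-balancing point (an editorial sketch, not code from the repository): with the 1/alpha factor, the derivative of the positive term with respect to the similarity is -sigmoid(-alpha * (s - margin)), bounded in magnitude by 1 for any alpha; without the factor the slope scales with alpha (and likewise with beta for negatives), so keeping the factors puts positive and negative gradients on a comparable scale. The margin of 0.5 is an assumed value.

```python
import torch

s = torch.tensor(0.3, requires_grad=True)
alpha, margin = 40.0, 0.5

# Normalized positive term: gradient stays bounded regardless of alpha.
loss = torch.log1p(torch.exp(-alpha * (s - margin))) / alpha
loss.backward()
print(s.grad)  # about -1.0

# Unnormalized positive term: gradient scales with alpha.
s.grad = None
loss = torch.log1p(torch.exp(-alpha * (s - margin)))
loss.backward()
print(s.grad)  # about -40.0
```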

@bnu-wangxun (Owner)

Yes! Exactly that!
