
Can you share the hyperparameters to reproduce the result on CUB-200-2011? #21

Open · bjkite opened this issue Jan 9, 2019 · 5 comments

bjkite commented Jan 9, 2019

I used the hyperparameters given in the run script to reproduce the WeightLoss result, but I cannot get the same result as reported. My best result is as follows:

Epoch-130 0.6411 0.7437 0.8301 0.8981 0.9458 0.9731

Would you please give me some help?

bnu-wangxun (Owner) commented

First, the performance on CUB is not very stable; you just need to run more iterations, multiple times.
Second, the batch size should be 70-80 and num_instance should be 5, with Adam at a 1e-5 learning rate. My result is always higher than 0.65 on the CUB dataset.

I also suggest trying the SGD optimizer in place of Adam: I tried SGD on Cars and got slightly better performance than Adam. A sketch of these settings is shown below.
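For concreteness, here is a minimal sketch of those settings in PyTorch. The stand-in model and the constant names are illustrative assumptions, not this repo's exact interface:

```python
import torch
import torch.nn as nn

# A minimal sketch of the suggested CUB settings. The stand-in model below
# is a placeholder for illustration, not this repo's actual network.
BATCH_SIZE = 75      # suggested range: 70-80
NUM_INSTANCES = 5    # images sampled per class in each mini-batch

# Stand-in for the repo's backbone + embedding head.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))

# Adam with a 1e-5 learning rate, as recommended above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Alternative the owner found slightly better on Cars
# (the SGD learning rate and momentum here are guesses):
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-5, momentum=0.9)
```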


besran commented Jan 11, 2019

How about the parameters for In-Shop?

bnu-wangxun (Owner) commented

A larger batch size (> 200) is already enough. No other tricks.
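As a sketch, assuming everything else stays as in the CUB settings above (the dummy data here is purely illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy features standing in for In-Shop images (illustrative only).
features = torch.randn(1000, 512)
labels = torch.randint(0, 50, (1000,))

# The only In-Shop change mentioned: a batch size above 200.
loader = DataLoader(TensorDataset(features, labels), batch_size=256, shuffle=True)
```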

bnu-wangxun (Owner) commented Feb 8, 2019

We reran our script, and the performance with an embedding size of 512 is as below:
Epoch-200 0.6595 0.7601 0.8427 0.9067 0.9465 0.9738

bnu-wangxun (Owner) commented Feb 18, 2019

@bjkite I think the reason you are not getting close to the reported performance is the hard mining: you should change hard_mining in the loss at /losses/Weight.py:

self.hard_mining = hard_mining

Make sure self.hard_mining is not None; then you will get similar performance. I will fix this problem in the next few days.
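A hedged sketch of that fix, with the surrounding class reconstructed from context rather than copied from /losses/Weight.py:

```python
import torch.nn as nn

# Reconstructed for illustration; only the hard_mining handling reflects
# the fix described above, the rest of the class is a placeholder.
class Weight(nn.Module):
    def __init__(self, hard_mining=True, **kwargs):
        super().__init__()
        # Key point: self.hard_mining must not be None, so that the
        # hard-example branch of the loss is actually used.
        self.hard_mining = hard_mining

# Construct the loss with a truthy hard_mining argument:
criterion = Weight(hard_mining=True)
```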
