Why do you change the initial GoogLeNet loss1 output from 1000 to 128? Also, why do you change the lr_mult simultaneously? #10

Open
jsjs0827 opened this issue Aug 17, 2018 · 3 comments

Comments

@jsjs0827

No description provided.

@XinyiXuXD

The 128 issue: the author sets the embedding size of each submodel to 128 and, at test time, concatenates them into a 384-d ensemble feature vector that represents each sample. This embedding size follows "Deep metric learning via lifted structured feature embedding".
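
For illustration, a minimal prototxt sketch of this setup, with hypothetical layer and blob names (the repo's actual prototxt may differ): each submodel's embedding head is an InnerProduct layer whose num_output is changed from GoogLeNet's 1000-way classifier to 128, and a Concat layer stacks the three heads into the 384-d test-time descriptor.

```
# Hypothetical layer/blob names, shown only to illustrate the 128 -> 384 setup.
layer {
  name: "embedding_1"              # first submodel's embedding head
  type: "InnerProduct"
  bottom: "pool_1"
  top: "embedding_1"
  inner_product_param { num_output: 128 }  # was 1000 (ImageNet classes)
}
# embedding_2 and embedding_3 are defined analogously on the other branches.
layer {
  name: "ensemble_feature"         # used at test time
  type: "Concat"
  bottom: "embedding_1"
  bottom: "embedding_2"
  bottom: "embedding_3"
  top: "ensemble_feature"          # 3 x 128 = 384-d vector per sample
  concat_param { axis: 1 }         # concatenate along the channel axis
}
```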

@abcdvzz commented Nov 14, 2018

So why do you change the lr_mult simultaneously?

@XinyiXuXD

Because the final FC layer has no pretrained weights to fine-tune from, whereas the other layers are fine-tuned from a GoogLeNet model pretrained on ImageNet.
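
To make that concrete, here is a sketch of the usual Caffe fine-tuning pattern; the multiplier values are an assumption, not necessarily the repo's exact settings. Layers initialized from the ImageNet GoogLeNet keep small lr_mult values, while the newly added embedding FC starts from random weights and gets larger multipliers so it learns at a faster rate.

```
# Pretrained layer: weights are copied from the ImageNet GoogLeNet model,
# so they only need gentle updates.
layer {
  name: "conv1/7x7_s2"
  type: "Convolution"
  bottom: "data"
  top: "conv1/7x7_s2"
  param { lr_mult: 1 decay_mult: 1 }   # weights
  param { lr_mult: 2 decay_mult: 0 }   # bias
  convolution_param { num_output: 64 pad: 3 kernel_size: 7 stride: 2 }
}
# New embedding head: no pretrained weights exist for it, so its learning
# rate is boosted (the 10x/20x convention here is an assumed value).
layer {
  name: "embedding_1"                  # hypothetical name, as above
  type: "InnerProduct"
  bottom: "pool_1"
  top: "embedding_1"
  param { lr_mult: 10 decay_mult: 1 }  # weights learn 10x the base rate
  param { lr_mult: 20 decay_mult: 0 }  # bias learns 20x, no weight decay
  inner_product_param { num_output: 128 }
}
```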
