how to reproduce inception / FID scores? #12

Open
jcpeterson opened this issue Dec 19, 2017 · 3 comments
Comments

@jcpeterson

Which scores in the table are from your code? Do you have the commands to reproduce them exactly?

@jcpeterson
Author

bump

@takerum
Contributor

takerum commented Jan 18, 2018

Sorry for the late reply.

All scores shown in the README (except those in the "official" column) are calculated with the code in this repository.
The scores for each method are computed at the end of training, using the following extension functions:
https://github.com/pfnet-research/chainer-gan-lib/blob/master/common/evaluation.py#L82
https://github.com/pfnet-research/chainer-gan-lib/blob/master/common/evaluation.py#L154

We cannot reproduce the scores exactly because we did not fix the random seed when training each model, but you should be able to get similar scores for each method by running the script at https://github.com/pfnet-research/chainer-gan-lib/blob/master/example.sh
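
For context, here is a minimal sketch of how evaluation extensions like these might be attached to a Chainer trainer. The import path and the function names `calc_inception` and `calc_FID` are assumptions based on the linked lines, not verified against the repository; check `common/evaluation.py` for the actual signatures.

```python
# A minimal sketch, not taken from the repository: attaching periodic
# Inception score / FID evaluation extensions to a Chainer trainer.
# The names calc_inception and calc_FID are assumptions based on the
# linked lines in common/evaluation.py.
import chainer
from common.evaluation import calc_inception, calc_FID  # hypothetical import

def setup_trainer(updater, gen, max_iter=100000, eval_interval=10000):
    trainer = chainer.training.Trainer(
        updater, (max_iter, 'iteration'), out='result')
    # Each extension samples from `gen` and reports a score; the numbers
    # in the README would come from the final evaluation at end of training.
    trainer.extend(calc_inception(gen), trigger=(eval_interval, 'iteration'))
    trainer.extend(calc_FID(gen), trigger=(eval_interval, 'iteration'))
    return trainer
```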

@jcpeterson
Author

I'm getting varying scores that don't always preserve the ranking in the table. Are these supposed to be averaged over, say, 10 training runs per model? Otherwise they don't seem reliable at all.
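
(For reference, a minimal sketch of the kind of per-run aggregation being asked about, assuming a list of final scores from independent training runs; the values below are placeholders, not results from this repository:)

```python
# A minimal sketch of averaging final scores over independent training runs
# to report a mean and standard deviation.
import statistics

def summarize(scores):
    mean = statistics.mean(scores)
    std = statistics.stdev(scores) if len(scores) > 1 else 0.0
    return mean, std

inception_runs = [7.8, 8.1, 7.6]  # placeholder per-run Inception scores
print("Inception score: %.2f +/- %.2f" % summarize(inception_runs))
```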
