one-level network for 64x64 image size datasets #2

Open · jcpeterson opened this issue Jun 25, 2017 · 11 comments

jcpeterson commented Jun 25, 2017

Any chance you can add this?

igul222 (Owner) commented Jun 26, 2017

I can put something together, but we ran experiments with this and found it didn't perform nearly as well as the two-level model. Do you want it anyway?

jcpeterson (Author) commented

Yes, that would be much appreciated. I'd like to compare performance on a few datasets under several conditions, especially CelebA, which I can already load successfully as JPG files. Thanks!

jcpeterson (Author) commented

It doesn't have to run with this codebase if you have another one that already works.

igul222 (Owner) commented Jun 29, 2017

Just updated the code with a model that should work ('64px_big_onelevel'), but I haven't actually tested it. Let me know how it goes if you do.

jcpeterson (Author) commented

Thanks Ishaan! Likely citation incoming ;)

I will give this a shot and let you know if it runs / how it performs.

I don't know if you're aware, but this architecture seems to be state-of-the-art for smallish datasets with manageable variation (more complex than MNIST, but less complex than CIFAR, where the data-to-complexity ratio seems off to me), outperforming even the best new GANs. That doesn't really come across in the paper.

igul222 (Owner) commented Jun 29, 2017 via email

jcpeterson (Author) commented

Sample quality, but also a lower probability of artifacts that give away that the images aren't "real". Agreed on LSUN, and clearly you tried a wide range of datasets, far more than most papers do.

I can show you some examples soon enough.

jcpeterson (Author) commented

OK, it looks like training works with the new script, thanks! The sampling code does seem different, though. The samples are int32 (isn't this wrong?), and it samples 8 points and then adds variability to those?

jcpeterson (Author) commented

Actually, it seems the int32 is for the output images, so that's fine. However, I'm still not sure where the variability in the rows of the output image is coming from.
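
To illustrate what I mean about the dtype (just a toy NumPy sketch, not this repo's save routine): the sampled int32 values are still ordinary pixel intensities in [0, 255], so only a cast is needed before writing the image out.

```python
import numpy as np
from PIL import Image

# Toy sketch, not this repo's code: int32 samples are still plain pixel
# intensities in [0, 255], so only the dtype needs converting before
# the image is written out.
samples = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.int32)
Image.fromarray(samples.astype(np.uint8)).save('samples.png')
```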

igul222 (Owner) commented Jul 8, 2017

Can you point to the sampling code you're talking about? The code that gets executed should be lines 855-902.

jcpeterson (Author) commented

Yes, that's right. When this runs, I get 8 samples (rows) and then 8 variations (columns) on those samples. I'd like sampling behavior like the two-level network's, with the `logits *= 10000.` line added. The redundant columns are easy to remove, so that's not really a problem.
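
For context, here's a minimal NumPy sketch of what that line does (not the actual sampling code from this repo; `sample_pixel` is just a stand-in helper): scaling the logits by a large constant before the softmax collapses the distribution onto its mode, so sampling becomes effectively argmax.

```python
import numpy as np

def sample_pixel(logits, sharpen=False):
    """Draw one pixel value from softmax logits.

    With sharpen=True the logits are scaled by a large constant first,
    so the softmax collapses onto its mode and sampling is effectively
    argmax -- the same idea as the `logits *= 10000.` line.
    """
    if sharpen:
        logits = logits * 10000.
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

logits = np.random.randn(256)              # a 256-way categorical over pixel values
print(sample_pixel(logits))                # stochastic sample
print(sample_pixel(logits, sharpen=True))  # ~always the argmax
```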
