
Weird figure reconstruction results for newly trained model #47

Open
5agado opened this issue Jun 18, 2020 · 6 comments
5agado commented Jun 18, 2020

I trained a new model on a personal footwear dataset. Sample results during training looked good, but when I ran the make_figures script I obtained these weird, oversaturated/false-color results.

[image: reconstructions_0]

Any idea what's happening?


shahik commented Jun 26, 2020

Did you train it from scratch or fine-tune an existing model? What was your dataset size?


5agado commented Jun 29, 2020

Trained from scratch, ~100k images.
The weird thing is that the samples during training are good:

[image: sample_129_0]

@MagicalForcee

What are the steps to train a model on a custom dataset?


uhiu commented Dec 5, 2020

Hi, I also encountered this problem. Have you solved it?

[image]


uhiu commented Dec 6, 2020

I think I found the reason. In my case, it's because I stopped training at LOD=5, while the final LOD should be 6. So I adjusted the code in the demo Python file like this:

```python
# Z, _ = model.encode(x, layer_count - 1, 1)
Z, _ = model.encode(x, layer_count - 2, 1)
# because layer_count = 7 in the config file, and I want the LOD index to be 5, not 6
```

Accordingly, the decoder call should also be adjusted to match:

```python
model.decoder(x, 5, 1, noise=True)
```

Hope it helps. Stay safe.
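The adjustment above can be sketched as a small helper: given the config's `layer_count` and the LOD that training actually reached, compute the index to pass to `model.encode` / `model.decoder`. The helper name and the early-stop bookkeeping are my own assumptions for illustration, not part of the repo:

```python
def lod_index(layer_count, trained_lod=None):
    """LOD index to pass to model.encode / model.decoder.

    With `layer_count` blocks the final LOD is layer_count - 1.
    If training was stopped early at `trained_lod`, use that index
    instead so the demo script matches the trained weights.
    (Hypothetical helper illustrating the fix above.)
    """
    final_lod = layer_count - 1
    if trained_lod is None:
        return final_lod
    if trained_lod > final_lod:
        raise ValueError("trained_lod exceeds the model's final LOD")
    return trained_lod

# layer_count = 7, training stopped at LOD 5 -> pass 5 (= layer_count - 2)
print(lod_index(7, 5))  # 5
print(lod_index(7))     # 6
```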

@podgorskiy
Owner

Yes, what @uhiu says seems to be the most likely cause.
When training on custom data, make sure that the final LOD is consistent everywhere.

The first thing to check is the config. There are two relevant parameters; for example, from the bedroom config:

`DATASET.MAX_RESOLUTION_LEVEL: 8` — this means it will train up to 2**8 resolution (256).
`MODEL.LAYER_COUNT: 7` — this means the network will have 7 blocks. We start from 4x4, and each block doubles the resolution except for the first one, so the final output will be 4 * 2 ** (7 - 1), which is 256.

Basically, if you want resolution 2**x, then you should set `DATASET.MAX_RESOLUTION_LEVEL: x` and `MODEL.LAYER_COUNT: x - 1`.
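As a sanity check, the relationship between the two parameters can be written out in a few lines (the function names are mine, not from the repo):

```python
def output_resolution(layer_count):
    """Final output resolution: the first block is 4x4 and each
    subsequent block doubles the resolution."""
    return 4 * 2 ** (layer_count - 1)

def config_for_resolution(x):
    """Config values for a target resolution of 2**x
    (hypothetical helper mirroring the rule above)."""
    return {
        "DATASET.MAX_RESOLUTION_LEVEL": x,
        "MODEL.LAYER_COUNT": x - 1,
    }

print(output_resolution(7))        # 256, matching the bedroom config
print(config_for_resolution(8))    # MAX_RESOLUTION_LEVEL 8, LAYER_COUNT 7
```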

@5agado,
It seems that the very last layer still has randomly initialized weights.
