
For training on 512x512-resolution images, is the 16x16 codebook size also used? #54

Closed
YilanWang opened this issue May 11, 2024 · 2 comments

@YilanWang

When training at 512x512, should I downsample one more time and use a 16x16 latent size, or extend v_patch_nums=(1, 2, 3, 4, 5, 6, 8, 10, 13, 16) to v_patch_nums=(1, 2, 3, 4, 5, 6, 8, 10, 13, 16, ..., 32), or just use v_patch_nums=(1, 2, 3, 4, 5, 6, 8, 10, 13, 16) as-is and convolve down to a latent size of 32 at the end?
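For reference, here is the resolution arithmetic behind the three options in runnable form. The helper name is made up for illustration, and the 16x downsampling factor is inferred from the 256x256 setup ending in a 16x16 latent; neither is taken from the repo's code.

```python
# Sketch of the resolution arithmetic behind the three options above.
# final_latent_size is a hypothetical helper, not a VAR API.

def final_latent_size(image_size: int, downsample_factor: int) -> int:
    assert image_size % downsample_factor == 0
    return image_size // downsample_factor

print(final_latent_size(256, 16))  # 16 -> matches v_patch_nums ending at 16

# Option 1: add one more downsampling stage (32x total), so 512x512 still
# yields a 16x16 latent and v_patch_nums=(1, 2, 3, 4, 5, 6, 8, 10, 13, 16)
# can be reused unchanged.
print(final_latent_size(512, 32))  # 16

# Options 2/3: keep 16x downsampling, so 512x512 yields a 32x32 latent,
# which needs v_patch_nums extended up to 32 (the intermediate scales are
# left unspecified in the thread).
print(final_latent_size(512, 16))  # 32
```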

@keyu-tian
Collaborator

@YilanWang
Author

Thanks! I saw it. I've found that when reproducing this, if the channel count is relatively small (i.e., a smaller network), the multi-scale VQ is very hard to get to converge. I'm not sure whether there is a bug in my reproduction. Hoping you open-source the VAE reproduction soon.
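For context on what is being reproduced, below is a minimal, self-contained sketch of multi-scale residual quantization in the spirit of VAR's tokenizer. This is an assumption about the reproduction, not the repo's actual code: the straight-through estimator, the shared conv after upsampling, and all training losses are omitted.

```python
import torch
import torch.nn.functional as F

def multiscale_quantize(f, codebook, v_patch_nums=(1, 2, 3, 4, 5, 6, 8, 10, 13, 16)):
    """f: (B, C, H, W) continuous latent; codebook: (V, C) embedding table."""
    B, C, H, W = f.shape
    f_hat = torch.zeros_like(f)   # running reconstruction
    residual = f.clone()          # what remains to be encoded
    tokens = []
    for pk in v_patch_nums:
        # Downsample the current residual to this scale.
        r_k = F.interpolate(residual, size=(pk, pk), mode='area')
        # Nearest-codebook lookup per spatial position.
        flat = r_k.permute(0, 2, 3, 1).reshape(-1, C)
        idx = torch.cdist(flat, codebook).argmin(dim=1)
        tokens.append(idx.view(B, pk, pk))
        z_k = codebook[idx].view(B, pk, pk, C).permute(0, 3, 1, 2)
        # Upsample back and subtract, so later scales model the remainder.
        z_up = F.interpolate(z_k, size=(H, W), mode='bicubic')
        f_hat = f_hat + z_up
        residual = residual - z_up
    return f_hat, tokens

# Toy usage: a 16x16 latent with 32 channels and a 4096-entry codebook.
f = torch.randn(2, 32, 16, 16)
codebook = torch.randn(4096, 32)
f_hat, tokens = multiscale_quantize(f, codebook)
print([t.shape for t in tokens])  # token maps at 1x1 up through 16x16
```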
