Since the VAE of Open-Sora is different from Latte's, can the weights from Latte be used directly? Or did your team train a Latte model from scratch?
Yes, we use the Latte weights directly. The model adapts very quickly, and the transition is visible within about 500 steps. This is consistent with pixart-sigma.
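For anyone landing here, a minimal sketch (not Open-Sora's actual training code) of what "use the weights directly" can look like: load the pretrained backbone with `strict=False`, then briefly fine-tune on latents from the new VAE so the backbone adapts to the new latent distribution. The model class, sizes, checkpoint path, and objective below are all illustrative assumptions.

```python
# Minimal sketch, NOT Open-Sora's actual code: initialize a DiT-style
# backbone from Latte weights, then briefly fine-tune on latents from
# the new VAE. Class, sizes, and checkpoint path are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiTBackbone(nn.Module):
    """Toy stand-in for the Latte transformer (real models are far larger)."""
    def __init__(self, latent_channels=4, hidden=128, depth=4, heads=8):
        super().__init__()
        self.proj_in = nn.Linear(latent_channels, hidden)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
            for _ in range(depth)
        )
        self.proj_out = nn.Linear(hidden, latent_channels)

    def forward(self, x):                       # x: (batch, tokens, channels)
        x = self.proj_in(x)
        for blk in self.blocks:
            x = blk(x)
        return self.proj_out(x)

model = DiTBackbone()

# In practice: state = torch.load("latte.ckpt", map_location="cpu")
state = DiTBackbone().state_dict()              # stands in for the checkpoint
# strict=False tolerates layers that differ between the two codebases.
missing, unexpected = model.load_state_dict(state, strict=False)

# Brief fine-tune on latents produced by the *new* VAE; per the thread,
# the backbone visibly adapts within ~500 steps.
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
for step in range(500):
    latents = torch.randn(2, 16, 4)             # placeholder for new-VAE latents
    loss = F.mse_loss(model(latents), latents)  # placeholder objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```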
Thanks for your reply! But if we want to scale up the parameter count, what would you suggest doing first? Just train a larger Latte model from scratch?
I think a pixart-alpha-style model would need to be retrained.
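For context on what "scale up the parameter count" typically means for DiT-family backbones (Latte and pixart-alpha both build on a DiT-XL-sized transformer), here is a rough back-of-envelope estimate. The XL row matches the published DiT-XL config; the larger row is a purely hypothetical target, not anything Open-Sora has announced.

```python
# Back-of-envelope parameter estimate for DiT-style configs. The XL row
# matches the published DiT-XL config (~0.67B params); "3B" is a purely
# hypothetical scale-up used for illustration.
CONFIGS = {
    "DiT-XL": dict(hidden=1152, depth=28, heads=16),
    "DiT-3B (hypothetical)": dict(hidden=1920, depth=40, heads=30),
}

def approx_params(hidden: int, depth: int, heads: int) -> int:
    # Per block: attention ~4*h^2, MLP ~8*h^2, adaLN modulation ~6*h^2.
    # Head count shifts FLOP/memory trade-offs but not parameter count.
    return 18 * depth * hidden * hidden

for name, cfg in CONFIGS.items():
    print(f"{name}: ~{approx_params(**cfg) / 1e9:.2f}B params")
```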