SVD can now reach really fast speeds step-wise, but it's limited by the slow speed of the VAE. Is there any way to distill the spatio-temporal autoencoder the same way as the regular autoencoder?
Definitely possible! I worry there's a fairly narrow band of usefulness for a TAESVD, though, since for cheap previews you can run TAESD per-frame and for max quality you should just run the SVD VAE.
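To make the cheap-preview path concrete, here's a minimal sketch of per-frame TAESD decoding. The `decode_frame` function is a hypothetical stand-in for a single-image TAESD decode (the real one maps a 4-channel latent at 1/8 resolution to a 3-channel image); the point is just that each frame is decoded independently, with no temporal layers involved.

```python
import numpy as np

def decode_frame(latent):
    """Hypothetical stand-in for a single-image TAESD decode.

    The real TAESD maps a (4, H/8, W/8) latent to a (3, H, W) image;
    here we just produce an array of the right shape.
    """
    c, h, w = latent.shape
    return np.zeros((3, h * 8, w * 8)) + latent.mean()

def decode_video_per_frame(latents):
    """Decode a (T, 4, H/8, W/8) video latent frame by frame.

    Frames are treated as independent images, which is the cheap-preview
    path described above -- not the full SVD VAE with temporal layers.
    """
    return np.stack([decode_frame(z) for z in latents])

video_latents = np.random.randn(14, 4, 8, 8)  # 14 frames of 64x64 video
frames = decode_video_per_frame(video_latents)
print(frames.shape)  # (14, 3, 64, 64)
```

This trades temporal consistency for speed: each frame may flicker slightly relative to its neighbors, which is fine for previews but not for final output.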
(I started training a TAESVD with temporal layers a few months ago - top is GT, middle is per-frame TAESD, and bottom is TAESD with temporal layers - but I haven't gotten around to finishing it)
Since each frame after the first should mostly be a difference from the previous frame (not an entirely new frame), would it be viable to first decode the first frame with the original SVD VAE (which should take about the same memory/speed as the undistilled regular SD VAE in this context), then decode the rest with a VAE trained specifically on the difference between the current and previous frame in latent space, outputting a difference map to be applied to the last decoded frame? Kind of like a video codec. It moves away from the simplicity of decoding normally, though.
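The codec-style scheme above could be sketched like this. Both `full_vae_decode` and `delta_decode` are hypothetical placeholders (no such delta decoder exists in this repo); the sketch only shows the control flow: one expensive keyframe decode, then cheap residual decodes chained off the previous output.

```python
import numpy as np

def full_vae_decode(latent):
    """Hypothetical stand-in for the original SVD VAE decoding one keyframe."""
    c, h, w = latent.shape
    return np.zeros((3, h * 8, w * 8)) + latent.mean()

def delta_decode(prev_frame, latent_delta):
    """Hypothetical stand-in for a decoder trained on latent differences.

    It would emit a pixel-space difference map that gets applied to the
    previously decoded frame, like motion residuals in a video codec.
    """
    _, h, w = latent_delta.shape
    difference_map = np.zeros((3, h * 8, w * 8)) + latent_delta.mean()
    return prev_frame + difference_map

def codec_style_decode(latents):
    """Decode frame 0 with the full VAE, then the rest as chained deltas."""
    frames = [full_vae_decode(latents[0])]
    for t in range(1, len(latents)):
        latent_delta = latents[t] - latents[t - 1]
        frames.append(delta_decode(frames[-1], latent_delta))
    return np.stack(frames)

latents = np.random.randn(14, 4, 8, 8)  # 14 frames of 64x64 video
video = codec_style_decode(latents)
print(video.shape)  # (14, 3, 64, 64)
```

One caveat with this design: because each frame is built on the previous decoded frame, any error in the delta decoder accumulates over the sequence, much like drift between keyframes in a real video codec.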