Hi (I can never thank you enough for this setup!),
I have been playing with the animation feature, which is great. I love that you can guide the animation using Blender; it's so intuitive. However, the Colab notebooks, such as this one https://colab.research.google.com/github/deforum/stable-diffusion/blob/main/Deforum_Stable_Diffusion.ipynb, let you interpolate between AI renders and choose 2D/3D modes, etc. I think they look at the previous generated image, check it against where it has to get to, and then morph the images progressively over several frames. This produces much more consistent changes between frames. Whereas currently, if I understand correctly, your system just renders from the latest Blender render image output?
Having more animation controls would be amazing! Thanks!
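Roughly, the feedback loop I mean (each new frame conditioned on the previous AI output, not only the fresh Blender render) could be sketched like this. This is only an illustration with numpy arrays: `diffusion_step` is a stand-in for a real img2img call, and the 50/50 blend is an assumption, not what Deforum actually does:

```python
import numpy as np

def diffusion_step(init_image, strength, rng):
    """Stand-in for an img2img call: perturbs the init image.

    In a real pipeline this would be Stable Diffusion img2img, where
    `strength` controls how far the result may drift from `init_image`.
    """
    noise = rng.standard_normal(init_image.shape)
    return (1.0 - strength) * init_image + strength * noise

def render_sequence(blender_frames, strength=0.3, seed=0):
    """Generate frames by blending each Blender render with the
    previous AI output before running the (stand-in) diffusion step.

    Feeding the previous AI frame back in is what keeps the output
    coherent from frame to frame, compared to diffusing each Blender
    render independently.
    """
    rng = np.random.default_rng(seed)
    out_frames = []
    prev_ai = None
    for frame in blender_frames:
        if prev_ai is None:
            init = frame
        else:
            # Condition on a mix of the new render and the last AI frame
            # (the 50/50 weighting here is arbitrary).
            init = 0.5 * frame + 0.5 * prev_ai
        prev_ai = diffusion_step(init, strength, rng)
        out_frames.append(prev_ai)
    return out_frames
```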
Additional information
No response
Yeah, this would be a really valuable and lovely thing to implement. You're exactly right that it currently just sends each frame to Stable Diffusion, so the result isn't stable between frames. I've seen some really amazing algorithms that actually create a depth map / textured point cloud of each frame, and then do 3d camera moves through the newly created space to interpolate between frames.
This is beyond what I'm personally able to do with AI Render, but if anyone would like to contribute/collaborate, it would be an amazing feature (or suite of features)!
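For anyone who wants to pick this up: the depth-based interpolation amounts to unprojecting each pixel to a 3D point using the frame's depth map and camera intrinsics, moving the camera, and projecting back to pixels. A minimal numpy sketch under a pinhole camera assumption, with no occlusion or hole handling (both of which a real implementation needs):

```python
import numpy as np

def reproject(depth, K, R, t):
    """Compute where each source pixel lands in a moved camera.

    depth : (H, W) depth map for the source frame
    K     : (3, 3) pinhole camera intrinsics
    R, t  : rotation (3, 3) and translation (3,) of the new camera
            relative to the source camera

    Returns an (H, W, 2) array of target pixel coordinates (x, y).
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)

    # Unproject: back-project each pixel ray and scale by its depth.
    rays = pix @ np.linalg.inv(K).T
    points = rays * depth[..., None]

    # Transform into the new camera's frame and project back to pixels.
    moved = points @ R.T + t
    proj = moved @ K.T
    return proj[..., :2] / proj[..., 2:3]
```

Warping the previous AI frame through this mapping before feeding it back as the img2img init is, as I understand it, the core of the 3D mode in notebooks like Deforum's.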