
Using a video as a base for the results? #57

Answered by tin2tin
gerroon asked this question in Q&A

The open-source img2vid and vid2vid solutions are not very strong. If you select Zeroscope with a video or image as input and video as output, there is a Strip Power value, which is now exposed again. If you lower this value to e.g. 0.18, you'll get some text-prompt impact, but not a lot.
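
For reference, here is a rough Diffusers-only sketch of that Zeroscope vid2vid pass outside Blender/Pallaidium. The assumption (not confirmed against Pallaidium's code) is that Strip Power maps roughly to the pipeline's `strength` parameter; the prompt and file names are placeholders:

```python
# Sketch: Zeroscope vid2vid via Diffusers.
# Assumption: Pallaidium's "Strip Power" roughly corresponds to `strength`,
# i.e. how much of the input video is re-noised before the prompt is applied.
import torch
from diffusers import VideoToVideoSDPipeline
from diffusers.utils import export_to_video, load_video

pipe = VideoToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Zeroscope XL is trained around 1024x576, so resize the input frames.
frames = [f.resize((1024, 576)) for f in load_video("input.mp4")]

result = pipe(
    prompt="an oil painting of a city street at night",  # placeholder prompt
    video=frames,
    strength=0.18,  # low value: keeps the source video, prompt impact stays small
).frames[0]

export_to_video(result, "output.mp4")
```

With a low strength most of the source video survives the pass, which is why the prompt only has a limited effect at values like 0.18.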

The img2img video option results in flicker; I've done the paint videos with that. ControlNet run frame by frame will also flicker.
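
To make the flicker concrete: a per-frame img2img loop like the sketch below denoises every frame independently, so fine details are re-invented on each frame and nothing ties frame N to frame N+1 (checkpoint, prompt, and file names are just examples):

```python
# Naive frame-by-frame img2img: each frame is denoised on its own,
# so details are re-invented per frame and the result flickers.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import export_to_video, load_video

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

frames = load_video("input.mp4")
out = []
for frame in frames:
    # Even with a fixed seed there is no temporal constraint between frames.
    generator = torch.Generator("cuda").manual_seed(42)
    out.append(
        pipe(
            prompt="a watercolor painting",
            image=frame.resize((512, 512)),
            strength=0.5,
            generator=generator,
        ).images[0]
    )

export_to_video(out, "flickery_output.mp4")
```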

So we'll have to see if something like AnimateDiff or https://github.com/williamyang1991/Rerender_A_Video gets implemented in the Diffusers module, so that Pallaidium and the rest of open-source generative AI can get to the level of Pika or Runway's Gen-2, but we're not there yet.
