
artifice

about

using generative image-to-video ML models to hallucinate extended video sequences from a single source image.

you will need an API key from replicate.com, exported as the REPLICATE_API_TOKEN environment variable.
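For reference, the official replicate Node client can be pointed at that variable like this (a minimal sketch; hallucinate.js may wire things up differently):

import Replicate from "replicate";

// the client authenticates with the token exported above
const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });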

dependencies

tested on debian linux

this requires ffmpeg

this costs money to run (see replicate.com pricing)

put your source images in the ./images folder; mp4 output is written to the ./output folder.

sudo apt-get install ffmpeg
git clone https://github.com/m-onz/hallucinate
cd hallucinate
npm i
export REPLICATE_API_TOKEN=r8_BRU**********************************
node hallucinate.js

wait a long time!
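Roughly, the script loops over the images folder, sends each image to the image-to-video model, and downloads the resulting clip into the output folder. A minimal sketch of that loop, assuming the replicate npm client, an ES module, and the default folder layout (this is illustrative, not the exact contents of hallucinate.js):

import fs from "node:fs";
import path from "node:path";
import Replicate from "replicate";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });
const prompt = "distorted abstract wireframe mesh forms"; // see the full prompt below

for (const file of fs.readdirSync("./images")) {
  // encode the source image as a data URI for the model input
  const buffer = fs.readFileSync(path.join("./images", file));
  const dataURI = `data:image/png;base64,${buffer.toString("base64")}`;

  const output = await replicate.run(
    "ali-vilab/i2vgen-xl:5821a338d00033abaaba89080a17eb8783d9a17ed710a6b4246a18e0900ccad4",
    { input: { image: dataURI, prompt, max_frames: 33 } }
  );

  // the model returns the clip's URL (exact output shape depends on the model and client version)
  const videoUrl = Array.isArray(output) ? output[0] : output;
  const res = await fetch(videoUrl);
  fs.writeFileSync(
    path.join("./output", file.replace(/\.\w+$/, ".mp4")),
    Buffer.from(await res.arrayBuffer())
  );
}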

generating initial images

You can add images to the images folder from any source. I used this model

With the prompt:

shocking abstract 3D art in the style of andy warhol and francis bacon for a gallery that shocks the viewer exploring digital, glitch and modern culture, distorted abstract wireframe mesh forms

You can also edit the hallucinate.js script to configure the image-to-video model (here replicate is the configured client and dataURI is the base64-encoded source image):

const output = await replicate.run(
  "ali-vilab/i2vgen-xl:5821a338d00033abaaba89080a17eb8783d9a17ed710a6b4246a18e0900ccad4",
  {
    input: {
      image: dataURI,
      prompt: "shocking abstract 3D art in the style of andy warhol and francis bacon for a gallery that shocks the viewer exploring digital, glitch and modern culture, distorted abstract wireframe mesh forms",
      max_frames: 33
    }
  }
);
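The version hash after the colon pins the model to a specific release, and max_frames controls the length of each generated clip. One plausible way to keep extending a sequence (not necessarily what hallucinate.js does) is to grab the last frame of a generated clip with ffmpeg and feed it back in as the next source image; the file names here are hypothetical:

ffmpeg -sseof -1 -i output/clip_001.mp4 -update 1 -q:v 2 images/seed_002.jpg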

generate gifs

You can use this model to generate a .gif from an .mp4 video.
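Alternatively, since ffmpeg is already installed, you can convert a clip to a gif locally; a one-liner sketch with hypothetical file names:

ffmpeg -i output/clip_001.mp4 -vf "fps=12,scale=480:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" -loop 0 output/clip_001.gif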

an example video

You can see an example video of this output here.
