Video Music is a work in progress that aims to automatically generate music soundtracks for videos. This code does two things:
- Generates a simple techno soundtrack for a random video
- Alters that video so that its scenes change with the beat of the music
Example video with scene changes synced to the generated musical beat: https://goo.gl/JNcuql
It currently employs the following:
- ffmpeg for video analysis
- TensorFlow/Magenta for melody and beat creation
- SuperCollider for audio synthesis
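As a sketch of the ffmpeg analysis step, one common approach is ffmpeg's scene-change filter (`select='gt(scene,t)'`) combined with `showinfo`, parsing the `pts_time` values it logs. The threshold and function names below are illustrative assumptions, not necessarily what this repo's scripts use:

```python
import re
import subprocess

def parse_onsets(showinfo_log):
    """Extract pts_time values (seconds) from ffmpeg showinfo output."""
    return [float(t) for t in re.findall(r"pts_time:([0-9.]+)", showinfo_log)]

def scene_onsets(video_path, threshold=0.4):
    """Run ffmpeg's scene-change detector and return onset times in seconds.

    Assumes ffmpeg is on PATH; 0.4 is a commonly used scene threshold,
    not necessarily the one RunMe.sh uses.
    """
    cmd = [
        "ffmpeg", "-i", video_path,
        "-vf", f"select='gt(scene,{threshold})',showinfo",
        "-f", "null", "-",
    ]
    # showinfo prints its per-frame report to stderr
    log = subprocess.run(cmd, stderr=subprocess.PIPE, text=True).stderr
    return parse_onsets(log)
```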
Currently, musical variation comes from TensorFlow/Magenta in the form of differing melodies and rhythms (though it still leaves much to be desired). The pipeline:
- Accepts a random video as a seed (1-2 minute videos with several scene changes are ideal, e.g. GoPro videos)
- Uses ffmpeg to extract onset times of major scene changes
- Determines a best approximate "beat" of the video based on scene change onset times
- Performs a kind of video quantization by altering the speed of each scene based on this approximated beat so that the scene's length becomes a multiple of the beat
- Concatenates the altered segments into a new output video with scene changes at regular beats
- Uses TensorFlow/Magenta to create melodies and rhythms set to the video's scene-change beat
- Mixes the generated music with the output video
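The beat-approximation and quantization steps above can be sketched as follows. This is a minimal illustration under assumed heuristics (median inter-onset gap as the beat; snapping each scene to the nearest nonzero beat multiple); the repo's actual algorithm may differ, and all names are hypothetical:

```python
from statistics import median

def approximate_beat(onsets):
    """Estimate a beat period (seconds) from scene-change onset times
    as the median gap between consecutive onsets."""
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    return median(gaps)

def speed_factors(onsets, beat):
    """For each scene (interval between consecutive onsets), return the
    playback-speed factor that makes its duration a nonzero multiple
    of the beat. >1 speeds the scene up, <1 slows it down."""
    factors = []
    for a, b in zip(onsets, onsets[1:]):
        length = b - a
        target = max(1, round(length / beat)) * beat  # nearest nonzero multiple
        factors.append(length / target)
    return factors

onsets = [0.0, 2.1, 3.9, 8.2]    # example scene-change times (seconds)
beat = approximate_beat(onsets)  # beat is the median inter-onset gap
factors = speed_factors(onsets, beat)
```

Each scene would then be retimed by its factor (e.g. with ffmpeg's `setpts` filter) before concatenation, so cuts land on the beat grid.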
Dependencies:
- TensorFlow [needs version]
- Magenta [needs version]
- ffmpeg and ffprobe [needs version]
- SuperCollider [needs version]
Within the video_music root directory, execute:
```shell
bash RunMe.sh path/to/video.mp4 path/to/magenta/ path/to/magenta/attention_bundle.mag
```
Future work:
- Much more sophisticated musical variation and structure (keeping to a 2-minute track limit)
- Assess video content with Google's Cloud Vision or Amazon's Rekognition and pick from a list of possible "genres" based on that content
- Productize the app in a web interface that accepts a video upload
- Dockerize this app
- Write documentation for TensorFlow/Magenta integration