
Welcome to the DeepDreamVideo wiki!

We will collect some helpful tips and advice here on how to best "dream" on video.

How the current version works:

  • It blends 50% (controlled with the --blend argument) of the dreamed-up version back into the next frame. This keeps the dreamed-up artifacts relatively continuous from frame to frame. It would also be easy to parallelize this way, but you would then have to keep track of these "blended" frames as well. The blend factor can also be looped between 50% and 100%, so that artifacts iteratively disappear as well as appear; otherwise the screen would just fill up with puppyslugs. See the sketch after this list.
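
A minimal sketch of the blending idea (the helper names `blend_frames` and `looped_blend` are illustrative assumptions, not the repo's actual API; frames are assumed to be float numpy arrays):

```python
import numpy as np

def blend_frames(prev_dreamed, next_frame, blend=0.5):
    # Mix the previous dreamed frame into the next raw frame so the
    # hallucinated artifacts carry over and stay continuous.
    return blend * prev_dreamed + (1.0 - blend) * next_frame

def looped_blend(frame_idx, low=0.5, high=1.0, period=30):
    # Sweep the blend factor between `low` and `high` over `period`
    # frames, letting old artifacts fade out while new ones appear.
    phase = (np.sin(2 * np.pi * frame_idx / period) + 1) / 2  # in [0, 1]
    return low + phase * (high - low)
```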

You can also control the dreamed-up artifacts by limiting the network to a particular layer, or (as I did for Fear and Loathing) by looping through the different layers one by one, passing a different layer as the "end=" argument at the deepdream() step. This of course makes the result more chaotic, but a greater variety of artifacts and transformations will occur; see the sketch below.
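A rough sketch of cycling through layers per frame. It assumes a Caffe GoogLeNet already loaded as `net`, the `deepdream()` function from Google's original notebook, and a list of frames as numpy arrays; the particular layer selection is just an example:

```python
# Example GoogLeNet layers to cycle through; deeper layers tend to
# produce more complex, object-like artifacts.
layers = ['inception_3b/output', 'inception_4a/output',
          'inception_4b/output', 'inception_4c/output',
          'inception_5a/output']

for i, frame in enumerate(frames):
    # Pick a different layer for each frame, cycling through the list.
    end = layers[i % len(layers)]
    frames[i] = deepdream(net, frame, end=end)
```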

For an overview of the kinds of transformations the different layers of different pretrained networks produce, see the visualizations at www.csc.kth.se/~roelof/deepdream/; a less extensive version, on the MIT Places dataset/net, is on YouTube.

A playlist of #deepdream videos is at: https://www.youtube.com/playlist?list=PL1z-xdY3wUl_yWU_KK49QLCe9UAp7_S3R
