We are glad to announce the release of the first experimental short film of the Computer Vision Cinema system, developed with the help of theRANDOMlab. The raw footage was processed by the system, which determined the cuts and framing of the output based on movement-detection algorithms such as frame differencing, background subtraction, and brightness tracking.
Life as we know it is ruled by rhythms and cycles from the very beginning. At the end of each cycle, with greater experience, we stand at a fork in our path. Although we think we are in control, the branch we take depends on rules we do not know. In mathematics, such a situation might be called random, i.e., having a result "which cannot be determined but only described probabilistically". Although Humanity's knowledge of the Universe grows every day, the field is infinite: every truth we reach opens a new set of questions. If there is a Truth, is it at our fingertips?

This film is part of Computer Vision Cinema, an experimental research project that applies Computer Vision techniques to the process of filmmaking, in search of new audiovisual expressive languages. The footage is processed by a computer that determines the cuts and framing of the output based on movement-detection algorithms such as frame differencing, background subtraction, and brightness tracking.
Now, on to the bits and pieces: the making of.
As you can see in the following video, the original footage, recorded with a Canon 600D/T3i, is a handheld shot, which doesn't help if you want reliable detection with the frame-differencing and background-subtraction techniques.
So the selected fragment of the footage had to be stabilized. We ran stabilization tests with After Effects (licensed to our University), as well as with free software: Cinelerra and Blender (thanks to Francois Tarlier's tutorial at http://www.francois-tarlier.com/blog/2d-tracking-tutorials-with-blender/). We liked the Blender result best, so we used it to feed the system. This is a low-res version of the stabilized footage fragment fed into the CvCinema system:
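Blender's 2D tracker did the heavy lifting here, but the core idea of stabilization can be sketched as a translation-only case: estimate each frame's (dy, dx) offset against a reference and roll it back. Below is a minimal NumPy sketch using phase correlation; it is our own illustration of the principle, not Blender's actual method, and the function names are ours.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer (dy, dx) correction that re-aligns `frame` to
    `ref`, using FFT-based phase correlation (translation only)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(frame)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-9          # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past half the frame size wrap around; map them to negatives.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def stabilize(ref, frame):
    """Undo the estimated translation of `frame` relative to `ref`."""
    dy, dx = estimate_shift(ref, frame)
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

# Synthetic "shaky" frame: the reference circularly shifted by (3, -5).
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shaky = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)
print(estimate_shift(ref, shaky))
```

Real stabilizers also handle rotation, scale, and sub-pixel motion, which is why a proper tracker (Blender's, in our case) is the practical choice.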
After tuning the detection parameters, we recorded the outputs and detections of several algorithms and configurations to be used in the final edit.
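Frame differencing, the simplest of the detection techniques mentioned above, thresholds the per-pixel brightness change between consecutive frames; the bounding box of the changed pixels can then drive the output framing. A minimal sketch of the idea (our own illustration, not the CvCinema code; the threshold and margin values are arbitrary):

```python
import numpy as np

def frame_difference_mask(prev, curr, threshold=25):
    """Binary mask of pixels whose brightness changed by more than `threshold`
    between two consecutive 8-bit grayscale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > threshold

def motion_framing(mask, margin=2):
    """Bounding box (x0, y0, x1, y1) around detected motion, padded by
    `margin` pixels and clamped to the frame, or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (max(xs.min() - margin, 0),
            max(ys.min() - margin, 0),
            min(xs.max() + margin, mask.shape[1] - 1),
            min(ys.max() + margin, mask.shape[0] - 1))

# Two synthetic frames: a bright 10x10 block moves one pixel to the right.
prev = np.zeros((40, 64), dtype=np.uint8)
curr = np.zeros((40, 64), dtype=np.uint8)
prev[10:20, 10:20] = 200
curr[10:20, 11:21] = 200

mask = frame_difference_mask(prev, curr)
print(motion_framing(mask))
```

Note how only the leading and trailing edges of the block register as change, which is exactly why a shaky camera floods this kind of detector with false motion.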
Then the footage was arranged from the most abstract (least information) to the most realist (most information), to sustain the audience's expectation. We used the free video editor Kdenlive.
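The post doesn't specify how "amount of information" was judged; one plausible proxy is the Shannon entropy of each clip's brightness histogram. The sketch below is a hypothetical illustration of ordering clips that way, not the method actually used in the edit:

```python
import numpy as np

def shannon_entropy(img, bins=64):
    """Shannon entropy (bits) of an image's brightness histogram -- a rough
    proxy for how much visual information a frame carries."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Three stand-in "clips" (single frames, values in 0..1), from abstract to busy.
rng = np.random.default_rng(1)
clips = {
    "flat":  np.full((32, 32), 0.5),                 # no detail at all
    "ramp":  np.tile(np.linspace(0, 1, 32), (32, 1)),  # smooth gradient
    "noise": rng.random((32, 32)),                   # maximal detail
}

# Least information first, most information last.
order = sorted(clips, key=lambda name: shannon_entropy(clips[name]))
print(order)
```

For real clips one would average the entropy over many frames, but the ordering principle is the same.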
A low-res version of the final video was fed into a PureData patch to generate the soundtrack. The patch analyzes the RGB levels of each frame and maps them to the parameters of a synthesis generator.
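The Pd patch itself isn't shown in the post. As a rough Python stand-in for the idea, the mean RGB levels of a frame can be linearly mapped onto synthesis parameters. The channel-to-parameter assignment and the ranges below are illustrative assumptions, not the patch's actual mapping:

```python
import numpy as np

def rgb_to_synth_params(frame,
                        freq_range=(110.0, 880.0),
                        cutoff_range=(200.0, 4000.0)):
    """Map mean RGB levels of a frame (H x W x 3, values in 0..1) to synthesis
    parameters, in the spirit of the Pd patch. Assumed mapping: red ->
    oscillator frequency, green -> filter cutoff, blue -> amplitude."""
    r, g, b = frame.reshape(-1, 3).mean(axis=0)

    def lerp(lo_hi, t):
        lo, hi = lo_hi
        return lo + t * (hi - lo)

    return {
        "freq_hz": lerp(freq_range, r),
        "cutoff_hz": lerp(cutoff_range, g),
        "amplitude": float(b),
    }

# A frame that is mid-red, quarter-green, and full blue everywhere.
frame = np.zeros((4, 4, 3))
frame[..., 0] = 0.5
frame[..., 1] = 0.25
frame[..., 2] = 1.0
params = rgb_to_synth_params(frame)
print(params)
```

In the real pipeline this runs once per frame, so the soundtrack's parameters evolve with the picture.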