Live multi-camera video broadcasting requires a team of professionals with diverse expertise. Robotic systems can automate common, repetitive tracking shots, but predefined camera shots cannot be adjusted quickly when unpredictable events require it. We introduce a modular automated robotic camera control and video switching system based on fundamental cinematographic rules. The actors’ positions are provided by a markerless tracking system, and the sound levels of the actors’ lavalier microphones are used to analyze the current scene. An expert system determines appropriate camera angles and decides when to switch from one camera to another. A test production was conducted to observe the prototype in a live broadcast scenario and served as a video demonstration for an evaluation.
- Published in the Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video 2016 (sponsored by Samsung), as one of 44 contributions selected from 168 submissions (overall acceptance rate: 26%).
- A German version was also published in the magazine of the Fernseh- und Kinotechnische Gesellschaft e.V.: "Halbautomatische Steuerung von Kamera und Bildmischer bei Live-Übertragungen" (Semi-automatic control of camera and video switcher in live broadcasts)
- Building the expert system with state machines (repurposing Unity's Mecanim animation state machines for this task)
- Coding the interfaces to the video switcher and camera control modules (networking via the OSC protocol)
- Implementing the audio recognition via lavalier microphones (using Pure Data)
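The expert system's core idea, shot selection driven by state transitions, can be sketched as a small table-driven state machine. This is a hypothetical Python illustration, not the Unity/Mecanim implementation; all state and event names are invented for the example.

```python
class ShotStateMachine:
    """Selects the active camera shot based on scene events (illustrative sketch)."""

    # Transition table: (current_state, event) -> next_state
    TRANSITIONS = {
        ("wide", "actor_a_speaks"): "closeup_a",
        ("wide", "actor_b_speaks"): "closeup_b",
        ("closeup_a", "actor_b_speaks"): "closeup_b",
        ("closeup_b", "actor_a_speaks"): "closeup_a",
        ("closeup_a", "silence"): "wide",
        ("closeup_b", "silence"): "wide",
    }

    def __init__(self):
        self.state = "wide"  # start on an establishing wide shot

    def on_event(self, event: str) -> str:
        # If no rule matches, stay on the current shot
        # (cinematographic rule: avoid unmotivated cuts).
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state


sm = ShotStateMachine()
sm.on_event("actor_a_speaks")  # -> "closeup_a"
sm.on_event("silence")         # -> "wide"
```

A hierarchical version, as used in the actual system, would nest such machines, e.g. a sub-machine choosing between framings while a parent machine governs which actor is covered.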
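OSC, the protocol used for the switcher and camera interfaces, encodes each message as a padded address string, a type-tag string, and big-endian arguments. A minimal stdlib-only encoder can illustrate the wire format; the address `/camera/1/pan` and the UDP endpoint are made-up examples, not the system's actual command set.

```python
import struct


def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)


def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC message with float32 arguments."""
    msg = osc_pad(address.encode())
    # Type-tag string: a comma followed by one 'f' per float argument.
    msg += osc_pad(("," + "f" * len(args)).encode())
    for a in args:
        msg += struct.pack(">f", a)  # big-endian 32-bit float
    return msg


# Hypothetical pan command for camera 1 (normalized speed 0.25).
packet = osc_message("/camera/1/pan", 0.25)
# The packet would then be sent over UDP, e.g.:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("10.0.0.5", 9000))
```

In practice a library such as python-osc handles this encoding; the sketch only shows why OSC is convenient for this kind of device control: messages are tiny, self-describing, and trivially routed by address pattern.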
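The lavalier-based scene analysis boils down to comparing microphone levels against a threshold. The production system did this in Pure Data; the following Python sketch shows the idea with an RMS envelope, where the threshold value and function names are illustrative assumptions.

```python
import math


def rms(frame):
    # Root-mean-square level of one audio frame (samples in [-1.0, 1.0]).
    return math.sqrt(sum(s * s for s in frame) / len(frame))


def active_speaker(frames_per_mic, threshold=0.05):
    """Return the index of the loudest microphone above the threshold,
    or None if nobody is speaking. The threshold is an illustrative value;
    a real setup would calibrate it per microphone."""
    levels = [rms(f) for f in frames_per_mic]
    loudest = max(range(len(levels)), key=levels.__getitem__)
    return loudest if levels[loudest] >= threshold else None


# Two lavalier mics: mic 0 is quiet, mic 1 carries speech.
active_speaker([[0.01, -0.01], [0.3, -0.2]])  # -> 1
```

The expert system can feed the result directly into its event stream ("actor B speaks", "silence"), closing the loop between audio analysis and shot selection.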
Host explaining camera modules
Test production with two persons
Hierarchical state machine of the system
Pan camera module for one person