Abstract

Purpose – The purpose of this study is to present a depth-information-based solution for automatic camera control that follows the presenter's changing position. Talks, presentations and lectures are often captured on video to give a broad audience the possibility to (re-)access the content. As presenters often move around during a talk, the recording cameras must be steered accordingly.

Design/methodology/approach – We use depth information from the Kinect to implement a prototypical application that automatically steers multiple cameras for recording a talk.

Findings – We present our experiences with the system during actual lectures at a university. We found that the Kinect can robustly track a presenter during a talk. Nevertheless, our prototypical solution reveals potential for improvement, which we discuss in the future work section.

Originality/value – Tracking the presenter is based on a skeleton model extracted from depth information rather than on two-dimensional (2D) motion- or brightness-based image-processing techniques. The solution uses a scalable networking architecture based on publish/subscribe messaging to control multiple video cameras.
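The publish/subscribe architecture for steering multiple cameras can be sketched as follows. This is a minimal in-process illustration, not the paper's actual implementation: the topic name, the message fields, and the pan-angle computation are assumptions made for the example, and a real deployment would replace the in-process bus with a networked message broker fed by Kinect skeleton data.

```python
import math
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, DefaultDict, List


@dataclass
class PresenterPosition:
    """Skeleton-derived presenter position (hypothetical message format), in metres."""
    x: float  # lateral offset from the sensor's optical axis
    z: float  # distance from the sensor


class MessageBus:
    """Minimal in-process publish/subscribe bus (stand-in for a networked broker)."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message) -> None:
        # Deliver the message to every handler subscribed to this topic.
        for handler in self._subscribers[topic]:
            handler(message)


class CameraController:
    """Subscribes to position updates and converts each into a pan angle."""

    def __init__(self, name: str, bus: MessageBus) -> None:
        self.name = name
        self.pan_deg = 0.0
        bus.subscribe("presenter/position", self.on_position)

    def on_position(self, pos: PresenterPosition) -> None:
        # Pan toward the presenter: angle of (x, z) relative to straight ahead.
        self.pan_deg = math.degrees(math.atan2(pos.x, pos.z))


bus = MessageBus()
cameras = [CameraController("cam-left", bus), CameraController("cam-right", bus)]

# The tracker (fed by Kinect skeleton data in the real system) publishes one
# update; every subscribed camera controller reacts to the same message.
bus.publish("presenter/position", PresenterPosition(x=1.0, z=2.0))
```

Because each camera controller is just another subscriber on the topic, adding a further camera requires no change to the tracker, which is the scalability property the publish/subscribe design provides.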
