Abstract

Viewpoint selection for capturing human motion is an important task in autonomous aerial videography, animation, and virtual 3D environments. Existing methods rely on heuristics for selecting the “best” viewpoint, which requires manual effort to summarize viewpoint selection rules and integrate them into a visual servoing system that controls the camera. In this work, we propose an integrated aerial filming system for autonomously capturing cinematic shots of action scenes from a set of demonstrations given for imitation. Our model, built on the deep deterministic policy gradient (DDPG), takes a sequence of the subject’s skeleton and the camera pose as input and outputs the camera motion that yields an optimal viewpoint relative to the subject. In addition, we design a spatial attention network that selectively focuses on the discriminative joints of the skeleton within each frame. Given demonstrations of human motions, our framework learns to predict the next best viewpoint by imitating how the demonstrations view the motion of the subject. Extensive experimental results in simulated and real outdoor environments demonstrate that our method successfully mimics the demonstrated viewpoint selection strategy and captures more accurate viewpoints than state-of-the-art autonomous cinematography methods.
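
To make the described architecture concrete, below is a minimal, illustrative PyTorch sketch of an actor network of the kind the abstract outlines: per-frame spatial attention over skeleton joints, a temporal encoder over the frame features, and a policy head that fuses the motion context with the current camera pose to produce a camera motion command. The joint count, feature sizes, and layer choices are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionViewpointActor(nn.Module):
    """Illustrative actor: skeleton sequence + camera pose -> camera motion.
    All dimensions below are hypothetical placeholders."""
    def __init__(self, num_joints=17, joint_dim=3, pose_dim=6, hidden=128, action_dim=6):
        super().__init__()
        # Spatial attention: score each joint within a frame.
        self.joint_encoder = nn.Linear(joint_dim, hidden)
        self.attn_score = nn.Linear(hidden, 1)
        # Temporal encoder over the attended frame-level features.
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        # Policy head: fuse motion context with the current camera pose.
        self.policy = nn.Sequential(
            nn.Linear(hidden + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh())  # bounded camera motion

    def forward(self, skeleton_seq, camera_pose):
        # skeleton_seq: (B, T, J, joint_dim); camera_pose: (B, pose_dim)
        feats = torch.relu(self.joint_encoder(skeleton_seq))    # (B, T, J, H)
        weights = torch.softmax(self.attn_score(feats), dim=2)  # attention over joints
        frame_feats = (weights * feats).sum(dim=2)              # (B, T, H)
        _, h = self.gru(frame_feats)                            # h: (1, B, H)
        fused = torch.cat([h[-1], camera_pose], dim=-1)
        return self.policy(fused)                               # next camera motion

# Example usage with random inputs
actor = AttentionViewpointActor()
skel = torch.randn(2, 10, 17, 3)   # batch of 2 clips, 10 frames, 17 joints
pose = torch.randn(2, 6)           # camera position + orientation
action = actor(skel, pose)         # (2, 6) camera motion command
```

In a DDPG-style setup such an actor would be trained jointly with a critic, with the imitation signal from the demonstrations shaping the reward; those components are omitted here for brevity.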
