Abstract

In recent years, camera-equipped Unmanned Aerial Vehicles (UAVs) have revolutionized aerial cinematography, allowing easy acquisition of impressive footage. In this context, autonomous functionalities based on machine learning and computer vision modules are gaining ground. During live coverage of outdoor events, an autonomous UAV may visually track and follow a specific target of interest while maintaining a desired shot type, achieved mainly by selecting an appropriate focal length and UAV/camera trajectory relative to the target. However, the selected UAV/camera trajectory and the object tracker's requirements (which impose limits on the maximum allowable focal length) affect the range of feasible shot types, thus constraining cinematography planning. This paper therefore explores the interplay between cinematography and computer vision in autonomous UAV filming. UAV target-tracking trajectories are formalized and geometrically modeled, so that the maximum allowable focal length can be computed analytically for each scenario, avoiding 2D visual tracker failure. Based on this constraint, formulas for estimating the focal length needed to achieve the desired shot type in each situation are derived, so as to determine shot feasibility. Such rules can be embedded into practical intelligent UAV shooting systems, enhancing their robustness by facilitating on-the-fly adjustment of the cinematography plan.
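To make the feasibility logic concrete, the sketch below illustrates the kind of check the abstract describes, under a simple pinhole-camera model: the 2D tracker is assumed to tolerate only a bounded inter-frame pixel displacement of the target, which caps the usable focal length, while each shot type demands a minimum focal length at a given target distance. All function names, parameters, and numeric values are illustrative assumptions, not the paper's actual derivation.

def focal_for_shot(target_height_m, distance_m, frame_fraction,
                   sensor_height_mm, image_height_px):
    # Pinhole model: image height on sensor (mm) = f * H / d.
    # Solve for the focal length f (mm) that makes the target span
    # `frame_fraction` of the frame height.
    pixel_pitch_mm = sensor_height_mm / image_height_px
    target_span_px = frame_fraction * image_height_px
    return target_span_px * pixel_pitch_mm * distance_m / target_height_m

def max_focal_for_tracker(max_disp_px, rel_speed_mps, distance_m, fps,
                          sensor_height_mm, image_height_px):
    # Assumed tracker constraint: at most `max_disp_px` pixels of target
    # displacement between consecutive frames; longer focal lengths magnify
    # apparent motion and risk tracker failure.
    pixel_pitch_mm = sensor_height_mm / image_height_px
    motion_per_frame_m = rel_speed_mps / fps
    # displacement_px = f * motion / (d * pixel_pitch)  =>  solve for f
    return max_disp_px * pixel_pitch_mm * distance_m / motion_per_frame_m

# Hypothetical scenario: 1.8 m tall target, 30 m away, medium shot
# (target fills half the frame height), 4K sensor, 30 fps, 5 m/s
# relative lateral speed.
f_shot = focal_for_shot(1.8, 30.0, 0.5, 8.8, 2160)
f_max = max_focal_for_tracker(20, 5.0, 30.0, 30, 8.8, 2160)
print(f"required f = {f_shot:.1f} mm, tracker limit = {f_max:.1f} mm, "
      f"shot feasible: {f_shot <= f_max}")

In this illustrative scenario the desired medium shot requires a focal length well above the tracker-imposed limit, so the cinematography plan would have to be adjusted on the fly, e.g., by reducing the target distance or selecting a wider shot type.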
