Abstract

Motion in videos is typically a mixture of local dynamic object motions and global camera motion. The two are inconsistent in some cases and may even interfere with each other, causing difficulties for various downstream applications, such as video stabilization, which requires the global motion, and action recognition, which consumes the local motions. It is therefore crucial to estimate them separately. Existing methods separate the two motions from a mixed motion field such as optical flow; however, the quality of the mixed motion then sets an upper bound on their performance. In this work, we propose a framework, GLOCAL, that directly estimates global and local motions simultaneously from adjacent frames in a self-supervised manner. GLOCAL consists of a Global Motion Estimation (GME) module and a Local Motion Estimation (LME) module. The GME module comprises a mixed motion estimation backbone, an implicit bottleneck structure for feature dimension reduction, and an explicit bottleneck that recovers the global motion from global motion bases with a foreground mask, trained under the guidance of the proposed global reconstruction loss. An attention U-Net is adopted for the LME module, which produces local motions while excluding the motion of irrelevant regions, guided by the proposed local reconstruction loss. Our method achieves better performance than existing methods on the homography estimation dataset DHE and the action recognition datasets NCAA and UCF-101.
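
The following is a minimal PyTorch-style sketch of the two-branch design the abstract describes, included for illustration only. The module names, the number of global motion bases, the layer sizes, and the use of random placeholder bases are assumptions on our part, not the authors' implementation; the paper's backbone, bases, masks, and reconstruction losses are more elaborate than shown here.

```python
# Illustrative sketch only (not the authors' code): two branches that take a pair of
# adjacent frames and separately estimate a global motion field and a local motion field.
import torch
import torch.nn as nn

class GlobalMotionEstimator(nn.Module):
    """Mixed-motion backbone -> implicit bottleneck -> coefficients over global motion bases."""
    def __init__(self, num_bases: int = 8, feat_dim: int = 64):
        super().__init__()
        # Placeholder backbone; in the paper this is a flow-style mixed motion estimator.
        self.backbone = nn.Sequential(
            nn.Conv2d(6, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Implicit bottleneck: reduce features to a small coefficient vector.
        self.bottleneck = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, num_bases)
        )
        # Explicit bottleneck: fixed global motion bases (random placeholders here;
        # the paper defines them as structured global flow fields).
        self.register_buffer("bases", torch.randn(num_bases, 2, 64, 64))

    def forward(self, frame_pair: torch.Tensor) -> torch.Tensor:
        coeff = self.bottleneck(self.backbone(frame_pair))        # (B, num_bases)
        # Global motion field = linear combination of the bases.
        return torch.einsum("bk,kchw->bchw", coeff, self.bases)   # (B, 2, 64, 64)

class LocalMotionEstimator(nn.Module):
    """Stand-in for the attention U-Net: predicts a local flow field plus an attention mask."""
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 3, 3, padding=1),  # 2 flow channels + 1 attention channel
        )

    def forward(self, frame_pair: torch.Tensor):
        out = self.net(frame_pair)
        local_flow, attn = out[:, :2], torch.sigmoid(out[:, 2:])
        return local_flow * attn, attn             # suppress motion in irrelevant regions

# Usage: two adjacent RGB frames concatenated along the channel axis feed both branches.
frames = torch.randn(1, 6, 64, 64)
global_flow = GlobalMotionEstimator()(frames)
local_flow, mask = LocalMotionEstimator()(frames)
print(global_flow.shape, local_flow.shape, mask.shape)
```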
