Abstract

Distinguishing dynamic foreground objects from a mostly static background is a fundamental problem in many computer vision and computer graphics tasks. This paper presents a novel online video background identification method assisted by an inertial measurement unit (IMU). Based on the fact that the background motion in a video essentially reflects the 3D camera motion, we leverage IMU data to achieve robust camera motion estimation for identifying background feature points while investigating only a few historical frames. We observe that the displacement of the 2D projection of a scene point caused by camera rotation is depth-invariant, and that rotation estimation from IMU data can be quite accurate. We therefore analyze 2D feature points by decomposing their 2D motion into two components: a rotation projection and a translation projection. After establishing the 3D camera rotation, our method generates the depth-dependent 2D feature-point movement induced by the 3D camera translation. Then, by examining the disparity between the inter-frame offset of each feature point and the projection of the estimated 3D camera motion, we identify the background feature points. In our experiments, the online method runs at 30 FPS with only one frame of latency and outperforms state-of-the-art background identification and other relevant methods. Our method directly yields improved camera motion estimation, which benefits many applications such as online video stabilization, SLAM, and image stitching.
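The core decomposition described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the rotation-induced displacement is removed with a rotation-only homography K R K⁻¹ (depth-invariant, as the abstract notes), and the remaining translation-induced residual is classified with a simple median-based consistency test. The function names, the 2.5-sigma threshold, and the robust test itself are assumptions made for this sketch.

```python
import numpy as np

def rotation_compensated_residuals(pts_prev, pts_curr, K, R):
    """Remove the depth-invariant displacement caused by the inter-frame
    camera rotation R (e.g. estimated from IMU data), leaving the
    depth-dependent, translation-induced 2D flow as a residual.

    pts_prev, pts_curr: (N, 2) pixel coordinates of tracked features.
    K: (3, 3) camera intrinsics.  R: (3, 3) relative rotation.
    """
    H = K @ R @ np.linalg.inv(K)            # rotation-only homography
    p_h = np.hstack([pts_prev, np.ones((pts_prev.shape[0], 1))])
    proj = (H @ p_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]       # predicted positions after rotation
    return pts_curr - proj                  # translation-induced residual flow

def classify_background(residuals, thresh_scale=2.5):
    """Label features whose translation-induced flow agrees with the
    dominant (background) motion.  A median-based robust test stands in
    for the paper's full disparity check (an assumption of this sketch)."""
    med = np.median(residuals, axis=0)
    dev = np.linalg.norm(residuals - med, axis=1)
    scale = np.median(dev) + 1e-9           # robust spread estimate
    return dev < thresh_scale * scale       # True = background feature
```

As a sanity check, features that move exactly as the rotation-only homography predicts produce near-zero residuals and are labeled background, while a feature with an extra independent displacement is rejected as foreground.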
