Abstract

This paper considers stabilizing traffic videos, which are recorded by cameras mounted on moving vehicles. Compared with videos captured by hand-held cameras, traffic videos are more difficult to stabilize due to dynamic scenes, higher-frequency camera jitter, more moving foreground objects and more severe parallax. Conventional video stabilization methods usually estimate the camera jitter from background feature trajectories, which are mainly determined by the camera motion, and then stabilize videos with the estimated jitter. These methods have to correctly distinguish background feature trajectories from foreground ones, which is not trivial, and may suffer performance degradation when large foreground objects exist and not enough background feature trajectories can be obtained. To resolve these issues, this paper proposes a novel stabilization method, under which background and foreground feature trajectories are no longer distinguished but instead work together to yield stabilized trajectories. More specifically, the movement of every feature trajectory is modeled as the summation of the camera motion and the object motion. By solving an optimization problem, we can remove the high-frequency components of the camera motion, i.e., the camera jitter, and stabilize videos. Parallax is also treated as object motion and contributes feature trajectories to video stabilization. As our method makes use of both foreground and background feature trajectories, it can outperform conventional stabilization methods that use only background feature trajectories, especially when there are large foreground objects and the number of extracted background feature trajectories is small. Furthermore, several refinements are proposed to speed up our method and enhance its robustness. Experiments confirm the superior performance of the proposed method.
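The core idea of decomposing trajectory motion into camera motion plus object motion, and then removing only the high-frequency component of the camera motion, can be illustrated with a minimal sketch. This is not the paper's exact formulation: here the shared camera path is crudely estimated as the mean of all trajectories, and the smoothing is an l2-regularized least squares with a second-difference penalty (the weight `lam` and the helper `smooth_camera_path` are illustrative assumptions).

```python
import numpy as np

def smooth_camera_path(trajectories, lam=50.0):
    """Illustrative sketch: model each trajectory t_i(f) = c(f) + o_i(f),
    estimate the shared camera path c, smooth it, and subtract the
    high-frequency residual (the jitter) from every trajectory.

    trajectories: array of shape (num_tracks, num_frames), one coordinate
    per track (x or y position over time).
    """
    # Crude shared-motion estimate: the mean over all trajectories.
    c = trajectories.mean(axis=0)
    n = c.size
    # Second-difference operator D of shape (n-2, n): penalizes curvature.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Minimize ||c_smooth - c||^2 + lam * ||D c_smooth||^2, whose normal
    # equations are (I + lam * D^T D) c_smooth = c.
    A = np.eye(n) + lam * (D.T @ D)
    c_smooth = np.linalg.solve(A, c)
    jitter = c - c_smooth  # high-frequency camera motion
    # Removing the jitter stabilizes all trajectories, background and
    # foreground alike, without classifying them.
    return trajectories - jitter
```

Note that because the jitter is estimated jointly from all trajectories, foreground tracks and parallax-affected tracks contribute to the estimate instead of needing to be filtered out first, which mirrors the motivation of the proposed method.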
