Video stabilization aims to eliminate camera jitter and improve the visual experience of shaky videos. Existing video stabilization methods often ignore the active movement of foreground objects and of the camera, which can lead to distortion and over-smoothing. To resolve these issues, this paper proposes a novel video stabilization method based on motion decomposition. Since the inter-frame movement of foreground objects differs from that of the background, we separate foreground feature points from background feature points by modifying the classic density-based spatial clustering of applications with noise (DBSCAN) algorithm. The movement of background feature points is consistent with that of the camera and can be decomposed into camera jitter and the active movement of the camera; the movement of foreground feature points can be decomposed into the movement of the camera and the active movement of the foreground objects. Based on this motion decomposition, we design first-order and second-order trajectory smoothing constraints to eliminate the high-frequency and low-frequency components of camera jitter. To reduce content distortion, shape-preserving and regularization constraints are imposed when generating stabilized views of all feature points. Experimental results demonstrate the effectiveness and robustness of the proposed method on a variety of challenging videos.
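The first-order and second-order trajectory smoothing constraints mentioned above can be illustrated as a least-squares problem: keep the smoothed path close to the estimated camera path while penalizing its first differences (suppressing high-frequency jitter) and its second differences (suppressing low-frequency wobble). The sketch below is a minimal 1-D illustration of this general idea, not the paper's implementation; the function name, the weights `lam1`/`lam2`, and the synthetic trajectory are all illustrative assumptions.

```python
import numpy as np

def smooth_trajectory(c, lam1=50.0, lam2=500.0):
    """Smooth a 1-D camera trajectory c (length n) by solving

        min_p ||p - c||^2 + lam1 * ||D1 p||^2 + lam2 * ||D2 p||^2

    where D1 and D2 are the first- and second-order difference
    operators. The normal equations give the linear system
    (I + lam1*D1'D1 + lam2*D2'D2) p = c.
    (lam1, lam2 are illustrative weights, not values from the paper.)
    """
    n = len(c)
    D1 = np.diff(np.eye(n), 1, axis=0)  # (n-1, n) first-difference operator
    D2 = np.diff(np.eye(n), 2, axis=0)  # (n-2, n) second-difference operator
    A = np.eye(n) + lam1 * (D1.T @ D1) + lam2 * (D2.T @ D2)
    return np.linalg.solve(A, np.asarray(c, dtype=float))

# Illustrative usage: a linear camera pan corrupted by jitter.
rng = np.random.default_rng(0)
jittery = np.linspace(0.0, 10.0, 100) + 0.5 * rng.standard_normal(100)
smoothed = smooth_trajectory(jittery)
```

Because the difference operators annihilate constant vectors, the system matrix preserves the trajectory's mean, so the smoothed path stays anchored to the original camera motion while its frame-to-frame variation shrinks.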