Abstract

In many digital video applications, video sequences suffer from jerky movements between successive frames. In this paper, an integrated general-purpose stabilization method is proposed, which extracts information from successive frames and removes the translational and rotational motions that cause undesirable effects. The proposed scheme starts by computing the optical flow between consecutive video frames using the Horn-Schunck algorithm; an affine motion model is then fitted to the resulting flow field to estimate object or camera motions. The estimated motion vectors are then used by a model-fitting filter to stabilize and smooth the video sequences. Experimental results demonstrate that the proposed scheme is efficient due to its simplicity and provides good visual quality in terms of the global transformation fidelity measured by the peak signal-to-noise ratio.
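The core of the pipeline described above is fitting an affine motion model to an optical-flow field. As a rough sketch (not the paper's exact formulation), a 6-parameter affine model u = a1 + a2·x + a3·y, v = a4 + a5·x + a6·y can be fitted to flow vectors by linear least squares; the function name and parameterization here are illustrative assumptions:

```python
import numpy as np

def fit_affine_model(xs, ys, us, vs):
    """Least-squares fit of a 6-parameter affine motion model
    u = a1 + a2*x + a3*y,  v = a4 + a5*x + a6*y
    to an optical-flow field sampled at points (xs, ys).
    Hypothetical helper, not the paper's exact implementation."""
    # Design matrix shared by the horizontal and vertical components
    A = np.column_stack([np.ones_like(xs), xs, ys])
    pu, *_ = np.linalg.lstsq(A, us, rcond=None)  # [a1, a2, a3]
    pv, *_ = np.linalg.lstsq(A, vs, rcond=None)  # [a4, a5, a6]
    return np.concatenate([pu, pv])
```

With a dense flow field from Horn-Schunck, every pixel contributes one row, and the least-squares solution averages out local, non-global motion.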

Highlights

  • Video captured by cameras often suffers from unwanted jittering motions

  • Most video stabilization algorithms presented in the recent literature try to remove the image motions by either totally or partially compensating for all motions caused by camera rotations or vibrations [1,2,3,4,5,6,7,8,9]; the resultant background remains motionless

  • Chang et al. [5] used the optical flow between consecutive frames, based on a modification of the method in [6], to estimate the camera motions by fitting a simplified affine motion model


Summary

Introduction

Video captured by cameras often suffers from unwanted jittering motions. In general, this problem is dealt with by compensating for image motions. Chang et al. [5] used the optical flow between consecutive frames, based on a modification of the method in [6], to estimate the camera motions by fitting a simplified affine motion model. Rather than developing novel and complicated individual algorithms, this paper aims to simplify the stabilization process by integrating well-researched techniques, such as motion estimation, motion modeling, and motion compensation, into a single framework that is modular in nature and reduces the complexity of hardware implementation. The scheme aims to provide better performance in terms of the global transformation fidelity (a typical measure of stabilization performance) than other existing methods.
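Within such a framework, the motion-parameter smoothing stage separates intentional camera motion from high-frequency jitter. A minimal sketch, assuming a simple moving-average filter over the per-frame global motion parameters (a stand-in for the paper's model-fitting filter, with an odd window size):

```python
import numpy as np

def smooth_motion_params(params, window=5):
    """Moving-average smoothing of per-frame global motion parameters.
    params: (n_frames, n_params) array; window: odd window length.
    Illustrative stand-in for the paper's model-fitting filter."""
    params = np.asarray(params, dtype=float)
    kernel = np.ones(window) / window
    out = np.empty_like(params)
    for j in range(params.shape[1]):
        # Replicate edge values so the output length matches the input
        padded = np.pad(params[:, j], window // 2, mode="edge")
        out[:, j] = np.convolve(padded, kernel, mode="valid")
    return out
```

The difference between the raw and smoothed parameters is the jitter component, which the compensation stage removes from each frame while the smoothed trajectory, i.e. the intentional motion, is preserved.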

Overview of the Video Stabilization Scheme
Optical Flow Estimation Technique
Motion Model Fitting
Motion Parameters Smoothing
Global Motion Compensation
Simulation Results
Conclusions