Abstract

In this paper, we propose a new accuracy measurement model for video stabilization based on background motion, which can accurately measure the performance of a video stabilization algorithm. Undesired residual motion in a video can be measured quantitatively as the per-pixel displacement of the background between two consecutive background frames. First, foregrounds are removed from the stabilized video; we then compute a two-dimensional flow vector for each pixel between two consecutive background frames. Next, we calculate the Euclidean distance spanned by each pixel's flow vector, which we regard as that pixel's displacement. The displacements within each frame are averaged to obtain the frame's mean displacement error, and finally the mean displacement errors are averaged over all frames. Our experimental results show the effectiveness of the proposed method.
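The pipeline described above lends itself to a short sketch. The following is a minimal illustration of how such a mean displacement error might be computed, assuming OpenCV's dense Farnebäck optical flow and grayscale background frames with foregrounds already removed; the function name `mean_displacement_error` and all flow parameters are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def mean_displacement_error(background_frames):
    """Hypothetical sketch: average per-pixel background displacement.

    background_frames: list of grayscale frames with foregrounds removed.
    Returns the average of the per-frame mean displacement errors (MDE).
    """
    per_frame_mde = []
    prev = background_frames[0]
    for curr in background_frames[1:]:
        # Dense optical flow: one 2-D vector (dx, dy) per pixel.
        # Positional args: pyr_scale, levels, winsize, iterations,
        # poly_n, poly_sigma, flags (illustrative values).
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Euclidean length of each flow vector = that pixel's displacement.
        displacement = np.linalg.norm(flow, axis=2)
        # Mean over all pixels gives this frame pair's MDE.
        per_frame_mde.append(displacement.mean())
        prev = curr
    # Average MDE over the whole video: lower means less residual motion.
    return float(np.mean(per_frame_mde))
```

On a perfectly stabilized static background the flow field is zero everywhere, so the average mean displacement error is zero; larger values indicate more residual shake.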

Highlights

  • The demand for video stabilization (VS) is increasing over time [1]

  • Though subjective measurement has some use in psycho-visual experiments, this evaluation strategy contributes little to objective, scientific measurement

  • State-of-the-art video stabilization algorithms such as Subspace [13] and L1 camera path optimization for rolling shutter removal [14] are evaluated on the basis of user experience


Summary

Introduction

The demand for video stabilization (VS) is increasing over time [1], and many video processing software packages and camcorders now include a video stabilization facility [2]. The human eye is very sensitive to the motion frequency, amplitude, spatial image frequency, color, intensity, and context of a video [9,10,11,12]. A large number of video stabilization algorithms rely on subjective measurements to validate the VS algorithm. We instead check the stability of a video based on its background motion. To test the accuracy of our method, we compare ground truth values with values measured on synthetically shaken videos. The ground truth and measured values are tabulated in the experiments for the translation, rotation, TR, RT, and SRT transformations defined in the corresponding equations. The error increases for the composite transformations compared with a single translation or a single rotation. Our proposed method is robust in detecting background motion, which quantifies the accuracy of VS methods.
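Since accuracy is tested by comparing ground truth against measurements on synthetically shaken videos, a hedged sketch of injecting such shake is shown below; the helper `add_synthetic_shake`, its parameter ranges, and the rotation-then-translation composition are illustrative assumptions, not the paper's exact protocol.

```python
import cv2
import numpy as np

def add_synthetic_shake(frame, max_shift=5.0, max_angle=1.0, rng=None):
    """Hypothetical sketch: apply a random rotation + translation
    (an RT-style composite) to one frame.

    max_shift is in pixels, max_angle in degrees (illustrative values).
    The sampled parameters are returned as the ground-truth shake.
    """
    rng = rng or np.random.default_rng()
    h, w = frame.shape[:2]
    angle = rng.uniform(-max_angle, max_angle)
    tx, ty = rng.uniform(-max_shift, max_shift, size=2)
    # Rotate about the frame center, then add the translation offsets.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    M[0, 2] += tx
    M[1, 2] += ty
    shaken = cv2.warpAffine(frame, M, (w, h))
    return shaken, (angle, tx, ty)  # ground-truth shake parameters
```

Running a stabilizer on frames shaken this way and comparing the measured displacements against the known (angle, tx, ty) sequence is one way to reproduce the kind of ground-truth comparison described above.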

