Abstract

The paper proposes a Bayesian algorithm for the reduction of additive video noise in the wavelet domain. Spatial and temporal redundancies that exist in a video sequence in the time domain also persist in the wavelet domain, which allows video motion to be captured there. Based on this fact, a new statistical model is proposed for video sequences. We model not only the subband coefficients in individual frames but also the difference between the wavelet coefficients of two consecutive frames, using the generalized Laplacian distribution. Based on this model, a Bayesian processor is developed that estimates the noise-free wavelet coefficients in the current frame, conditioned on the noisy coefficients in the current frame and the filtered coefficients in the past frame. Rigorous experimental results show that the proposed scheme outperforms several state-of-the-art spatio-temporal filters in the time and wavelet domains in terms of both quantitative performance and visual quality.
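The core idea of Bayesian shrinkage under a generalized Laplacian prior can be sketched numerically. The code below is a minimal illustration, not the paper's actual processor: it estimates a single clean wavelet coefficient from a noisy observation `y = x + n` with Gaussian noise, using a prior of the form `p(x) ∝ exp(-|x/s|^p)`. The parameter values `s`, `p`, and `sigma`, and the use of the posterior mean (MMSE) computed by grid integration, are assumptions for illustration; the paper additionally conditions on the filtered coefficients of the previous frame, which this sketch omits.

```python
import numpy as np

def gen_laplacian_prior(x, s, p):
    # Generalized Laplacian (generalized Gaussian) density, up to a
    # normalization constant: p(x) ∝ exp(-|x/s|^p).
    return np.exp(-np.abs(x / s) ** p)

def mmse_shrink(y, sigma, s=1.0, p=0.7):
    # Posterior-mean (MMSE) estimate of the clean coefficient x from a
    # noisy observation y = x + n, n ~ N(0, sigma^2), computed by
    # numerical integration on a symmetric grid. The values of s and p
    # are illustrative placeholders, not taken from the paper.
    grid = np.linspace(-20.0, 20.0, 4001)
    prior = gen_laplacian_prior(grid, s, p)
    likelihood = np.exp(-0.5 * ((y - grid) / sigma) ** 2)
    posterior = prior * likelihood
    return np.sum(grid * posterior) / np.sum(posterior)
```

Because the heavy-tailed prior concentrates mass near zero, the estimator pulls small noisy coefficients strongly toward zero while leaving large (likely signal-bearing) coefficients mostly intact, which is the qualitative behavior wavelet-domain denoisers rely on.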
