Abstract

This paper proposes a Bayesian algorithm for reducing additive video noise in the wavelet domain. Spatial and temporal redundancies that exist in a video sequence in the time domain also persist in the wavelet domain, which allows video motion to be captured there. Building on this observation, a new statistical model is proposed for video sequences: we model not only the subband coefficients of individual frames but also the differences between wavelet coefficients of consecutive frames, using the generalized Laplacian distribution. Following this model, a Bayesian processor is developed that estimates the noise-free wavelet coefficients in the current frame, conditioned on the noisy coefficients in the current frame and the filtered coefficients in the previous frame. Rigorous experimental results show that the proposed scheme outperforms several state-of-the-art spatio-temporal filters in both the time and wavelet domains, in terms of quantitative performance as well as visual quality.
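The core idea can be illustrated with a minimal sketch. The abstract does not give the paper's exact estimator, so the code below is an assumption-laden toy: a posterior-mean (MMSE) estimator for a single subband's coefficients under a generalized Laplacian prior on the clean coefficients and, optionally, a generalized Laplacian prior on the temporal difference from the previous (already filtered) frame's coefficients. The parameter values (`s`, `p`, `s_t`, `p_t`), the numerical-integration grid, and the synthetic data are all illustrative, not taken from the paper.

```python
import numpy as np
from math import gamma

def ggd_pdf(x, s, p):
    """Generalized Laplacian (generalized Gaussian) density:
    f(x) = exp(-|x/s|^p) / Z, with Z = 2 (s/p) Gamma(1/p)."""
    Z = 2.0 * (s / p) * gamma(1.0 / p)
    return np.exp(-np.abs(x / s) ** p) / Z

def bayes_denoise(y, sigma, s, p, x_prev=None, s_t=1.0, p_t=1.0,
                  grid_half=40.0, n_grid=1601):
    """Posterior-mean estimate of clean coefficients x from noisy
    y = x + n, n ~ N(0, sigma^2), under a generalized Laplacian prior
    on x and, if x_prev is given, on the temporal difference x - x_prev.
    The posterior is integrated numerically on a fixed grid."""
    x = np.linspace(-grid_half, grid_half, n_grid)   # integration grid
    lik = np.exp(-0.5 * ((y[:, None] - x[None, :]) / sigma) ** 2)
    post = lik * ggd_pdf(x, s, p)[None, :]
    if x_prev is not None:
        # Temporal prior: small inter-frame coefficient differences are likely.
        post = post * ggd_pdf(x[None, :] - x_prev[:, None], s_t, p_t)
    return (post * x[None, :]).sum(axis=1) / post.sum(axis=1)

# Synthetic experiment (p = 1 so the data can be drawn as Laplacian samples).
rng = np.random.default_rng(0)
N, sigma = 4000, 1.0
x_prev = rng.laplace(scale=2.0, size=N)        # "previous frame" coefficients
x = x_prev + rng.laplace(scale=0.3, size=N)    # small temporal change
y = x + rng.normal(scale=sigma, size=N)        # additive Gaussian noise

x_spatial = bayes_denoise(y, sigma, s=2.0, p=1.0)
x_temporal = bayes_denoise(y, sigma, s=2.0, p=1.0,
                           x_prev=x_prev, s_t=0.3, p_t=1.0)

mse = lambda a: float(np.mean((a - x) ** 2))
print(mse(y), mse(x_spatial), mse(x_temporal))
```

On this toy data the purely spatial prior already reduces the mean squared error below that of the noisy input, and conditioning on the previous frame's coefficients reduces it much further, which mirrors the abstract's claim that exploiting temporal redundancy in the wavelet domain improves denoising.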
