Abstract

Sparse representation over over-complete dictionaries is an active research topic in computer vision and machine learning. From a probabilistic viewpoint, an over-complete dictionary can be learned with non-parametric Bayesian techniques built on the Beta Process. However, traditional probabilistic dictionary learning methods assume that the noise follows a Gaussian distribution, and can therefore only remove Gaussian noise. To remove outliers and more complex noise, we propose a non-parametric Bayesian dictionary learning method that assumes the noise follows a Laplacian distribution. Because the non-conjugacy of the Laplacian distribution makes the posteriors of the latent variables harder to compute, we replace the L1 (Laplacian) density with a superposition of infinitely many Gaussian distributions, whose mixture weights are controlled by an extra hidden variable. Bayesian inference is then applied to learn all the key parameters of the proposed probabilistic model, which avoids manual parameter setting and fine-tuning. In the experiments, we mainly compare the performance of different algorithms in removing salt-and-pepper noise and mixed noise. The results show that our algorithm achieves PSNRs at least 2-4 dB higher than those of other classic algorithms.
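
The abstract's key trick is the classical Gaussian scale-mixture identity: a Laplace(0, b) variable is a zero-mean Gaussian whose variance is itself exponentially distributed, Laplace(x | 0, b) = ∫ N(x | 0, τ) Exp(τ | 1/(2b²)) dτ, with the per-sample variance τ acting as the extra hidden variable. The following minimal sketch (not the authors' code; the scale value b is an assumed example) checks this identity by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
b = 1.5          # Laplace scale parameter (assumed example value)
n = 1_000_000

# Hierarchical (scale-mixture) sampling:
#   tau ~ Exponential(rate = 1/(2 b^2)), then x | tau ~ N(0, tau).
# NumPy's exponential() takes scale = 1/rate = 2 b^2.
tau = rng.exponential(scale=2 * b**2, size=n)
x_mix = rng.normal(loc=0.0, scale=np.sqrt(tau))

# Direct Laplace sampling for comparison.
x_lap = rng.laplace(loc=0.0, scale=b, size=n)

# The two samples agree in distribution up to Monte Carlo error:
# a Laplace(0, b) variable has variance 2 b^2 and mean absolute value b.
print(np.var(x_mix), np.var(x_lap), 2 * b**2)
print(np.mean(np.abs(x_mix)), np.mean(np.abs(x_lap)), b)
```

Conditioned on τ, the likelihood is Gaussian again, which is what restores conjugacy and lets standard Bayesian inference update the Beta Process dictionary model despite the Laplacian noise assumption.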
