Blind image deconvolution, i.e., estimating both the latent image and the blur kernel from only the observed blurry image, is a severely ill-posed inverse problem. In this paper, we propose a blur kernel estimation method for blind motion deblurring that uses sparse representation and cross-scale self-similarity of image patches as priors to recover the latent sharp image from a single blurry image. The sparse representation prior states that image patches can be well represented as sparse linear combinations of atoms from an appropriate dictionary. Cross-scale self-similarity means that an image patch can be well approximated by similar patches drawn from other scales of the same image. Our method builds on two observations: almost every patch in a natural image has multiple similar patches in down-sampled versions of that image, and these down-sampled patches are sharper than the patches of the blurry image itself. In our method, the dictionary for sparse representation is trained adaptively on the sharper patches sampled from the down-sampled latent image estimate, so that patches of the latent sharp image admit accurate sparse representations. Meanwhile, each patch of the latent image estimate is constrained, through a non-local regularization term, to stay close to the sharper similar patches found in the down-sampled version, which enforces sharp recovery of the latent image. Experimental results on both simulated and real blurry images demonstrate that our method outperforms state-of-the-art blind deblurring methods.
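As a rough illustration of the cross-scale self-similarity step described above, the sketch below searches, for each patch of the current latent image estimate, the most similar patch in a down-sampled copy of that estimate. This is a minimal sketch under assumed settings (8x8 patches, stride 4, a 0.5 down-sampling factor) and is not the paper's implementation; the helper names `extract_patches` and `cross_scale_matches` are hypothetical.

```python
# Illustrative sketch (not the paper's code): for each patch of the current
# latent-image estimate, find its most similar patch in a down-sampled copy.
# The matched, sharper patches are what a non-local term could pull toward.
import numpy as np
from scipy.ndimage import zoom

def extract_patches(img, size=8, stride=4):
    """Collect (position, flattened patch) pairs on a regular grid."""
    H, W = img.shape
    out = []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            out.append(((i, j), img[i:i + size, j:j + size].ravel()))
    return out

def cross_scale_matches(latent_est, scale=0.5, size=8, stride=4):
    """For every patch in the estimate, return the closest (L2) patch from the
    down-sampled image, which tends to be sharper than the blurry observation."""
    small = zoom(latent_est, scale, order=3)            # down-sampled version
    candidates = np.stack([p for _, p in extract_patches(small, size, stride)])
    matches = {}
    for pos, patch in extract_patches(latent_est, size, stride):
        d = np.sum((candidates - patch) ** 2, axis=1)   # squared L2 distances
        matches[pos] = candidates[np.argmin(d)]         # sharper cross-scale match
    return matches
```

In the approach the abstract describes, such cross-scale matches would serve two roles: the sharper down-sampled patches provide training samples for the adaptively learned dictionary, and the matched patches act as targets in the non-local regularization that keeps the latent image estimate close to its sharper counterparts.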