Abstract

The aim of removing camera shake is to estimate a sharp image x from a shaken image y when the blur kernel k is unknown. Recent research on this topic has evolved through two paradigms, MAP_k and MAP_{x,k}. MAP_k solves only for k by marginalizing over the latent image, while MAP_{x,k} recovers both x and k by selecting the mode of the posterior distribution. This paper first systematically analyses the inherent limitations of these two estimators through Bayesian analysis. We explain why it is so difficult for natural image statistics to resolve the previously reported failure of MAP_{x,k}. We then show that the leading MAP_{x,k} methods, which depend on efficient prediction of large step edges, are not robust on natural images because of the diversity of edges. MAP_k, although much more robust to diverse edges, is constrained by two factors: the variation of the prior across different images, and the ratio between image size and kernel size. To overcome these limitations, we introduce an inter-scale prior prediction scheme and a principled mechanism for integrating a sharpening filter into MAP_k. Both qualitative results and extensive quantitative comparisons demonstrate that our algorithm outperforms state-of-the-art methods.
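As a point of reference, the two estimators can be written in the standard Bayesian form used in the blind-deblurring literature; this is a minimal sketch, and the Gaussian noise term below is an illustrative assumption rather than a detail taken from this abstract:

\[
y = k \otimes x + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I)
\]
\[
\text{MAP}_{x,k}: \quad (\hat{x}, \hat{k}) \;=\; \arg\max_{x,k}\; p(x, k \mid y) \;=\; \arg\max_{x,k}\; p(y \mid x, k)\, p(x)\, p(k)
\]
\[
\text{MAP}_{k}: \quad \hat{k} \;=\; \arg\max_{k}\; p(k \mid y), \qquad p(k \mid y) \;=\; \int p(x, k \mid y)\, dx
\]

In words, MAP_{x,k} picks the joint mode of the posterior over the image and the kernel, whereas MAP_k first integrates out the latent image x and then maximizes over the kernel alone.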
