Abstract

Today, we capture and store images at a scale that was never possible before. However, huge numbers of degraded and blurred images are captured unintentionally or by mistake. In this paper, we propose a geometrical hypothesis stating that blurring arises from shifting or scaling the depth of field (DOF). The validity of the hypothesis is verified by an independent method based on depth estimation from a single image. Image depth is modeled from the image's edges to extract amplitude comparison ratios between the generated blurred images and the original sharp/blurred images. Blurred images are generated by stepwise variation of the standard deviation of the Gaussian filter estimated in the improved model. This process acts as a virtual image recording that mimics capturing several instances of the same image. A historical documentation database is used to validate the hypothesis, to separate sharp images from blurred ones, and to distinguish between different blur types. The experimental results show that distinguishing unintentionally blurred images from non-blurred ones by comparing their depth of field is feasible.
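To make the virtual-recording step more concrete, the following is a minimal sketch in Python, assuming a grayscale image held in a NumPy array. The sigma steps, the gradient-magnitude proxy for edge amplitude, and all function names are illustrative assumptions introduced here; the sketch does not reproduce the paper's actual edge-based depth model or classification procedure.

import numpy as np
from scipy.ndimage import gaussian_filter

def virtual_recordings(image, sigmas=(0.5, 1.0, 1.5, 2.0, 2.5)):
    # Simulate "virtual recordings": progressively blurred copies of the
    # input produced by stepping the Gaussian filter's standard deviation.
    img = image.astype(np.float64)
    return [gaussian_filter(img, sigma=s) for s in sigmas]

def edge_amplitude(image):
    # Mean gradient magnitude, used here as a crude stand-in for the
    # edge-amplitude measure described in the abstract.
    gy, gx = np.gradient(image.astype(np.float64))
    return np.hypot(gx, gy).mean()

def amplitude_ratios(image, sigmas=(0.5, 1.0, 1.5, 2.0, 2.5)):
    # Ratios of the input's edge amplitude to that of each blurred copy.
    # A sharp input loses edge amplitude quickly as sigma grows, so its
    # ratios rise steeply; an already-blurred input yields ratios near 1.
    base = edge_amplitude(image)
    blurred = virtual_recordings(image, sigmas)
    return [base / (edge_amplitude(b) + 1e-12) for b in blurred]

A simple decision rule could threshold how steeply these ratios grow with sigma to flag unintentionally blurred images; the sigma steps and the threshold would have to be tuned on a labeled set such as the historical documentation database used in the paper.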
