Abstract

Motion blur is a common artifact in image processing, including in e-health services, and is caused by the motion of the camera or the scene. In the linear motion case, the blur kernel, i.e., the function that models the linear motion blur process, depends on the length and direction of the blur, called the linear motion blur parameters. Estimating these parameters is a vital and sensitive stage in reconstructing a sharp version of a motion-blurred image, i.e., image deblurring. Blur parameter estimation is also useful in e-health services: since medical images may be blurry, the estimated parameters can guide subsequent image enhancement. In this paper, methods are proposed for estimating the linear motion blur parameters from features extracted from a single blurred image. The motion blur direction is estimated using the Radon transform of the spectrum of the blurred image. To estimate the motion blur length, the relation between a blur metric, called NIDCT (Noise-Immune Discrete Cosine Transform-based), and the motion blur length is exploited. Experiments performed in this study show that the NIDCT blur metric and the blur length have a monotonic relation: an increase in blur length leads to an increase in the blurriness value estimated via the NIDCT metric. This relation is applied to estimate the motion blur length. The efficiency of the proposed method is demonstrated through quantitative and qualitative experiments.
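
The direction-estimation step described above (the Radon transform of the blurred image's spectrum) can be sketched roughly as follows, assuming NumPy and scikit-image are available. The function name estimate_blur_direction, the variance-peak heuristic, and the 1° angle grid are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from skimage.transform import radon

def estimate_blur_direction(blurred, angles=np.arange(0.0, 180.0, 1.0)):
    """Estimate the linear motion blur direction (degrees) of a grayscale
    image from the Radon transform of its log-magnitude spectrum."""
    # Centered log-magnitude spectrum; linear motion blur leaves parallel
    # stripes here whose orientation encodes the blur direction.
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
    spectrum = (spectrum - spectrum.min()) / (np.ptp(spectrum) + 1e-12)
    # Project the spectrum at every candidate angle.
    sinogram = radon(spectrum, theta=angles, circle=False)
    # Heuristic (assumption, see above): the projection variance peaks when
    # the projection direction lines up with the spectral stripes; depending
    # on convention, a 90-degree correction may be needed to obtain the
    # motion direction itself.
    return angles[int(np.argmax(sinogram.var(axis=0)))]
```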

Highlights

  • Image blur, in which a pixel records light from multiple sources, is one of the most common image degradations in e-health services and occurs for various reasons, such as camera or object motion

  • We focus on estimating the linear motion blur parameters, i.e., the motion blur direction θ and the motion blur length L, from a single blurry image

  • An increase in blur length leads to an increase in the blurriness value estimated via the NIDCT blur metric, i.e., the two are monotonically related (a toy illustration follows this list)
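
The paper's NIDCT metric itself is not reproduced here; the sketch below uses a toy DCT-based blurriness score (the fraction of near-zero DCT coefficients, computed with scipy.fft.dctn) as a stand-in to illustrate the kind of monotonic length-versus-blurriness trend the highlight describes. The function dct_blurriness, the threshold eps, and the test image are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import uniform_filter1d
from skimage.data import camera

def dct_blurriness(image, eps=1e-3):
    """Toy DCT-based blurriness score (NOT the paper's NIDCT metric):
    blur suppresses high frequencies, so the fraction of near-zero
    DCT coefficients grows with blur strength."""
    coeffs = np.abs(dctn(image.astype(float), norm='ortho'))
    coeffs /= coeffs.max() + 1e-12
    return float(np.mean(coeffs < eps))

# Horizontal linear motion blur of length L is a 1-D box average along rows;
# the score should increase monotonically with L.
sharp = camera().astype(float) / 255.0
for L in (1, 3, 7, 15, 31):
    blurred = uniform_filter1d(sharp, size=L, axis=1)
    print(f"L = {L:2d}  blurriness = {dct_blurriness(blurred):.4f}")
```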


Summary

Introduction

Image blur, in which a pixel records light from multiple sources, is one of the most common image degradations in e-health services and occurs for various reasons, such as camera or object motion. If the camera does not rotate and only moves in a plane parallel to the scene, the motion blur is shift-invariant. In this case, the blurring process can be modelled as the convolution of the true latent image x with a blur kernel (point spread function (PSF), or blur function) a, plus additive noise n:

b(x, y) = x(x, y) ⊗ a(x, y) + n(x, y),   (1)

where ⊗ denotes the convolution operator and b represents the blurred image. We focus on estimating the linear motion blur parameters, i.e., the motion blur direction θ and the motion blur length L, from a single blurry image. Experimental results show the efficiency of the proposed method in estimating the motion blur parameters.
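
As a concrete illustration of the shift-invariant model in Eq. (1), the sketch below builds a line PSF for a given blur length and direction, convolves it with a latent image, and adds noise, assuming NumPy and SciPy. The helper motion_blur_psf and the example values L = 15 and θ = 30° are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_blur_psf(length, angle_deg):
    """Line point spread function a(x, y) for linear motion of a given
    length (pixels) and direction (degrees), normalized to sum to 1."""
    size = int(length) | 1                      # odd support so the line is centered
    psf = np.zeros((size, size))
    c = size // 2
    t = np.linspace(-(length - 1) / 2.0, (length - 1) / 2.0, 4 * size)
    dx, dy = np.cos(np.deg2rad(angle_deg)), np.sin(np.deg2rad(angle_deg))
    rows = np.clip(np.round(c + t * dy).astype(int), 0, size - 1)
    cols = np.clip(np.round(c + t * dx).astype(int), 0, size - 1)
    psf[rows, cols] = 1.0
    return psf / psf.sum()

# b = x ⊗ a + n, as in Eq. (1): blur a latent image x with the PSF a, add noise n.
rng = np.random.default_rng(0)
x = rng.random((128, 128))                      # stand-in latent image
a = motion_blur_psf(length=15, angle_deg=30.0)
b = fftconvolve(x, a, mode='same') + rng.normal(0.0, 0.01, x.shape)
```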

Related Works
Proposed Model
Estimation of Motion Blur Length
Experimental Results
Conclusion and Discussion