Abstract

This paper presents a new scale-space-based method for extracting edges in gray-level images. The method builds on a novel representation of gray-level shape called the scale-spectrum space. The scale-space representation describes an image at different scales; to recover the original image edges, an edge detector is applied to each simplified image at the corresponding scale. At best, some compromise among the edges obtained at different scale levels may be sought. To overcome this problem, we present a stability criterion for combining edges detected at different scales. The usual problems in edge detection, such as displacement, redundancy, and error, are analyzed and solved using a realistic estimate of the displacement of points across scale space. The proposed approach suppresses finer details without weakening or dislocating larger-scale edges (the usual problems of edge detection based on isotropic diffusion), and it improves on anisotropic diffusion procedures because no tuning function is required. The methodology is biologically inspired by the behavior of visual cortex neurons as well as retinal cells.
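The abstract's pipeline (multi-scale smoothing, per-scale edge detection, stability-based combination) can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the Gaussian scale space, the Sobel-magnitude edge detector, the voting threshold, and the dilation-based displacement tolerance are all assumptions standing in for the scale-spectrum space, the paper's edge detector, and its stability criterion.

```python
import numpy as np
from scipy import ndimage  # assumes SciPy is available


def stable_edges(image, sigmas=(1.0, 2.0, 4.0), thresh=0.1, tol=1, min_votes=None):
    """Keep only edges that persist across a Gaussian scale space.

    A pixel is retained as an edge only if an edge response above
    `thresh` appears at (or within `tol` pixels of) that location in at
    least `min_votes` of the scales -- a simple stability criterion.
    All parameter values here are illustrative, not from the paper.
    """
    if min_votes is None:
        min_votes = len(sigmas)  # require the edge at every scale
    votes = np.zeros(image.shape, dtype=int)
    for sigma in sigmas:
        # One level of the scale space: a progressively simplified image.
        smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
        # Edge detector applied at this scale (Sobel gradient magnitude).
        gx = ndimage.sobel(smoothed, axis=1)
        gy = ndimage.sobel(smoothed, axis=0)
        edges = np.hypot(gx, gy) > thresh
        # Dilate to tolerate small edge displacement across scales.
        edges = ndimage.binary_dilation(edges, iterations=tol)
        votes += edges
    return votes >= min_votes
```

For example, on a synthetic vertical step edge (`image[:, 16:] = 1`), the returned mask marks pixels near column 16 at every scale while rejecting responses that appear only at fine scales, which is the behavior the abstract attributes to the stability criterion.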
