Abstract

One of the fundamental tasks in image processing is edge detection. High-level image processing, such as object recognition, segmentation, image coding, and robot vision, depends on the accuracy of edge detection, since edges contain essential information about an image. Most edge detection techniques are based on finding maxima in the first derivative of the image function or zero-crossings in the second derivative of the image function. This concept is illustrated for a gray-level image in Fig. 4.1 [4]. The figure shows that the first derivative of the gray-level profile is positive at the leading edge of a transition, negative at the trailing edge, and zero in homogeneous areas. The second derivative is positive on the dark side of the edge, negative on the light side, and zero in homogeneous areas. In a monochrome image an edge usually corresponds to object boundaries or changes in physical properties such as illumination or reflectance. This definition is more elaborate in the case of color (multispectral) images, since more detailed edge information is expected from color edge detection. According to psychological research on the human visual system [1], [2], color plays a significant role in the perception of boundaries. Monochrome edge detection may therefore not be sufficient for certain applications, since no edges will be detected in a gray-level image when neighboring objects have different hues but equal intensities [3].

Keywords: Color Image, Edge Detection, Impulse Noise, Difference Vector, Impulsive Noise
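
As a minimal sketch of the derivative-based idea, assuming a synthetic one-dimensional dark-to-light gray-level profile (the sigmoid shape, its position, and its width are illustrative choices, not taken from the chapter), the first derivative is maximal at the transition and the second derivative crosses zero there:

    # Illustrative 1-D sketch (assumed profile, not from the chapter):
    # a smooth dark-to-light transition and its first/second derivatives.
    import numpy as np

    x = np.arange(64)
    profile = 1.0 / (1.0 + np.exp(-(x - 32) / 2.0))  # synthetic gray-level edge

    d1 = np.gradient(profile)   # first derivative: peaks at the transition
    d2 = np.gradient(d1)        # second derivative: changes sign at the transition

    edge_at_maximum = int(np.argmax(np.abs(d1)))              # maximum of |f'|
    zero_crossings = np.where(np.diff(np.sign(d2)) != 0)[0]   # sign changes of f''

    print("first-derivative maximum at x =", edge_at_maximum)
    print("second-derivative zero-crossing(s) just after x =", zero_crossings)

For two-dimensional images the same principle is typically applied through gradient operators (for example, Sobel) or the Laplacian, respectively.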

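The remark that monochrome detection misses boundaries between equal-intensity regions of different hue can be illustrated with a small numerical sketch. The two RGB values and the BT.601 luminance weights below are assumptions chosen for illustration; the point is only that the gray-level difference across such a boundary is nearly zero while the RGB difference vector is long, which is the intuition behind difference-vector approaches to color edge detection.

    # Illustrative sketch (assumed values, not from the chapter):
    # two neighboring pixels with different hues but almost equal intensity.
    import numpy as np

    left = np.array([255.0, 0.0, 0.0])    # saturated red
    right = np.array([0.0, 130.0, 0.0])   # green scaled to match red's luminance

    def gray(rgb):
        # BT.601 luminance approximation: Y = 0.299 R + 0.587 G + 0.114 B
        return float(rgb @ np.array([0.299, 0.587, 0.114]))

    gray_step = abs(gray(left) - gray(right))    # ~0.07: no gray-level edge
    color_step = np.linalg.norm(left - right)    # ~286: long RGB difference vector

    print(f"gray-level difference: {gray_step:.2f}")
    print(f"RGB difference norm:   {color_step:.2f}")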