Abstract
Entropy-based image thresholding has received considerable interest in recent years. Two types of entropy are generally used as thresholding criteria: Shannon's entropy and relative entropy, the latter also known as the Kullback–Leibler information distance. Shannon's entropy measures the uncertainty in an information source, and an optimal threshold is obtained by maximising it; relative entropy measures the information discrepancy between two different sources, and an optimal threshold is obtained by minimising it. Many thresholding methods based on each criterion have been reported in the literature. These two entropy-based thresholding criteria are investigated, and the relationship among entropy and relative entropy thresholding methods is explored. In particular, a survey and comparative analysis is conducted of several widely used methods, including Pun's and Kapur's maximum entropy, Kittler and Illingworth's minimum error, Pal and Pal's entropy, and Chang et al.'s relative entropy thresholding methods. To assess these methods objectively, two measures, uniformity and shape, are used for performance evaluation.
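To make the maximum-entropy criterion concrete, the sketch below selects the grey level that maximises the sum of the Shannon entropies of the two classes a threshold induces, in the spirit of Kapur-style thresholding. It is a minimal illustration, not the authors' implementation: the function name `kapur_threshold`, the 256-bin histogram, and the synthetic bimodal test image are assumptions, and the relative-entropy methods surveyed (e.g. Chang et al.'s formulation) are not reproduced here.

```python
import numpy as np

def kapur_threshold(hist):
    """Maximum-entropy threshold: pick the grey level t that maximises
    the sum of the Shannon entropies of the two classes induced by t.
    `hist` is a 1-D array of grey-level counts (illustrative sketch)."""
    p = hist.astype(float) / hist.sum()   # normalised histogram
    P = np.cumsum(p)                      # cumulative class probability
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p) - 1):
        pb, pf = P[t], 1.0 - P[t]         # background / foreground mass
        if pb <= 0 or pf <= 0:
            continue                      # skip degenerate splits
        b = p[: t + 1] / pb               # within-class distributions
        f = p[t + 1 :] / pf
        hb = -np.sum(b[b > 0] * np.log(b[b > 0]))  # background entropy
        hf = -np.sum(f[f > 0] * np.log(f[f > 0]))  # foreground entropy
        if hb + hf > best_h:
            best_t, best_h = t, hb + hf
    return best_t

if __name__ == "__main__":
    # Synthetic bimodal image: dark background, bright foreground.
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(60, 10, 5000),
                          rng.normal(180, 15, 5000)])
    img = np.clip(img, 0, 255).astype(np.uint8)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    print("maximum-entropy threshold:", kapur_threshold(hist))
```

Pixels above the returned threshold are assigned to the foreground class. The relative-entropy criterion works in the opposite direction, minimising the information discrepancy between the original image and its thresholded version rather than maximising within-class entropy.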