This research addresses a common problem in edge detection: disjointed and incomplete edges that lead to the misdetection of visual objects. Entropy-based algorithms can potentially solve this problem by determining which object each pixel in an image belongs to. The paper therefore aims to evaluate the performance of entropy-based algorithms in producing closed-loop edges that represent object boundaries. The research uses entropy to quantify the uncertainty of a pixel's membership in the existing objects and thereby classify each pixel as edge or object. Six entropy-based algorithms are evaluated, i.e., the optimum entropy based on the Shannon formula, the optimum relative entropy based on the Kullback-Leibler divergence, the maximum of the optimum entropy neighbor, the minimum of the optimum relative-entropy neighbor, the thinning of the optimum entropy neighbor, and the thinning of the optimum relative-entropy neighbor. An experiment compares the developed algorithms against the Canny detector as a benchmark using five performance parameters, i.e., the average number of detected objects, the average number of detected edge pixels, the average size of detected objects, the ratio of edge pixels per object, and the average of the ten largest object sizes. The experiment shows that the entropy-based algorithms significantly improve the production of closed-loop edges, and the optimum relative-entropy neighbor based on the Kullback-Leibler divergence is the most desirable approach because it produces, on average, larger closed-loop edges. This finding suggests that the entropy-based algorithm is the best choice for edge-based segmentation. The effectiveness of entropy in the segmentation task is left for further research.
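For reference, the two information measures named above have standard textbook definitions; the abstract does not specify how the paper maps them onto pixel-membership probabilities, so the following is only the general form and not the authors' exact formulation:

\[
H(P) = -\sum_{i} p_i \log p_i, \qquad
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{i} p_i \log \frac{p_i}{q_i},
\]

where, under this reading, \(P\) would describe a pixel's membership distribution over candidate objects and \(Q\) a reference distribution (for example, that of a neighboring region); the optimum-entropy and optimum-relative-entropy criteria described in the paper are presumably built on these quantities.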