Abstract

Multiobjective segmentation algorithms are based on an objective function, consisting of two or more terms, that is minimized using an optimization algorithm. The objective terms represent competing segmentation objectives, the most common being the statistical likelihood of pixel values and the smoothness of segment boundaries. Many assumptions are built into the objective function, and we present a case study based on the algorithm of Stewart to demonstrate the importance of analyzing algorithm characteristics to test the validity of hidden assumptions. We develop a set of simulated test images and a novel segmentation performance metric for use with simulated data. An innovative aspect of the Stewart algorithm (SA) is the probability of false alarm (PFA) model used to weight the objective terms, which is intended to dynamically balance the terms as the algorithm progresses. The PFA model is valid only for false edges, and we show that the number of selected true edges increases as segmentation evolves, making the theoretical weight model increasingly invalid. In addition, we found problems with several other algorithm assumptions. We tested the SA against a fixed-weight version and found that the SA performed worse. Thus, while the two-term objective function algorithm does deliver reasonable performance for multilook data, the fixed-weight version gives better performance. Although these results hold only for simulated data, we believe the experimental results indicate the need for a more powerful approach to multiobjective synthetic aperture radar segmentation.
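
For context, a two-term objective of the kind described above can be written as a weighted sum; this is an illustrative formulation only, not the exact objective used in the Stewart algorithm:

    E(S) = w_L · L(S) + w_B · B(S),

where S is a candidate segmentation, L(S) measures the statistical (mis)fit of the pixel values to the segments (e.g., a negative log-likelihood), B(S) penalizes rough segment boundaries, and the weights w_L and w_B set the trade-off between the two objectives. In the SA the weights are derived from the PFA model and change as segmentation evolves; in the fixed-weight version they are held constant throughout.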
