Abstract

A general framework for quantifying the worth of a performance-estimation model is proposed. The purpose of the model is to predict the performance of an automatic target recognition algorithm on a given set of test data, while the purpose of the framework is to quantify how well the model fulfills its task. To this end, a quantity referred to as the utility, which is based on the Kullback–Leibler divergence, is introduced. A key aspect of the framework is the inclusion of a significance function that specifies the relative importance of each point in the performance space, here assumed to be defined in terms of false alarm rate and probability of detection. Example significance functions are suggested and discussed. The functionality of the proposed framework is demonstrated on an underwater target detection application involving measured synthetic aperture sonar data. In this context, an image complexity metric is exploited to enable the development of models corresponding to different seafloor conditions and mine-hunting difficulty. The appeal of the framework is its ability to quantitatively assess the utility of competing performance-estimation models and to fairly compare the utility of a model on different test data sets.
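The abstract does not give the exact definition of the utility, only that it is built on the Kullback–Leibler (KL) divergence and weighted by a significance function over the (false alarm rate, probability of detection) performance space. The sketch below illustrates one plausible instantiation of that idea under stated assumptions; the function names (`significance_gaussian`, `weighted_kl_divergence`), the Gaussian significance form, and the cell-wise weighting scheme are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch of a significance-weighted KL-divergence utility.
# Assumption: the performance space (Pd, FAR) is discretized on a grid, and
# the model-predicted and observed performance distributions are compared
# cell by cell, each cell weighted by its significance.
import numpy as np

def significance_gaussian(pd, far, pd0=0.9, far0=0.05, sigma=0.2):
    """Hypothetical significance function: emphasizes the region of the
    (probability of detection, false alarm rate) space near an operating
    point of interest (pd0, far0)."""
    return np.exp(-((pd - pd0) ** 2 + (far - far0) ** 2) / (2.0 * sigma ** 2))

def weighted_kl_divergence(p_model, p_observed, significance, eps=1e-12):
    """Significance-weighted variant of the KL divergence between the
    model-predicted and observed performance distributions. Lower values
    indicate closer agreement in the regions deemed important."""
    p = np.clip(p_model, eps, None)
    q = np.clip(p_observed, eps, None)
    # Pointwise KL contribution, scaled by the significance of each cell.
    return float(np.sum(significance * p * np.log(p / q)))

# Discretize the (Pd, FAR) performance space on a grid.
pd_axis = np.linspace(0.0, 1.0, 50)
far_axis = np.linspace(0.0, 1.0, 50)
pd_grid, far_grid = np.meshgrid(pd_axis, far_axis, indexing="ij")
w = significance_gaussian(pd_grid, far_grid)

# Placeholder distributions; in practice these would come from the
# performance-estimation model and from measured test data, respectively.
rng = np.random.default_rng(0)
p_model = rng.random(pd_grid.shape); p_model /= p_model.sum()
p_obs = rng.random(pd_grid.shape); p_obs /= p_obs.sum()

print("significance-weighted KL:", weighted_kl_divergence(p_model, p_obs, w))
```

Note that weighting the integrand in this way means the quantity is no longer a true KL divergence; it is used here only to convey how a significance function can concentrate the comparison on operationally relevant regions of the performance space.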
