Abstract

The ability of certain performance metrics to quantify how well a target recognition system under test (SUT) can correctly identify targets and non-targets is investigated. The SUT assigns each input a score between zero and one indicating the predicted probability that it is a target. Sampled target and non-target SUT score outputs are generated using representative sets of Beta probability densities. Two performance metrics, Area under the Receiver Operating Characteristic (AURC) and Confidence Error (CE), are analyzed. AURC quantifies how well the target and non-target score distributions are separated, and CE quantifies the statistical accuracy of each assigned score. CE and AURC are generated for many representative sets of Beta-distributed scores, and the metrics are calculated and compared using continuous methods as well as discrete (sampling) methods. For AURC, the two methods show close agreement. For CE, however, the sampled-data calculation differs from the continuous-distribution calculation. These differences arise because similar sampled scores are collected into bins, which weights the CE in proportion to the sum of target and non-target scores in each bin. An alternative weighted CE calculation is identified that uses maximum likelihood estimation of the density parameters, enabling sampled data to be processed with continuous methods.
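The discrete-versus-continuous AURC comparison described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Beta shape parameters, sample sizes, and function names are assumptions. The empirical AURC is computed as the Mann-Whitney probability that a random target score exceeds a random non-target score, and the continuous AURC as the integral of the target density times the non-target cumulative distribution.

```python
import math
import random

def beta_pdf(x, a, b):
    """Beta(a, b) probability density, evaluated via log-gamma for stability."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_norm)

def empirical_aurc(target_scores, nontarget_scores):
    """Discrete (sampling) AURC: fraction of (target, non-target) pairs in
    which the target score is higher, with ties counted as half."""
    wins = sum((t > n) + 0.5 * (t == n)
               for t in target_scores for n in nontarget_scores)
    return wins / (len(target_scores) * len(nontarget_scores))

def continuous_aurc(a_t, b_t, a_n, b_n, steps=4000):
    """Continuous AURC = P(target score > non-target score)
    = integral over [0, 1] of f_target(x) * F_nontarget(x) dx,
    approximated on a midpoint grid with a running non-target CDF."""
    h = 1.0 / steps
    aurc, cdf_n = 0.0, 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        cdf_n += beta_pdf(x, a_n, b_n) * h   # running non-target CDF
        aurc += beta_pdf(x, a_t, b_t) * cdf_n * h
    return aurc

if __name__ == "__main__":
    random.seed(0)
    # Illustrative, well-separated densities: targets Beta(5, 2), non-targets Beta(2, 5).
    targets = [random.betavariate(5, 2) for _ in range(800)]
    nontargets = [random.betavariate(2, 5) for _ in range(800)]
    print("sampled AURC:   ", empirical_aurc(targets, nontargets))
    print("continuous AURC:", continuous_aurc(5, 2, 2, 5))
```

For well-separated densities the two estimates agree closely, mirroring the AURC result reported in the abstract; identical target and non-target densities give an AURC near 0.5.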
