Abstract

Metrics that historically have been applied to quantify the performance of signal processing for source localization are algorithm-dependent. For example, performance of conventional beamforming or matched-field processing is usually quantified by main-peak width and secondary-peak levels of the beam response or spatial ambiguity function, while performance of Bayesian localization may be quantified by measures of the statistical dispersion of the a posteriori pdf of source location. While algorithm-dependent performance metrics permit comparisons within a given class of signal processing algorithms, they do not provide comparability across algorithm classes. The present work identifies fundamental information-theoretic quantities that can be used as metrics to quantify the source localization performance of diverse signal processing algorithms and thus provide for performance comparisons across signal-processor classes. These quantities include conditional entropy of source location given processor output, mutual information of source location and processor output, and cross-entropy of actual and posterior source-location probability distributions. Applications of these information-theoretic metrics are illustrated in examples of Bayesian localization, conventional beamforming, and matched-field processing of a time-harmonic source in a range-independent shallow-water acoustic waveguide. The results are interpreted in the light of the data processing inequality of information theory. [Work supported by ONR.]
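
For reference, the three metrics named in the abstract have standard information-theoretic definitions. The following is a brief sketch, with X denoting source location and Y the processor output (the symbols are illustrative and not taken from the paper itself):

    H(X | Y) = - \sum_{x,y} p(x, y) \log p(x | y)                      (conditional entropy of source location given processor output)
    I(X ; Y) = H(X) - H(X | Y) = \sum_{x,y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}   (mutual information of source location and processor output)
    H(p, q)  = - \sum_{x} p(x) \log q(x)                               (cross-entropy of the actual distribution p and the posterior q)

The data processing inequality referenced at the end of the abstract states that for any Markov chain X -> Y -> Z (for example: source location, received data, processor output), I(X ; Z) <= I(X ; Y); that is, no processing of the received data can increase the information it carries about the source location.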
