Abstract

This paper surveys feature saliency measures used in artificial neural networks. Saliency measures assess a feature's relative importance. We contrast two basic philosophies for measuring feature saliency within a feed-forward neural network. The first evaluates each feature with respect to relative changes in either the network's output or its probability of error; we refer to this as the derivative-based philosophy of feature saliency. Using this philosophy, we propose a new and more efficient probability-of-error measure. The second measures the relative size of the weight vector emanating from each feature; we refer to this as the weight-based philosophy of feature saliency. We derive several unifying relationships that hold among the derivative-based saliency measures, as well as between the derivative-based and weight-based measures. We also report experimental results for a target recognition problem using a number of derivative-based and weight-based saliency measures.
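The two philosophies contrasted above can be sketched concretely. The following is a minimal, illustrative example (not the paper's actual measures or experimental setup): for a small one-hidden-layer network, a derivative-based saliency averages the magnitude of the output's gradient with respect to each input over a sample, while a weight-based saliency takes the norm of the weight vector emanating from each input. All names and the toy network are assumptions made for illustration.

```python
import numpy as np

# Toy one-hidden-layer feed-forward network (illustrative only).
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W1 = rng.normal(size=(n_hid, n_in))   # input-to-hidden weights
b1 = rng.normal(size=n_hid)
W2 = rng.normal(size=(1, n_hid))      # hidden-to-output weights
b2 = rng.normal(size=1)

def derivative_saliency(X):
    """Derivative-based: mean |d output / d x_i| over a sample of inputs X."""
    sal = np.zeros(n_in)
    for x in X:
        h = np.tanh(W1 @ x + b1)
        # Chain rule for y = W2 @ tanh(W1 x + b1) + b2:
        # dy/dx = W2 @ diag(1 - h^2) @ W1
        grad = (W2 * (1.0 - h**2)) @ W1   # shape (1, n_in)
        sal += np.abs(grad[0])
    return sal / len(X)

def weight_saliency():
    """Weight-based: L2 norm of the weight vector leaving each input feature."""
    return np.linalg.norm(W1, axis=0)

X = rng.normal(size=(100, n_in))
print("derivative-based saliency:", derivative_saliency(X))
print("weight-based saliency:   ", weight_saliency())
```

Both functions return one non-negative score per input feature; features can then be ranked (e.g., for feature selection) by either score.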
