Abstract

Two different types of adaptive networks are considered for solving the centralized and distributed hypothesis testing problem. The performance of the two types of networks is compared under different performance indices and training rules. It is shown that training rules based on the Neyman-Pearson criterion outperform error-based training rules. Simulations are provided for data that are linearly and nonlinearly separable.

I. INTRODUCTION

The optimum Bayesian and Neyman-Pearson solution to the distributed decision fusion problem bears striking similarities to the structure of a neural network (NN) [28,29]. Moreover, NNs can, in principle, learn arbitrary input-output mappings, provided that they are sufficiently smooth. These two facts motivate the use of NNs for solving the centralized and distributed hypothesis testing problem. In selecting the proper NN layout, one could argue that a perceptron-type NN can learn any input-output mapping, and thus it can be trained to solve the hypothesis testing problem. However, the ability of a perceptron-type NN to learn an
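The Neyman-Pearson criterion invoked above can be illustrated with a minimal scalar example. This sketch is hypothetical and not taken from the paper: it assumes a unit-mean shift in Gaussian noise and a false-alarm level of 0.05, and it shows the detector that a Neyman-Pearson-based training rule would aim to approximate.

```python
import math

# Neyman-Pearson detector for a unit-mean shift in Gaussian noise
# (illustrative sketch; the distributions and alpha are assumptions):
#   H0: x ~ N(0, 1)    H1: x ~ N(1, 1)
# The likelihood ratio is monotone increasing in x, so the NP-optimal
# test compares x to a threshold t chosen to meet the false-alarm
# constraint P(x > t | H0) = alpha.

def std_normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def np_threshold(alpha: float) -> float:
    """Find t with P(x > t | H0) = alpha by bisection on the CDF."""
    lo, hi = -10.0, 10.0
    while hi - lo > 1e-10:
        mid = 0.5 * (lo + hi)
        if 1.0 - std_normal_cdf(mid) > alpha:
            lo = mid          # false-alarm rate still too high: move right
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = 0.05
t = np_threshold(alpha)                     # threshold near 1.645
p_detect = 1.0 - std_normal_cdf(t - 1.0)    # detection probability under H1
```

A training rule built on this criterion would adjust the network weights to maximize the detection probability subject to the false-alarm constraint, rather than minimizing a symmetric error count over both hypotheses.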
