Abstract

The value of bottom-up and robust neural network designs is demonstrated, as is the performance superiority of recurrent neural structures over feedforward neural architectures. Along these lines, two neural structures for binary hypothesis testing are considered, one feedforward and one recurrent. The first, FFS1, is a tandem feedforward structure, whereas the second, FFS2, is recurrent and involves cumulative forward feedback. Both parametric and robust designs for the two structures are considered and analyzed in terms of the induced false alarm and power probabilities, and the inferiority of FFS1 is rigorously established in terms of the rate at which the induced power probability increases with the number of neural elements. Asymptotic as well as numerical results are presented, with emphasis on the Gaussian, location-parameter nominal hypotheses model, which clearly exhibit the superiority of the robust designs. Learning algorithms for the parameters involved in the robust network designs are also discussed.
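The abstract does not give the exact recursions of FFS1 and FFS2, so the following Monte Carlo sketch only illustrates the flavor of the comparison on the Gaussian location-parameter model it mentions: a hypothetical tandem structure in which each element forwards only its binary decision, versus a cumulative-feedback variant in which the running sum of all previous decisions is forwarded. The threshold `thr`, the shift `mu = 1`, and the `simulate` helper are illustrative assumptions, not the paper's actual designs.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_elements, mu, cumulative, n_trials=200_000, thr=0.5):
    """Monte Carlo estimate of the network's probability of accepting H1.

    Each of the n_elements elements observes one Gaussian sample:
    N(0, 1) under H0 and N(mu, 1) under H1 (a location-parameter model).
    Hypothetical stand-ins for FFS1 (tandem: only the previous element's
    binary decision is passed forward) and FFS2 (cumulative: the running
    sum of all previous decisions is fed forward).
    """
    x = rng.normal(mu, 1.0, size=(n_trials, n_elements))
    carry = np.zeros(n_trials)  # forwarded signal entering each element
    for i in range(n_elements):
        # local test statistic: own observation plus the forwarded signal
        d = (x[:, i] + carry > thr).astype(float)
        carry = carry + d if cumulative else d
    # the last element's decision is taken as the network's final decision
    return d.mean()

for cumulative in (False, True):
    pfa = simulate(8, 0.0, cumulative)  # false alarm: data drawn under H0
    pd = simulate(8, 1.0, cumulative)   # power: data drawn under H1
    label = "cumulative feedback" if cumulative else "tandem"
    print(f"{label:>20}: P_FA ~ {pfa:.3f}, P_D ~ {pd:.3f}")
```

In a faithful comparison the per-element thresholds would be chosen to equalize the induced false alarm probability before comparing powers; the fixed common threshold above is a simplification for illustration only.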
