Abstract

Feedforward networks having a one-to-one correspondence between input and output units are readily trained using backpropagation to perform auto-associative mappings. A novelty filter is obtained by subtracting the network output from the input vector. The presentation of a ‘familiar’ pattern then tends to evoke a null response, but any anomalous component is enhanced. This principle motivates the design of an Adaptive Novelty Filter (ANF) to enhance the detectability of weak signals added to a statistically stationary or slowly-varying noise background and to serve as a pre-processor to any device which performs signal detection, estimation, or classification. The ability of the ANF to enhance the detectability of weak signals in a wideband ocean acoustic background was measured by comparing the signal-to-noise ratios out of two matched filter detectors, one of which received the time series directly while the other received the output of the ANF. The resulting Detectability Enhancement Ratio (DER) was found to increase with the number of hidden units for the first several thousand iterations of the learning algorithm. Subsequent evolution of the network pushes the noise power lower, but the DER likewise drops off. We explore the causes of this phenomenon by studying the internal behavior of the auto-associative network as it learns to reconstruct the input vectors as linear combinations of intrinsic basis vectors, each of which is defined by the weights of connections fanning out from a single hidden unit to the output layer.
