Abstract

In this paper, a new modified hybrid learning algorithm for feedforward neural networks is proposed to obtain better generalization performance. To penalize both the sensitivity of the input-to-output mapping and the high-frequency components in the training data, two additional cost terms are introduced: the first is based on the first-order derivatives of the neural activations at the hidden layers, and the second on the second-order derivatives of the neural activations at the output layer. Finally, theoretical justifications and simulation results are given to verify the efficiency and effectiveness of the proposed learning algorithm.

Keywords: hidden layer, output layer, feedforward neural network, high frequency component, modified algorithm
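As an illustration only (the exact formulation appears in the paper body, not in this abstract), a hybrid cost of the kind described above can be sketched as a standard squared-error term plus two derivative-based penalties; the weights $\lambda_{1}$, $\lambda_{2}$, the net inputs $net_{j}^{(p)}$, $net_{k}^{(p)}$, and the activation functions $f_{h}$, $f_{o}$ are assumed notation, not taken from the source:

\[
E \;=\; \underbrace{\tfrac{1}{2}\sum_{p}\sum_{k}\bigl(t_{k}^{(p)}-y_{k}^{(p)}\bigr)^{2}}_{\text{standard squared-error term}}
\;+\;\lambda_{1}\,\underbrace{\sum_{p}\sum_{j}\Bigl(f_{h}'\bigl(net_{j}^{(p)}\bigr)\Bigr)^{2}}_{\text{hidden-layer first-derivative penalty}}
\;+\;\lambda_{2}\,\underbrace{\sum_{p}\sum_{k}\Bigl(f_{o}''\bigl(net_{k}^{(p)}\bigr)\Bigr)^{2}}_{\text{output-layer second-derivative penalty}}
\]

Here the first penalty discourages a highly sensitive input-to-output mapping at the hidden layers, while the second suppresses high-frequency components fitted at the output layer, in the spirit of the two cost terms described in the abstract.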
