Abstract

Feed-forward neural networks trained with back-propagation are an effective tool for automating the classification of biomedical signals. Most neural network research to date has aimed at accelerating learning speed. In the medical context, however, generalisation may be more important than learning speed. With the brain stem auditory evoked potential classification task described in this study, the authors found that parameter values that gave the fastest learning could result in poor generalisation. To achieve maximum generalisation, it was necessary to fine tune the neural net for gain, momentum, batch size, and hidden layer size. Although this maximisation could be time consuming, especially with larger training sets, the authors' results suggest that fine tuning parameters can have important clinical consequences, which justifies the time involved. In the authors' case, fine tuning parameters for high generalisation had the additional effect of reducing false negative classifications, with only a small sacrifice in learning speed.
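The tuning procedure the abstract describes, sweeping gain (learning rate), momentum, batch size, and hidden-layer size, and selecting the combination by held-out accuracy rather than training speed, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the synthetic data, grid values, and network size are all assumptions standing in for the evoked-potential features used in the study.

```python
# Hypothetical sketch (not the authors' implementation): grid search over
# gain (learning rate), momentum, batch size, and hidden-layer size of a
# small feed-forward net, selecting by validation accuracy (generalisation)
# rather than by how quickly the training error falls.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d=10):
    # Synthetic two-class data standing in for evoked-potential features.
    X = rng.normal(size=(n, d))
    w = rng.normal(size=d)
    y = (X @ w + 0.5 * rng.normal(size=n) > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_mlp(X, y, hidden, lr, momentum, batch, epochs=30):
    d = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden);      b2 = 0.0
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = 0.0
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch):
            idx = order[start:start + batch]
            xb, yb = X[idx], y[idx]
            h = sigmoid(xb @ W1 + b1)                 # forward pass
            p = sigmoid(h @ W2 + b2)
            err = p - yb                              # back-propagated error
            gW2 = h.T @ err / len(idx); gb2 = err.mean()
            dh = np.outer(err, W2) * h * (1 - h)
            gW1 = xb.T @ dh / len(idx); gb1 = dh.mean(axis=0)
            # momentum update of all weights
            vW1 = momentum * vW1 - lr * gW1; W1 += vW1
            vb1 = momentum * vb1 - lr * gb1; b1 += vb1
            vW2 = momentum * vW2 - lr * gW2; W2 += vW2
            vb2 = momentum * vb2 - lr * gb2; b2 += vb2
    def predict(Xq):
        return (sigmoid(sigmoid(Xq @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
    return predict

X_tr, y_tr = make_data(300)
X_va, y_va = make_data(200)   # held-out set: measures generalisation

best = None
for hidden in (4, 16):
    for lr in (0.1, 1.0):              # "gain"
        for momentum in (0.0, 0.9):
            for batch in (10, 50):
                predict = train_mlp(X_tr, y_tr, hidden, lr, momentum, batch)
                acc = (predict(X_va) == y_va).mean()
                if best is None or acc > best[0]:
                    best = (acc, hidden, lr, momentum, batch)

print("best validation accuracy %.2f with hidden=%d lr=%.1f mom=%.1f batch=%d"
      % best)
```

As the abstract notes, the combination that learns fastest (e.g. a high gain) is not necessarily the one that generalises best; scoring each grid point on the held-out set is what makes the selection criterion generalisation rather than speed.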
