Abstract

To simulate central auditory responses to complex sounds, a computational model was implemented. It consists of a multi-scale classification process and an artificial neural network composed of two modules of finite impulse response (FIR) neural networks connected to a maximum network. Electrical activities of single auditory neurons were recorded in the rat midbrain in response to a repetitive pseudo-random frequency-modulated (FM) sound. The multi-scale classification process divides the training dataset into strong and weak responses using a multiple-scale Gaussian filter based on response probability. Two modules of FIR neural networks are then independently trained to model the two types of responses, catering for possible differences in neuronal circuitry and transmission delay. Their outputs are connected to a maximum network to generate the final output. After training, a different set of FM responses collected from the same neuron was used to test the performance of the model. Two criteria were adopted for assessment: one measures the match between the modeled output and the actual output on a point-to-point basis; the other measures the match of bulk responses between the two. Results show that the proposed model predicts the responses of central auditory neurons satisfactorily.
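
The following is a minimal sketch of the kind of architecture the abstract describes, assuming each FIR neuron is a tapped-delay-line filter followed by a sigmoid nonlinearity and that the maximum network takes an element-wise maximum of the two module outputs. The function names, filter lengths, and weights are illustrative placeholders, not the authors' implementation or trained parameters.

```python
import numpy as np

def fir_module(x, taps, bias):
    """One FIR neural-network module (assumed form): a tapped-delay-line
    (FIR) filter over the input history, followed by a sigmoid output.

    x    : 1-D input signal (e.g. instantaneous frequency of the FM stimulus)
    taps : FIR filter coefficients learned during training (hypothetical here)
    bias : scalar offset
    """
    # Truncating the full convolution to len(x) samples keeps the filter causal.
    z = np.convolve(x, taps)[: len(x)] + bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

def max_network(y_strong, y_weak):
    """Combine the strong- and weak-response modules by element-wise maximum."""
    return np.maximum(y_strong, y_weak)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stimulus = rng.standard_normal(200)          # placeholder FM stimulus trace

    # Hypothetical tap weights standing in for the two independently trained modules.
    taps_strong, bias_strong = 0.5 * rng.standard_normal(8), -0.2
    taps_weak,   bias_weak   = 0.1 * rng.standard_normal(8), -0.5

    y = max_network(fir_module(stimulus, taps_strong, bias_strong),
                    fir_module(stimulus, taps_weak, bias_weak))
    print(y.shape)  # predicted response per time bin
```

In this sketch the two modules share the same input but are parameterized separately, mirroring the abstract's rationale that strong and weak responses may arise from different neuronal circuitry and transmission delays; the maximum network simply lets whichever module responds more strongly dominate the final prediction at each time bin.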
