Abstract
Bottlenose dolphins (genus Tursiops) were trained in "choice reaction time" experiments to respond to each of two different types of acoustic stimuli by producing the corresponding one of two different vocalizations. An acoustic time history for each stimulus–response pair was obtained by digitizing the voltage output of a nearby hydrophone. Determining response time required identifying the onset of the response vocalization within the digitized waveform; distinguishing "correct" from "incorrect" responses required categorizing each response waveform as "whistle" or "pulsatile." Both determinations were complicated by the presence of background noise and by variability in the characteristics of response vocalizations among the several dolphins. A four-layer feed-forward neural network (using neurons with identity activation functions and unit-gain sigmoidal transfer functions) was developed which achieved a mean error of <1.9 ms in estimating response time and a response-type recognition rate of ≳98% on a test set of one thousand 100-ms vocalization samples from three dolphins, when trained by backpropagation on a training set of two hundred 100-ms vocalization samples from the same dolphins. The performance of this neural-network architecture (which can operate at real-time rates) surpasses that previously achieved using offline discriminant analysis (BMDP/PC-90) applied to vocalization samples of much greater (e.g., 700-ms) duration.
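The abstract names the architecture but not its implementation details. The following is a minimal Python/NumPy sketch of a four-layer feed-forward network of the kind described: each neuron forms a weighted sum of its inputs (an identity activation function), passes it through a unit-gain logistic sigmoid, and the weights are adjusted by backpropagation of squared error. The layer widths, the 20-kHz sampling rate, the output encoding (normalized onset time plus a whistle/pulsatile probability), and all identifiers are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    """Unit-gain logistic transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

class FeedForwardNet:
    """Fully connected feed-forward net: identity activation (weighted sum)
    at each neuron followed by a unit-gain sigmoid, as in the abstract.
    Everything else here is an illustrative assumption."""

    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix and bias vector per layer-to-layer connection.
        self.W = [rng.normal(0.0, 0.1, (m, n)) for n, m in zip(sizes, sizes[1:])]
        self.b = [np.zeros(m) for m in sizes[1:]]

    def forward(self, x):
        # Keep every layer's activations for use by backpropagation.
        acts = [np.asarray(x, dtype=float)]
        for W, b in zip(self.W, self.b):
            acts.append(sigmoid(W @ acts[-1] + b))
        return acts

    def train_step(self, x, target, lr=0.1):
        """One backpropagation step on squared error for a single sample."""
        acts = self.forward(x)
        # Output-layer error term: dE/dnet = (y - t) * y * (1 - y).
        delta = (acts[-1] - target) * acts[-1] * (1.0 - acts[-1])
        for i in reversed(range(len(self.W))):
            grad_W = np.outer(delta, acts[i])
            grad_b = delta
            if i > 0:
                # Propagate the error term to the previous hidden layer
                # using the pre-update weights.
                delta = (self.W[i].T @ delta) * acts[i] * (1.0 - acts[i])
            self.W[i] -= lr * grad_W
            self.b[i] -= lr * grad_b
        return acts[-1]

# Hypothetical usage: a 100-ms window digitized at 20 kHz (2000 samples),
# with two outputs: onset time normalized to the window, and P(whistle).
net = FeedForwardNet([2000, 64, 16, 2])
waveform = np.random.default_rng(1).normal(size=2000)  # stand-in for hydrophone data
net.train_step(waveform, np.array([0.35, 1.0]))        # onset at 35 ms; a whistle
```

Under this assumed encoding, a normalized onset-time error of 0.019 over a 100-ms window would correspond to the 1.9-ms mean timing error the abstract reports, and thresholding the second output at 0.5 would yield the whistle/pulsatile decision.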