Abstract
A discrete-time version of the Dynamic Synapses Neural Network (DSNN) has been developed and applied to speech recognition. To speed up network training, a new discrete-time implementation of the original DSNN [J.-S. Liaw and T. W. Berger, 1996] has been introduced, based on the impulse-invariant transformation. The new network architecture was trained with genetic algorithms [H. H. Namarvar et al., 2001] and tested against the continuous-time DSNN. The overall speed of the new algorithm, using the discrete-time difference-equation set, is about 13 times faster than that of the same algorithm with the continuous differential-equation set. This significant reduction in processing time not only shortens training but also makes the system better suited for real-time speech recognition tasks. [Work supported by DARPA.]
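The abstract does not reproduce the DSNN equations, so the sketch below only illustrates the general idea of impulse-invariant discretization on an assumed first-order synaptic state, tau * dx/dt = -x + u(t): sampling its impulse response h(t) = (1/tau) exp(-t/tau) at interval T gives a one-multiply-add-per-sample difference equation, which is the kind of replacement for numerical ODE integration that accounts for the reported speedup. All parameter values and the specific dynamics here are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Assumed first-order synaptic dynamics (the actual DSNN equations are not
# given in the abstract): tau * dx/dt = -x + u(t).
# Impulse invariance samples the continuous impulse response h(t) = (1/tau) * exp(-t/tau)
# as h[n] = T * h(nT), which yields the recursive difference equation below.

def discretize_impulse_invariant(tau, T):
    """Return (a, b) such that x[n] = a * x[n-1] + b * u[n]."""
    a = np.exp(-T / tau)   # discrete pole exp(-T/tau) from the continuous pole -1/tau
    b = T / tau            # gain T * h(0) from sampling the impulse response
    return a, b

def run_discrete(u, tau, T):
    """Simulate the discrete-time synapse: one multiply-add per sample."""
    a, b = discretize_impulse_invariant(tau, T)
    x = np.zeros_like(u, dtype=float)
    for n in range(1, len(u)):
        x[n] = a * x[n - 1] + b * u[n]
    return x

if __name__ == "__main__":
    T = 1e-3                          # 1 ms sampling interval (assumed)
    tau = 20e-3                       # 20 ms synaptic time constant (assumed)
    t = np.arange(0.0, 0.2, T)
    u = (t >= 0.05).astype(float)     # step input arriving at 50 ms
    x = run_discrete(u, tau, T)
    print("final synaptic state:", x[-1])
```

Because each state update reduces to a single recurrence, the per-sample cost is fixed and small, whereas integrating the continuous differential equations requires many sub-steps per sample; this is consistent with the roughly 13x speedup reported above, although the exact figure depends on the solver and step size used in the continuous-time implementation.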