Abstract

Speech, one of the earliest forms of human communication, conveys information effectively. However, current deep neural network models for speech recognition are generally large and are typically deployed in the cloud, which imposes demanding deployment environments and high power consumption, limiting their use on embedded devices. End-to-end speech recognition on such devices faces a series of challenges, including power consumption constraints, limited computing power, network dependence, privacy protection, bandwidth restrictions, and communication delays. To address these issues, this paper proposes an end-to-end voice command recognition chip based on deep neural networks, designed to recognize voice commands in specific scenarios with low power consumption and low recognition latency. In addition, we introduce a weight-reloadable chip architecture that enables seamless scene migration, ultimately resolving the aforementioned challenges.
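
The abstract does not describe the reload mechanism itself, but the idea of scene migration through weight reloading can be illustrated with a minimal sketch. Everything below is hypothetical (the class name, memory size, and command sets are illustrative assumptions, not details from the paper): the same fixed-function accelerator is retargeted to a new scenario by overwriting its on-chip weight memory and command vocabulary.

    import numpy as np

    class KeywordSpottingAccelerator:
        """Toy model of a weight-reloadable voice-command chip (illustrative only)."""

        def __init__(self, weight_mem_bytes: int):
            self.weight_mem_bytes = weight_mem_bytes  # fixed on-chip weight memory
            self.weights = None                       # quantized network weights
            self.commands = []                        # command vocabulary for the current scene

        def load_scene(self, weights: np.ndarray, commands: list[str]) -> None:
            """Overwrite on-chip weights and vocabulary to migrate to a new scene."""
            if weights.nbytes > self.weight_mem_bytes:
                raise ValueError("scene model does not fit in on-chip weight memory")
            self.weights = weights.astype(np.int8)    # assume 8-bit quantized weights
            self.commands = commands

        def recognize(self, scores: np.ndarray) -> str:
            """Map network output scores to the current scene's command set."""
            return self.commands[int(np.argmax(scores))]

    # Example: migrate the same chip from a smart-home scene to an in-car scene
    chip = KeywordSpottingAccelerator(weight_mem_bytes=256 * 1024)
    chip.load_scene(np.zeros((64, 64)), ["light on", "light off", "fan on"])
    chip.load_scene(np.zeros((64, 64)), ["open window", "play music", "navigate home"])
    print(chip.recognize(np.array([0.1, 0.7, 0.2])))  # -> "play music"

In a hardware realization, the reload step would correspond to writing a new quantized weight image into the accelerator's weight SRAM rather than retraining or refabricating the chip; the sketch only captures that contract.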
