Abstract
State-of-the-art large vocabulary speech recognition systems consist of several components, including hidden Markov models and deep neural networks. To achieve the highest recognition performance, numerous meta-parameters specifying the design and training setup of these components must be optimized. A prominent obstacle in system development is the laborious effort required from human experts to tune these meta-parameters. To automate the process, we propose to tune the meta-parameters of a whole large vocabulary speech recognition system using an evolution strategy with multi-objective Pareto optimization. As a result of the evolution, the system is optimized for both low word error rate and compact model size. Since the approach requires repeated training and evaluation of recognition systems, which is computationally expensive, we make use of parallel computation on cloud computers. Experimental results show the effectiveness of the proposed approach, which automatically discovers appropriate configurations for large vocabulary speech recognition systems.
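To make the idea concrete, the sketch below illustrates a generic multi-objective evolution strategy with Pareto-based selection over an ASR meta-parameter space, with candidate evaluations run in parallel. The search space, mutation scheme, and the evaluate() proxy are illustrative assumptions for this sketch only, not the authors' actual system or meta-parameter set.

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Hypothetical meta-parameter search space (names and ranges are illustrative).
SEARCH_SPACE = {
    "num_hmm_states": (1, 5),        # states per phone HMM
    "num_hidden_layers": (2, 8),     # DNN depth
    "hidden_units": (256, 2048),     # DNN width
    "lm_weight": (5.0, 20.0),        # language-model scale at decoding
}

def sample_candidate():
    """Draw one configuration uniformly from the space (treated as continuous)."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}

def mutate(parent, sigma=0.1):
    """Simple evolution-strategy mutation: Gaussian perturbation, clipped to range."""
    child = {}
    for k, (lo, hi) in SEARCH_SPACE.items():
        val = parent[k] + random.gauss(0.0, sigma * (hi - lo))
        child[k] = min(max(val, lo), hi)
    return child

def evaluate(config):
    """Placeholder for the expensive step: train and decode a full ASR system with
    this configuration and return (word_error_rate, model_size).  A synthetic proxy
    stands in here so the sketch runs end to end; it is not a real ASR evaluation."""
    size = config["num_hidden_layers"] * config["hidden_units"] ** 2
    wer = 30.0 / (1.0 + 1e-6 * size) + random.uniform(0.0, 2.0)
    return (wer, size)

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in all objectives, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population, scores):
    """Keep only non-dominated (configuration, score) pairs."""
    return [(population[i], s) for i, s in enumerate(scores)
            if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i)]

def evolve(generations=10, pop_size=20):
    population = [sample_candidate() for _ in range(pop_size)]
    front = []
    for _ in range(generations):
        # Evaluate candidates in parallel; in the paper's setting each evaluation
        # is a full system build, hence the use of cloud machines.
        with ProcessPoolExecutor() as pool:
            scores = list(pool.map(evaluate, population))
        front = pareto_front(population, scores)
        # Next generation: survivors of the Pareto front plus their mutated offspring.
        parents = [cfg for cfg, _ in front]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return front
```

Because ProcessPoolExecutor spawns worker processes, evolve() should be called under an `if __name__ == "__main__":` guard; the returned Pareto front then contains the trade-off between word error rate and model size rather than a single best configuration.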