Abstract

Recent years have witnessed wider adoption of Automated Speech Recognition (ASR) techniques across various domains. Consequently, evaluating and enhancing the quality of ASR systems is of great importance. This paper proposes Asdf, an Automated Speech Recognition Differential Testing Framework for testing ASR systems. Asdf extends an existing ASR testing tool, CrossASR++, which synthesizes test cases from a text corpus. However, CrossASR++ does not use the text corpus efficiently and provides limited information on how the failed test cases can help improve ASR systems. To address these limitations, our tool incorporates two novel features: (1) a text transformation module that boosts the number of generated test cases and uncovers more errors in ASR systems, and (2) a phonetic analysis module that identifies the phonemes ASR systems tend to transcribe incorrectly. Asdf generates more high-quality test cases by applying various text transformation methods (e.g., changing tense) to the input text of a failed test case. In this way, Asdf can turn a small text corpus into a large number of audio test cases, which CrossASR++ cannot do. In addition, Asdf implements more metrics to evaluate the performance of ASR systems from multiple perspectives. Asdf also performs phonetic analysis on the identified failed test cases to pinpoint the phonemes that ASR systems tend to transcribe incorrectly, providing useful information for developers to improve ASR systems. A demonstration video of our tool is available at https://www.youtube.com/watch?v=DzVwfc3h9As, and the implementation is available at https://github.com/danielyuenhx/asdf-differential-testing.
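
To illustrate the kind of phonetic analysis described above, the following is a minimal sketch (not Asdf's actual implementation): it aligns the phoneme sequence of a reference text against an ASR transcription and tallies which reference phonemes were dropped or substituted. It assumes the third-party `pronouncing` package (a CMU Pronouncing Dictionary wrapper); the function names are illustrative only.

```python
# Minimal sketch of phoneme-level error counting for a failed ASR test case.
# Assumes the `pronouncing` package (pip install pronouncing); not Asdf's API.
from collections import Counter
from difflib import SequenceMatcher

import pronouncing


def to_phonemes(text):
    """Convert text to a flat ARPAbet phoneme list (first dictionary entry per word)."""
    phones = []
    for word in text.lower().split():
        candidates = pronouncing.phones_for_word(word.strip(".,?!"))
        if candidates:
            phones.extend(candidates[0].split())
    return phones


def misrecognized_phonemes(reference, transcription):
    """Count reference phonemes that the ASR transcription failed to reproduce."""
    ref, hyp = to_phonemes(reference), to_phonemes(transcription)
    errors = Counter()
    for op, i1, i2, _, _ in SequenceMatcher(None, ref, hyp).get_opcodes():
        if op in ("replace", "delete"):  # phonemes substituted or dropped
            errors.update(ref[i1:i2])
    return errors


# Example: "three" transcribed as "free" points to a misrecognized TH phoneme.
print(misrecognized_phonemes("three brown cats", "free brown cats"))
```

Aggregating such counts over many failed test cases would surface the phonemes an ASR system misrecognizes most often, which is the kind of feedback the phonetic analysis module is meant to provide.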
