Abstract

This study proposes an effective data-building strategy for non-native Korean speech data for automatic speech recognition. To maximize the value of this low-resource L2 data, we designed the collection to target major L1 speaker populations, specify topic domains, and amplify L1-based phonetic errors, and accordingly collected over 4,300 hours of reading and free-speech data from speakers of Chinese, English, Japanese, Thai, Vietnamese, and 62 other languages. This dataset is expected to contribute substantially to reducing recognition errors for non-native speech and to support future research and model development for Korean language assessment and education.
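As a rough illustration of the kind of collection design described above, the sketch below shows one possible way to lay out recording quotas balanced across L1 group, topic domain, and speech type. It is only a minimal sketch under assumed details: the topic domain names, the even-split allocation, and all identifiers (CollectionSlot, build_plan) are illustrative and do not come from the paper.

```python
# Minimal, illustrative sketch of a quota plan for L1 x domain x speech-type
# cells. Group names beyond the abstract's list, domains, and the even split
# are assumptions for demonstration only.
from dataclasses import dataclass
from itertools import product

MAJOR_L1_GROUPS = ["Chinese", "English", "Japanese", "Thai", "Vietnamese"]
TOPIC_DOMAINS = ["daily life", "education", "travel"]  # assumed examples
SPEECH_TYPES = ["reading", "free"]

@dataclass
class CollectionSlot:
    l1: str
    domain: str
    speech_type: str
    target_hours: float

def build_plan(total_hours: float) -> list:
    """Spread a total recording budget evenly over all design cells."""
    cells = list(product(MAJOR_L1_GROUPS, TOPIC_DOMAINS, SPEECH_TYPES))
    per_cell = total_hours / len(cells)
    return [CollectionSlot(l1, dom, st, per_cell) for l1, dom, st in cells]

if __name__ == "__main__":
    # e.g., distribute a 4,300-hour budget and inspect the first few slots
    for slot in build_plan(total_hours=4300)[:5]:
        print(slot)
```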
