Abstract
In this research, we present a comparative analysis of two state-of-the-art speech recognition models, Whisper by OpenAI and XLSR-Wav2Vec2 by Facebook, applied to the low-resource Marathi language. Using the Common Voice 16 dataset, we evaluated the performance of these models with the word error rate (WER) metric. Our findings show that the Whisper (Small) model achieved a WER of 45%, while the XLSR-Wav2Vec2 model obtained a WER of 71%. This study sheds light on the capabilities and limitations of current speech recognition technologies for low-resource languages and provides insights for further research and development in this domain.
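As a minimal illustrative sketch (not the authors' exact evaluation pipeline), the WER figures reported above are typically computed as (substitutions + deletions + insertions) divided by the number of words in the reference transcript. The snippet below shows one common way to obtain a corpus-level WER with the open-source jiwer library; the reference and hypothesis strings are hypothetical placeholders standing in for Common Voice 16 Marathi test utterances and model outputs.

```python
# Illustrative WER computation; assumes: pip install jiwer
# WER = (substitutions + deletions + insertions) / reference word count
from jiwer import wer

# Hypothetical reference transcripts and model hypotheses (romanized for
# illustration); in the study these would be Common Voice 16 Marathi
# test sentences and the corresponding Whisper / XLSR-Wav2Vec2 outputs.
references = [
    "mi shaalet jaat aahe",
    "aaj havaman khup chaan aahe",
]
hypotheses = [
    "mi shalet jat aahe",
    "aaj havaman khup chan aahe",
]

# jiwer.wer accepts lists of strings and returns the corpus-level WER.
error_rate = wer(references, hypotheses)
print(f"WER: {error_rate:.2%}")
```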