Abstract

The Conversion of Speech to Text (CoST) is crucial for developing automated systems that understand and process voice commands. Research has increasingly addressed this task for Turkish, a strategically important language in the international arena, and for Turkish-specific voice commands in particular. However, researchers face various challenges, such as Turkish's agglutinative, suffix-based structure, its phonological features and unique letters, dialect and accent differences, word stress, word-initial vowel effects, background noise, and gender-based voice variations. To address these challenges, this study converts Turkish-specific audio clips, which have received limited attention in the literature, into text with high accuracy using different Machine Learning (ML) models, in particular Convolutional Neural Networks (CNNs) and Convolutional Recurrent Neural Networks (CRNNs). Experiments were conducted on a dataset of 26,485 Turkish audio clips, and performance was evaluated with various metrics; hyperparameters were also optimized to improve model performance. An F1-score above 97% was achieved, with the CRNN approach yielding the highest performance. In conclusion, this study provides valuable insights into the strengths and limitations of various ML models applied to CoST. Beyond potentially contributing to a wide range of applications, such as supporting hard-of-hearing individuals, facilitating note-taking, automatic captioning, and improving voice-command recognition systems, this study is among the first in the literature on CoST in Turkish.
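For illustration, the sketch below shows what a CRNN of the kind named above might look like for spoken-command classification over mel-spectrogram inputs. This is a minimal, hypothetical Keras configuration under stated assumptions, not the paper's reported architecture: the input shape, layer sizes, and the number of command classes (NUM_COMMANDS) are placeholders chosen for the example.

```python
# Illustrative sketch only: a minimal CRNN for spoken-command
# classification over mel-spectrograms. Layer sizes, input shape,
# and class count are assumptions, not the paper's configuration.
from tensorflow.keras import layers, models

NUM_COMMANDS = 35          # hypothetical number of Turkish command classes
INPUT_SHAPE = (98, 40, 1)  # assumed: 98 time frames x 40 mel bins x 1 channel

def build_crnn(input_shape=INPUT_SHAPE, num_classes=NUM_COMMANDS):
    inputs = layers.Input(shape=input_shape)
    # Convolutional front end extracts local time-frequency features.
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # Collapse the frequency and channel axes so the recurrent layer
    # sees a sequence of feature vectors over time.
    t, f, c = x.shape[1], x.shape[2], x.shape[3]
    x = layers.Reshape((t, f * c))(x)
    # Recurrent layer models temporal dependencies across frames.
    x = layers.Bidirectional(layers.GRU(64))(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_crnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In such a design, the convolutional front end captures local spectral patterns while the recurrent layer models their order in time, which is the usual motivation for preferring a CRNN over a plain CNN on speech inputs.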
