Abstract

Proficiency in oral English plays a significant role in many professional and academic settings. However, evaluating oral English proficiency precisely is challenging because of the subjective nature of oral communication. In this study, we developed an Oral English Evaluation System (OEES) that combines speech recognition technology with an adaptive scoring algorithm. First, an extensive oral English database containing voice recordings from numerous speakers was collected. The recordings were pre-processed to remove background noise, ensuring clarity of the voice signals. Next, an automatic speech recognition module based on a recurrent neural network (RNN) was developed to transcribe the voice signals into text; this module trains the system to map voice signals to their corresponding textual representations. Finally, an adaptive scoring algorithm was incorporated into the OEES to evaluate each individual's oral English proficiency. The algorithm considers factors such as fluency, grammar, pronunciation, and vocabulary, and rates proficiency as "excellent," "good," or "poor." The framework was implemented in Python and validated on diverse natural English-speaking databases. The experimental results were assessed in terms of accuracy, precision, recall, and error rate, and they indicate that the proposed OEES framework evaluates oral English proficiency accurately.
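The scoring stage described above can be sketched as follows. This is a minimal illustration only: the per-factor scores (fluency, grammar, pronunciation, vocabulary) are assumed to be produced upstream by the ASR and analysis modules, and the weights and thresholds below are hypothetical, not taken from the paper.

```python
# Minimal sketch of the OEES adaptive scoring stage (illustrative only).
# The four factor scores are assumed to lie in [0, 1]; the equal weights
# and the 0.8 / 0.5 thresholds are assumptions for demonstration.

def score_proficiency(fluency, grammar, pronunciation, vocabulary,
                      weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine per-factor scores into a proficiency label."""
    factors = (fluency, grammar, pronunciation, vocabulary)
    overall = sum(w * f for w, f in zip(weights, factors))
    if overall >= 0.8:
        return "excellent"
    if overall >= 0.5:
        return "good"
    return "poor"

print(score_proficiency(0.9, 0.85, 0.8, 0.9))   # high factor scores
print(score_proficiency(0.4, 0.5, 0.45, 0.4))   # low factor scores
```

An adaptive variant could adjust the weight tuple per learner or per task, which is one way the "adaptive" behavior named in the abstract could be realized.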
