Abstract

The rapid advancement of technology has made it possible to integrate technology into L2 pronunciation assessment. Although the investigation of L2 pronunciation has long been considered vital in English Language Teaching, the assessment of pronunciation has received the least attention. This study discusses the role and impact of O’Speak version 1.0 as an automated pronunciation assessment tool and compares it with human ratings of L2 segmental pronunciation features produced by Indonesian learners of English. The study pilots an Android-based pronunciation test, O’Speak, which was developed using Feuerstein’s Mediated Learning Experience principles. Conducted under a quasi-experimental research design, the study ran an independent two-sample t-test involving 50 participants. The results showed no statistically significant difference between O’Speak scores and human ratings in the segmental pronunciation assessment, indicating that the new tool performs on par with human raters. The study also identified several caveats in the human rating that may account for this parity, including teaching experience, the halo effect, and rating experience.
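As a minimal illustration of the statistical comparison described above, the sketch below runs an independent two-sample t-test on two hypothetical score arrays, one standing in for O’Speak scores and one for human ratings. The variable names, score scale, and sample values are assumptions for demonstration only and do not reflect the study’s actual data.

# Minimal sketch of an independent two-sample t-test comparing
# automated (O'Speak-style) scores with human ratings.
# The score arrays below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

# Hypothetical segmental pronunciation scores (0-100) for 50 learners,
# one score per learner from each rating source.
rng = np.random.default_rng(42)
ospeak_scores = rng.normal(loc=75, scale=8, size=50)
human_scores = rng.normal(loc=74, scale=9, size=50)

# Independent two-sample t-test (two-tailed).
t_stat, p_value = stats.ttest_ind(ospeak_scores, human_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# A p-value above the chosen alpha (e.g., 0.05) would indicate no
# statistically significant difference between the two rating sources.

In the study itself, such a result (p above the significance threshold) is what supports the conclusion that the automated tool and human raters score segmental pronunciation comparably.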
