Abstract

This paper proposes an Artificial Intelligence Speaking Program Model based on a voice recognition system, in which the level of English language service provided is adjusted to the behavioral characteristics of participants in elementary school classroom situations. To this end, the system’s capability to recognize human voice was analyzed by comparing the voice recognition rates of elementary school EFL students, Korean adults, and native speakers of English using the Google speech recognition engine. Particular attention was given to delineating a range of ‘correct’ answers defined in terms of the given learning objectives and target expressions. The AI Speaking Program consists of three stages: Word Talk, Sentence Talk, and Let’s Talk. The Word Talk stage is geared toward word-focused practice, through which learners improve the accuracy of their pronunciation. This is followed by the Sentence Talk stage, where learners practice constructing sentences. The final stage, Let’s Talk, engages learners in conversation with a chatbot, helping them improve fluency through dialogue. The AI Speaking Program is proposed as a resourceful service with the capability to respond relevantly to learners’ answers and to enrich dialogues with reinforcing expressions.
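
The abstract does not give implementation details, but the comparison of recognition rates and the matching of transcripts against a range of ‘correct’ answers can be illustrated with a minimal sketch. The sketch below assumes the open-source Python SpeechRecognition library as a stand-in for accessing the Google speech recognition engine; the file names, accepted-answer list, and helper functions are hypothetical and not taken from the paper.

    # Hypothetical sketch: transcribe recorded utterances with the Google engine
    # and measure how often the transcript falls within a predefined range of
    # "correct" answers for a target expression. All names below are assumptions.
    import speech_recognition as sr

    # Accepted variants of a target expression (illustrative only)
    ACCEPTED_ANSWERS = {
        "i want to play soccer",
        "i want to play soccer after school",
        "i'd like to play soccer",
    }

    def transcribe(wav_path: str) -> str:
        """Return the Google engine's transcript of a recorded utterance."""
        recognizer = sr.Recognizer()
        with sr.AudioFile(wav_path) as source:
            audio = recognizer.record(source)
        try:
            return recognizer.recognize_google(audio, language="en-US").lower()
        except sr.UnknownValueError:
            return ""  # the engine could not recognize the utterance

    def recognition_rate(wav_paths: list[str]) -> float:
        """Fraction of utterances whose transcript matches an accepted answer."""
        hits = sum(transcribe(p) in ACCEPTED_ANSWERS for p in wav_paths)
        return hits / len(wav_paths) if wav_paths else 0.0

    # Example: compare recognition rates for two speaker groups (paths are placeholders).
    # print(recognition_rate(["efl_student_01.wav", "efl_student_02.wav"]))
    # print(recognition_rate(["native_speaker_01.wav", "native_speaker_02.wav"]))

In the actual study, the accepted range of answers would be derived from the lesson’s learning objectives and target expressions rather than a fixed list.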
