Abstract

In this paper, we present an intelligent architecture, called the intelligent virtual environment for language learning, with embedded pedagogical agents for improving the listening and speaking skills of non-native English language learners. The proposed architecture integrates virtual environments into Intelligent Computer-Assisted Language Learning. The architecture supports visual, auditory, and haptic channels of interaction, and allows pedagogical ideas about language skills to be implemented and validated with minimal design time. Moreover, we design a computational model to evaluate the learner's proficiency level, and an automatic adaptation mechanism that adjusts to the learner's learning curve. We have implemented two scenarios based on the proposed architecture to teach learners how to communicate in public places such as airports and TV stores. Inputs to this system include the learner's speech and hand motion; outputs include graphical scenes, force feedback, and speech by a few embodied agents. Through these interactions, the agents discover the proficiency level of the learner and adjust the complexity of the communication accordingly. The system was tested on 10 subjects. Experimental results show a 14% increase in the number of proper replies, a 3% decrease in grammatical errors, a 16% decrease in pronunciation duration, and an 11% increase in learners' proficiency level within three trials.
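The abstract describes a proficiency model and an adaptation mechanism that adjusts communication complexity to the learner. The following is a minimal sketch of how such a loop could work, assuming the three measures reported above (proper replies, grammatical errors, pronunciation duration) feed a sliding-window score; the class name, weights, and thresholds are illustrative assumptions, not the authors' actual model.

```python
from collections import deque

class ProficiencyTracker:
    """Hypothetical sketch: keep a sliding window of interaction
    scores and map their average to a discrete communication-
    complexity level (1..levels). Weights are illustrative."""

    def __init__(self, window=5, levels=5):
        self.scores = deque(maxlen=window)
        self.levels = levels

    def record(self, proper_reply, grammar_errors, pronunciation_secs):
        # Combine the three measures into a single 0..1 score.
        score = 0.5 * (1.0 if proper_reply else 0.0)
        score += 0.3 * max(0.0, 1.0 - 0.1 * grammar_errors)
        score += 0.2 * max(0.0, 1.0 - pronunciation_secs / 10.0)
        self.scores.append(min(1.0, score))

    def level(self):
        # Map the windowed average score onto 1..levels.
        if not self.scores:
            return 1
        avg = sum(self.scores) / len(self.scores)
        return max(1, min(self.levels,
                          1 + int(avg * (self.levels - 1) + 0.5)))
```

A pedagogical agent would call `record` after each exchange and query `level` to pick the next utterance's complexity, so sustained good performance raises the level while repeated errors lower it.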
