Abstract

This research proposes a novel educational model aimed at reducing the cost of manual question production and meeting the demand for a continual supply of new questions on MOOC platforms such as Moodle or Open edX. We combine machine-learning methods with natural language processing to increase the number and validity of assessment questions. To this end, we developed a system that generates multilingual questions automatically.
The system was evaluated along two dimensions: its ability to assess MOOC learners' competency and the similarity of the generated questions to those created by humans. The first evaluation is based on the subjective judgment of three MOOC creators, while the second is based on the responses of MOOC participants to both machine-generated and human-created questions. Both evaluations showed that the machine-generated questions performed on par with the human-created ones in assessing skills and in similarity. Moreover, the results demonstrate that most of the generated questions (up to 82 percent) enhance e-assessment when the proposed approach is used.
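To illustrate the kind of pipeline the abstract describes, below is a minimal sketch of automatic question generation from course text using a sequence-to-sequence NLP model through the Hugging Face transformers library. This is not the paper's system: the model (google/flan-t5-base, a general instruction-tuned model), the prompt, and the sample passage are all stand-ins chosen for illustration, assuming a generation-based approach.

```python
# Illustrative sketch only, not the paper's actual system.
# A general instruction-tuned seq2seq model is used as a stand-in for a
# model fine-tuned specifically for (multilingual) question generation.
from transformers import pipeline

# Text-to-text generation pipeline with a placeholder checkpoint.
qg = pipeline("text2text-generation", model="google/flan-t5-base")

# Example course passage (invented sample input).
passage = (
    "Photosynthesis converts light energy into chemical energy, "
    "producing glucose and oxygen from carbon dioxide and water."
)

# Ask the model for several candidate quiz questions about the passage.
prompt = "Generate a quiz question about the following text: " + passage
candidates = qg(prompt, num_beams=5, num_return_sequences=3)

for c in candidates:
    print(c["generated_text"])
```

In a workflow like the one evaluated in the paper, such candidate questions would then be filtered and reviewed, for example by MOOC creators, before being added to the course's question bank.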
