Abstract

Despite efforts to define best practices for assessing Digital Game-Based Learning (DGBL) effectiveness, scientific rigor in the field has yet to be established. This has led academics and educational practitioners to doubt the quality of evidence and the practical value of scientific research in educational settings. Hence, the present manuscript tests the feasibility of previously defined best practices by means of three feasibility studies: one in formal, one in health, and one in corporate education. Results first show a more nuanced view of previously defined best practices regarding control groups: a) inclusion of an educational activity is not always desirable and depends on whether absolute or relative effectiveness is assessed, and b) keeping instructional time equal in the experimental and control groups does not align with the time-efficiency outcome of DGBL. Second, several non-intervention-related elements jeopardizing internal validity were identified: a) failed randomization, which can be tackled with blocked randomization, and b) pre-test effects, which can be tackled with carefully piloted parallel versions of tests. Last, additional indicators for motivational and efficiency outcomes in a self-paced distance-learning context were identified: a) motivation during/after the training should be expanded with motivation to start playing, and b) time required to finish the training should be expanded with time required to follow up on learners. The recommendations in the present manuscript are neither exhaustive nor generalizable to all contexts, but they do provide preliminary insights into feasible experimental designs for DGBL effectiveness studies.
