Abstract

Online video-based learning has been increasingly adopted in educational settings. However, students often lack the cognitive capacity and metacognitive skills to diagnose and record their own attention status during learning tasks. This study therefore presents an attention-based video lecture review mechanism (AVLRM) that generates video segments for review based on students' sustained attention status, as determined using brainwave signal detection technology. A quasi-experimental nonequivalent control group design was used to divide 55 participants from two classes of an elementary school in New Taipei City, Taiwan, into two groups. One class was randomly assigned to the experimental group and used video lectures with AVLRM support for learning; the other class was assigned to the control group and used video lectures with autonomous review for learning. Analytical results indicate that students in the experimental group exhibited significantly better review effectiveness than did the control group, and this difference was especially marked for students who had a low attention level, were field-dependent, or were female. The findings show that the AVLRM, based on brainwave signal detection technology, can precisely identify video segments that are more useful for effective review than those selected by the students themselves. This study contributes to the design of learning tools that support independent learning and effective review in online or video-based learning environments.
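To make the core idea of the mechanism concrete, the following is a minimal illustrative sketch, not the authors' AVLRM implementation, of how sustained low-attention intervals might be mapped to video segments for review. It assumes the attention signal is sampled once per second as a value in the 0-100 range (as reported by typical consumer EEG headsets); the function name, threshold, and duration parameters are all hypothetical choices for illustration.

```python
# Illustrative sketch only -- not the paper's AVLRM implementation.
# Assumes one attention reading per second of video, each in [0, 100].

from typing import List, Tuple

def low_attention_segments(
    attention: List[float],      # one reading per second of video
    threshold: float = 40.0,     # below this counts as low attention (assumed)
    min_duration: int = 10,      # ignore dips shorter than this many seconds
    merge_gap: int = 5,          # merge segments separated by short gaps
) -> List[Tuple[int, int]]:
    """Return (start_sec, end_sec) video segments flagged for review."""
    segments: List[Tuple[int, int]] = []
    start = None
    for t, value in enumerate(attention):
        if value < threshold:
            if start is None:
                start = t                      # a low-attention run begins
        elif start is not None:
            if t - start >= min_duration:      # keep only sustained dips
                segments.append((start, t))
            start = None
    if start is not None and len(attention) - start >= min_duration:
        segments.append((start, len(attention)))

    # Merge segments separated by gaps shorter than merge_gap seconds.
    merged: List[Tuple[int, int]] = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] <= merge_gap:
            merged[-1] = (merged[-1][0], seg[1])
        else:
            merged.append(seg)
    return merged

# Example: a 60-second lecture with an attention dip around seconds 20-40.
readings = [70.0] * 20 + [25.0] * 20 + [65.0] * 20
print(low_attention_segments(readings))        # -> [(20, 40)]
```

The minimum-duration and gap-merging parameters reflect the general idea of "sustained" attention described in the abstract; the actual segmentation rules used by the AVLRM are not specified here.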
