Abstract

Spatial cognition plays a crucial role in academic achievement, particularly in science, technology, engineering, and mathematics (STEM) domains. Immersive virtual reality (VR) environments show growing potential to reduce cognitive load and improve spatial reasoning. However, traditional methods struggle to assess the mental effort required for visuospatial processes because such processes are difficult to verbalize and self-reported evaluations have other limitations. In this neuroergonomics study, we aimed to capture the neural activity associated with cognitive workload during visuospatial tasks and to evaluate the impact of the visualization medium on visuospatial task performance. We used wearable functional near-infrared spectroscopy (fNIRS) neuroimaging to assess cognitive effort during spatial-reasoning-based problem solving, comparing task presentation in VR, on a computer screen, and with physical real-world objects. Our results reveal higher neural efficiency in the prefrontal cortex (PFC) during 3D geometry puzzles in the VR setting than in the physical and computer-screen settings. VR appears to reduce the visuospatial task load by facilitating spatial visualization and providing visual cues, making it a valuable tool for spatial cognition training, especially for beginners. Additionally, our multimodal approach allows task complexity to be increased progressively, maintaining a challenge throughout training. This study underscores the potential of VR for developing spatial skills and highlights the value of comparing brain data and human interaction across different training settings.
