Abstract

Our visual environment impacts multiple aspects of cognition including perception, attention and memory, yet most studies traditionally remove or control the external environment. As a result, we have a limited understanding of neurocognitive processes beyond the controlled lab environment. Here, we aim to study neural processes in real-world environments, while also maintaining a degree of control over perception. To achieve this, we combined mobile EEG (mEEG) and augmented reality (AR), which allows us to place virtual objects into the real world. We validated this AR and mEEG approach using a well-characterised cognitive response—the face inversion effect. Participants viewed upright and inverted faces in three EEG tasks: (1) a lab-based computer task, (2) walking through an indoor environment while seeing face photographs, and (3) walking through an indoor environment while seeing virtual faces. We find greater low frequency EEG activity for inverted compared to upright faces in all experimental tasks, demonstrating that cognitively relevant signals can be extracted from mEEG and AR paradigms. This was established both in an epoch-based analysis aligned to face events, and in a GLM-based approach that incorporates continuous EEG signals and face perception states. Together, this research helps pave the way to exploring neurocognitive processes in real-world environments while maintaining experimental control using AR.

Highlights

  • Our visual environment impacts multiple aspects of cognition including perception, attention and memory, yet most studies traditionally remove or control the external environment

  • To test for changes in power for upright and inverted faces, linear mixed effects modelling was used, showing that EEG low frequency power over posterior electrodes was significantly greater for inverted faces compared to upright faces (mean difference = 0.404, t(1427) = 8.27, p < 0.0001; Fig. 3A)

  • Plotting EEG power across frequencies further indicated that these increases in low frequency power peaked near 10 Hz, and differences between inverted and upright faces were principally between 5 and 12 Hz (Fig. 3B)
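The mixed-effects comparison described in the highlights can be illustrated with simulated data. The sketch below is not the authors' pipeline; it assumes `statsmodels` and invents a per-subject power dataset with a fixed effect of face orientation and a random intercept per subject, mirroring the structure of the reported test (power ~ condition, grouped by subject):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 36
rows = []
for s in range(n_subj):
    subj_offset = rng.normal(0, 0.3)          # random intercept per subject
    for cond, shift in [("upright", 0.0), ("inverted", 0.4)]:
        for _ in range(n_trials):
            rows.append({"subject": s,
                         "condition": cond,
                         "power": 1.0 + subj_offset + shift + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Linear mixed model: fixed effect of condition, random intercept for subject
model = smf.mixedlm("power ~ condition", df, groups=df["subject"]).fit()
print(model.summary())
```

With "inverted" as the reference level (alphabetical order), a negative `condition[T.upright]` coefficient reproduces the direction of the reported effect: lower low-frequency power for upright than inverted faces.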


Introduction

Our visual environment impacts multiple aspects of cognition including perception, attention and memory, yet most studies traditionally remove or control the external environment. We present and validate an approach to studying human cognition in naturalistic, real-world environments while, importantly, retaining the ability to manipulate key variables and maintain experimental control. We achieve this by combining mobile EEG (mEEG) with head-mounted cameras and augmented reality (AR). We used a GLM approach similar in nature to that used in naturalistic fMRI studies of movie watching (e.g.,19,20) and MEG studies of language comprehension (e.g.,21), again testing the sensitivity of the approach against face inversion effects. Together, these twin analyses demonstrate an approach that manipulates and controls variables in conjunction with mobile neural recordings to reveal cognitive effects in dynamically changing settings.

