Abstract

The encoding of visual scenes remains under-explored due to methodological limitations. In this study, we evaluated the relationship between memory accuracy for visual scenes and eye movements at encoding. First, we used two data-driven methods, a fixation density map (using iMap4) and a saliency map (using GBVS), to analyse visual attention to scene items. Second, and more novelly, we conducted scanpath analyses without a priori assumptions (using ScanMatch). Scene memory accuracy was assessed by asking participants to discriminate identical scenes (targets) from rearranged scenes sharing some items with targets (distractors) and from new scenes. Shorter fixation duration in regions of interest (ROIs) at encoding was associated with better rejection of distractors; relative fixation time in ROIs at encoding did not differ significantly between subsequent hits and misses at test. Hence, the density of eye fixations in data-driven ROIs appears to be a marker of subsequent memory discrimination and pattern separation. Interestingly, we also identified a negative correlation between the average MultiDimensional Scaling (MDS) distance between scanpaths and the correct rejection of distractors, indicating that scanpath consistency significantly affects the ability to discriminate distractors from targets. These data suggest that visual exploration at encoding contributes to discrimination processes at test.
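The scanpath-consistency index described above can be illustrated with a minimal sketch. ScanMatch itself aligns region-coded fixation sequences with a Needleman-Wunsch algorithm and the study summarises pairwise dissimilarities via MDS; the plain Levenshtein edit distance and the helper names (`edit_distance`, `mean_pairwise_distance`) below are simplifying stand-ins, not the authors' implementation.

```python
# Hypothetical sketch: scanpath consistency as mean pairwise dissimilarity.
# A Levenshtein edit distance replaces ScanMatch's Needleman-Wunsch
# alignment purely for illustration.

def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    needed to turn scanpath a into scanpath b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def mean_pairwise_distance(scanpaths: list[str]) -> float:
    """Average dissimilarity over all scanpath pairs: lower values
    indicate more consistent visual exploration across trials."""
    pairs = [(a, b) for i, a in enumerate(scanpaths)
             for b in scanpaths[i + 1:]]
    return sum(edit_distance(a, b) for a, b in pairs) / len(pairs)

# Scanpaths coded as sequences of region labels (one letter per fixated region).
paths = ["ABCD", "ABCE", "ABDD"]
print(round(mean_pairwise_distance(paths), 3))  # → 1.333
```

Under the study's finding, a lower value of this index (more consistent exploration at encoding) would be associated with better rejection of distractors at test.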
