Abstract

Face recognition and memory entail not only encoding the perceptual input of a face upon its presentation but also retrieving a relatively permanent representation despite variation in illumination, pose, and/or expression. For more than two decades, a network of face-selective regions has been identified as the core system of face processing, including the occipital face area (OFA), the fusiform face area (FFA), and the posterior superior temporal sulcus (pSTS). However, recent studies have proposed that the ventral route of face processing and memory terminates in the ventral anterior temporal lobes (vATLs), which may play an important role in bridging face perception and face memory. Here we examined whether neural activity in the vATLs can effectively predict performance on a face memory test that requires recognition despite variations in pose and lighting. To that end, we first localized the core face network during functional scans by asking participants to perform a one-back task while viewing either static images or dynamic videos. Compared with static localizers, dynamic localizers were far more effective at identifying regions of interest (ROIs) in the core face-processing system. We then determined, for each ROI (OFA, FFA, pSTS, and vATL), the cluster size associated with maximum face selectivity. Participants were called back after various delays to perform a variety of face-processing tasks, including the Taiwanese Face Memory Test (TFMT), which was constructed largely following the Cambridge Face Memory Test (CFMT) and used images drawn from a recently established Taiwanese face database. Like the CFMT, the TFMT was administered in three consecutive stages with increasing reliance on robust face representations. Correlation analyses revealed that participants with greater neural adaptation in the right vATL showed better recognition and memory performance on the TFMT, suggesting that individual differences in constructing invariant and robust neural representations of faces can predict behavioural performance on face recognition and memory.

Meeting abstract presented at VSS 2016.
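For readers who want a concrete picture of the brain-behaviour analysis described above, the following Python sketch shows how such a correlation across participants might be computed. It is a minimal illustration only: the variable names and all numbers are hypothetical placeholders, not data from the study, and it assumes that one neural-adaptation index (for the right vATL ROI) and one TFMT accuracy score per participant have already been extracted.

    # Minimal sketch of a brain-behaviour correlation across participants.
    # All values below are hypothetical placeholders, not data from the study.
    import numpy as np
    from scipy import stats

    # One adaptation index (right vATL) and one TFMT accuracy score per participant.
    vatl_adaptation = np.array([0.12, 0.35, 0.28, 0.41, 0.19, 0.33, 0.25, 0.38])
    tfmt_accuracy = np.array([0.58, 0.81, 0.72, 0.86, 0.63, 0.77, 0.69, 0.83])

    # Pearson correlation: a positive r would indicate that greater adaptation
    # goes with better TFMT performance across participants.
    r, p = stats.pearsonr(vatl_adaptation, tfmt_accuracy)
    print(f"r = {r:.2f}, p = {p:.3f}")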
