Abstract

Previous brain decoding studies using functional magnetic resonance imaging (fMRI) have greatly advanced our understanding of human visual coding and non-invasive brain-machine interfaces. However, most of these studies focus on classifying a limited number of image categories or on reconstructing visual images with additional information, such as semantic categories or textual cues; constraint-free visual reconstruction remains scarce. Here, we propose a generative network based on the functional diversity of the human visual cortex (FDGen) that takes multivariate brain activity as input and directly reconstructs the natural images perceived by observers, without any additional cues (semantic categories or textual descriptions). FDGen is augmented with two bio-inspired computational modules. First, based on the functional specializations of the human visual cortex, we propose a function-based input module (FIM) that projects responses from different brain regions into separate feature spaces. Second, inspired by human attention, we construct a computational module that derives attentive feature weights at the function level to refine the feature map. These function-selection modules (FSMs) allow the network to dynamically select multiscale visual information during the generation process. We test FDGen on popular fMRI datasets of natural images and achieve highly robust performance. Our work represents an important step forward in the development of fMRI-based brain decoding algorithms and highlights the utility of neuroscience theories in the design of deep learning models.
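The two modules described above can be illustrated with a minimal sketch. This is not the authors' implementation: the ROI names, voxel counts, feature dimension, and the toy attention scoring below are all hypothetical, chosen only to show the idea of per-region projections (FIM) followed by function-level attentive weighting (FSM).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ROI voxel counts (illustrative only, not from the paper).
roi_sizes = {"V1": 300, "V2": 250, "V4": 200, "IT": 150}
feat_dim = 64

# Function-based input module (FIM): one linear projection per brain region,
# mapping each region's voxel responses into its own feature space.
fim_weights = {roi: rng.standard_normal((n, feat_dim)) * 0.01
               for roi, n in roi_sizes.items()}

def fim(activity):
    """Project each ROI's multivariate response into a separate feature vector."""
    return np.stack([activity[roi] @ fim_weights[roi] for roi in roi_sizes])

def fsm(features):
    """Function-selection module: softmax attention weights at the
    function (ROI) level, used to refine and fuse the feature map."""
    scores = features.mean(axis=1)              # one score per ROI (toy scoring)
    w = np.exp(scores - scores.max())
    w /= w.sum()                                # attention weights sum to 1
    return (w[:, None] * features).sum(axis=0)  # attention-weighted fusion

# Simulated brain activity for one stimulus.
activity = {roi: rng.standard_normal(n) for roi, n in roi_sizes.items()}
fused = fsm(fim(activity))
print(fused.shape)  # (64,)
```

In a full generative network, the fused vector would condition an image decoder; the point here is only the separation of feature spaces by region and the learned, input-dependent weighting across them.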
