Abstract

Brain decoding has shown that viewed image categories can be estimated from evoked functional magnetic resonance imaging (fMRI) activity. Recent studies have attempted to estimate viewed image categories that were not included in the training data. However, estimation performance remains limited because it is difficult to collect a large amount of fMRI data for training. This paper presents a method to accurately estimate viewed image categories not used for training via a semi-supervised multi-view Bayesian generative model. Our model focuses on the relationship between fMRI activity and multiple modalities, i.e., visual features extracted from viewed images and semantic features obtained from viewed image categories. Furthermore, to accurately estimate image categories not used for training, our semi-supervised framework incorporates visual and semantic features obtained from additional image categories beyond those in the training data. The proposed model outperforms existing state-of-the-art models in the brain decoding field and achieves more than 95% identification accuracy. The results also show that incorporating additional image category information is remarkably effective when the number of training samples is small. Our semi-supervised framework is valuable for brain decoding settings in which brain activity patterns are scarce but visual stimuli are plentiful.
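The core idea can be illustrated with a toy sketch. The following is not the paper's Bayesian formulation; it is a minimal, hypothetical example of the multi-view generative assumption: a shared latent variable generates three views (fMRI activity, visual features, semantic features), extra "semi-supervised" samples observe only the visual and semantic views, and a held-out fMRI pattern is decoded by inferring the latent and matching the predicted semantic features against candidate categories. All dimensionalities, noise levels, and the least-squares latent inference are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensionalities for this toy example (not from the paper)
d_z, d_x, d_v, d_s = 4, 20, 10, 8   # latent, fMRI, visual, semantic
n_labeled, n_extra = 50, 30          # extra samples lack the fMRI view

# View-specific loading matrices: each view is a linear map of the latent
W_x = rng.normal(size=(d_x, d_z))
W_v = rng.normal(size=(d_v, d_z))
W_s = rng.normal(size=(d_s, d_z))

# Labeled data: all three views observed for each shared latent
Z = rng.normal(size=(n_labeled, d_z))
X = Z @ W_x.T + 0.1 * rng.normal(size=(n_labeled, d_x))
V = Z @ W_v.T + 0.1 * rng.normal(size=(n_labeled, d_v))
S = Z @ W_s.T + 0.1 * rng.normal(size=(n_labeled, d_s))

# Semi-supervised extras: visual and semantic views only (no fMRI),
# mimicking additional image categories without brain recordings
Z_extra = rng.normal(size=(n_extra, d_z))
V_extra = Z_extra @ W_v.T + 0.1 * rng.normal(size=(n_extra, d_v))
S_extra = Z_extra @ W_s.T + 0.1 * rng.normal(size=(n_extra, d_s))

# Identification sketch: decode a held-out fMRI pattern by inferring the
# latent (least-squares point estimate, a simplification of Bayesian
# posterior inference) and predicting its semantic features
z_true = rng.normal(size=d_z)
x_test = W_x @ z_true + 0.1 * rng.normal(size=d_x)
z_hat = np.linalg.pinv(W_x) @ x_test   # inferred latent
s_hat = W_s @ z_hat                    # predicted semantic features

# Candidate categories: the true one (index 0) plus distractors drawn
# from the extra categories; pick the most correlated candidate
s_true = W_s @ z_true + 0.1 * rng.normal(size=d_s)
candidates = np.vstack([s_true, S_extra[:4]])
corrs = [np.corrcoef(s_hat, c)[0, 1] for c in candidates]
best = int(np.argmax(corrs))
```

Under this linear-Gaussian assumption the decoder recovers the true category (`best == 0`); the paper's Bayesian treatment additionally places priors over the loading matrices and marginalizes the latent rather than using a point estimate.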
