Abstract

Reconstructing human visual perception is a challenging topic in the field of brain decoding. Due to the complexity of visual stimuli and the constraints of fMRI data collection, existing decoding methods can reconstruct only the basic outline, or similar figures/features, of the perceived natural stimuli. To achieve high-quality, high-resolution reconstruction of natural images from brain activity, this paper presents an end-to-end perceptual reconstruction model called the similarity-conditions generative adversarial network (SC-GAN), in which visually perceptible images are reconstructed from human visual cortex responses. SC-GAN extracts the high-level semantic features of natural images and the corresponding visual cortical responses, and then introduces these semantic features as conditions of a generative adversarial network (GAN) to realize perceptual reconstruction of the visual images. The experimental results show that the semantic features extracted by SC-GAN play a key role in the reconstruction of natural images: the similarity between the presented and reconstructed images obtained by SC-GAN is significantly higher than that obtained by a condition generative adversarial network (C-GAN). The proposed model offers a potential perspective for decoding the brain activity evoked by complex natural stimuli.
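The abstract does not give implementation details, but the conditioning mechanism it describes, feeding semantic features into a generator alongside noise, can be sketched as follows. All names, dimensions, and the single-layer generator here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, c, W, b):
    """Toy conditional-generator forward pass: the semantic condition c is
    concatenated with the noise vector z before a single (illustrative)
    linear layer; tanh maps the output into the image range [-1, 1]."""
    x = np.concatenate([z, c])   # conditioning by concatenation
    return np.tanh(W @ x + b)    # flattened "image" with values in [-1, 1]

# Hypothetical sizes: 64-d noise, 128-d semantic feature, 16x16 grayscale image.
z_dim, c_dim, img_dim = 64, 128, 16 * 16
W = rng.standard_normal((img_dim, z_dim + c_dim)) * 0.01
b = np.zeros(img_dim)

z = rng.standard_normal(z_dim)   # random noise input
c = rng.standard_normal(c_dim)   # stands in for semantic features decoded from fMRI
img = generator(z, c, W, b).reshape(16, 16)
```

In a real conditional GAN the generator is a deep deconvolutional network and the condition is injected at one or more layers during adversarial training; the point of the sketch is only that the semantic vector enters the generator as an additional input, which is what distinguishes a conditional GAN from an unconditional one.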
