The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown: they could be purely semantic, or they could also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants: 15 female, 10 male). We then compared these latent representations to models of semantic, sensory, and orthographic features. Results show that modality-independent representations correlated with both semantic and visual representations. There was no evidence that these correlations were driven by picture-specific visual features or by orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.

Significance Statement

This study sheds light on how the human brain stores semantic knowledge across different stimulus modalities (pictures and written words). We developed a method that allowed us to investigate the content of conceptual representations in the brain independently of the stimulus modality perceived by participants. Results showed that modality-independent representations contain both semantic and visual features. We found no evidence that these results were due to picture-specific visual features or to orthographic features activated by the stimuli presented in the experiment.
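The core logic of the approach (train a classifier on one modality, test it on the other, then compare the classifier's latent space to candidate feature models) can be illustrated with a minimal sketch. This is not the study's actual pipeline: the array shapes, the single-hidden-layer MLP, the synthetic data, and the RSA-style model comparison are all assumptions chosen for brevity.

```python
# Minimal sketch of cross-condition decoding plus a model comparison.
# All data here are synthetic placeholders, not real MEG recordings.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_concepts, n_trials, n_features = 8, 40, 120  # hypothetical sizes

# Stand-ins for MEG sensor patterns: one array per stimulus modality.
labels = np.repeat(np.arange(n_concepts), n_trials)
picture_meg = rng.normal(size=(n_concepts * n_trials, n_features))
word_meg = rng.normal(size=(n_concepts * n_trials, n_features))

# Cross-condition decoding: train on pictures, test on words.
# Above-chance accuracy indicates representations shared across modalities.
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
clf.fit(picture_meg, labels)
print("word accuracy (picture-trained):", clf.score(word_meg, labels))

# Latent modality-independent representation: hidden-layer activations,
# averaged over the word trials of each concept.
def hidden_activations(clf, X):
    h = X @ clf.coefs_[0] + clf.intercepts_[0]
    return np.maximum(h, 0)  # ReLU, the MLPClassifier default

latent = np.stack([hidden_activations(clf, word_meg[labels == c]).mean(0)
                   for c in range(n_concepts)])

# RSA-style comparison: correlate the latent dissimilarity structure with
# hypothetical semantic and visual model RDMs (placeholder features).
semantic_model = rng.normal(size=(n_concepts, 30))  # e.g. word embeddings
visual_model = rng.normal(size=(n_concepts, 30))    # e.g. image features
latent_rdm = pdist(latent, metric="correlation")
for name, model in [("semantic", semantic_model), ("visual", visual_model)]:
    rho, _ = spearmanr(latent_rdm, pdist(model, metric="correlation"))
    print(f"{name} model vs. latent RDM: rho = {rho:.3f}")
```

In this sketch, a reliably positive correlation between the latent-space dissimilarities and a given model would correspond to the paper's finding that modality-independent representations reflect that model's features.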