Because they pay insufficient attention to the essential characteristics of the scene, existing fusion methods suffer from scene distortion. In addition, the lack of ground truth can lead to an inadequate representation of vital information. To this end, we propose a novel infrared and visible image fusion network based on a three-dimensional feature fusion strategy (D3Fuse). Our method considers the scene semantic information in the source images and extracts the content common to the two images as a third-dimensional feature, extending the feature space for fusion tasks. Specifically, a commonality feature extraction module (CFEM) is designed to extract scene commonality features, which are then combined with modality features to construct the fused image. Moreover, to ensure the independence and diversity of the distinct features, we employ a contrastive learning strategy with multiscale PCA coding, which stretches the feature distances in an unsupervised manner and prompts the encoder to extract more discriminative information without incurring additional parameters or computational cost. Furthermore, a contrastive enhancement strategy is used to ensure an adequate representation of modality information. Qualitative and quantitative evaluations on three datasets show that the proposed method achieves better visual quality and higher objective metrics at a lower computational cost. Object detection experiments further show that our fusion results perform well on high-level semantic tasks.