Image fusion is crucial in computer vision, offering advantages across a wide range of applications. However, existing fusion methods often struggle to balance the preservation of specific features, whose selection is susceptible to subjective bias, against the distortion of unique information, which limits overall fusion performance. To address these challenges, we propose a cross-reconstruction uniqueness fusion network (CUFNet) for fusing visible and infrared images. First, we develop a cross-reconstruction uniqueness module that objectively measures feature uniqueness and improves the accurate retention of distinctive features. Second, we design a feature compensation module that strengthens the interaction among shallow, medium, and deep features, yielding a fused image with richer texture details. Finally, a multilayer feature fusion module optimizes the retention of specific features across different layers. Experiments on two publicly available datasets, TNO and RoadScene, demonstrate that CUFNet outperforms twelve comparative methods from both quantitative and qualitative perspectives.
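Since the abstract only names the modules, the following is a minimal illustrative sketch in PyTorch of the cross-reconstruction uniqueness idea, under our own assumptions: the module, layer shapes, and weighting scheme below (e.g., `CrossReconstructionUniqueness`, the two convolutional branches, the softmax weights) are hypothetical and stand in for one plausible way to score feature uniqueness via cross-reconstruction error, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class CrossReconstructionUniqueness(nn.Module):
    """Hypothetical sketch: estimate how unique each modality's features are
    by trying to reconstruct them from the other modality. Features that the
    other modality cannot explain (high reconstruction error) are treated as
    unique and weighted more heavily in the fused output."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Cross-reconstruction branches: infrared -> visible and visible -> infrared.
        self.ir_to_vis = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.vis_to_ir = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor):
        # Reconstruct each modality's features from the other modality.
        vis_hat = self.ir_to_vis(feat_ir)
        ir_hat = self.vis_to_ir(feat_vis)
        # Per-pixel reconstruction error serves as an objective uniqueness
        # score, avoiding hand-tuned (subjective) feature-selection rules.
        uniq_vis = (feat_vis - vis_hat).abs().mean(dim=1, keepdim=True)
        uniq_ir = (feat_ir - ir_hat).abs().mean(dim=1, keepdim=True)
        # Normalize the two scores into soft fusion weights.
        weights = torch.softmax(torch.cat([uniq_ir, uniq_vis], dim=1), dim=1)
        w_ir, w_vis = weights[:, :1], weights[:, 1:]
        fused = w_ir * feat_ir + w_vis * feat_vis
        return fused, uniq_ir, uniq_vis


# Usage on dummy infrared/visible feature maps:
if __name__ == "__main__":
    module = CrossReconstructionUniqueness(channels=64)
    feat_ir = torch.randn(1, 64, 128, 128)
    feat_vis = torch.randn(1, 64, 128, 128)
    fused, uniq_ir, uniq_vis = module(feat_ir, feat_vis)
    print(fused.shape, uniq_ir.shape)  # torch.Size([1, 64, 128, 128]) torch.Size([1, 1, 128, 128])
```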