Current approaches to RGB-Infrared Person Re-Identification (RGB-IR ReID) often employ two parameter-specific subnetworks to extract modality-specific features from RGB and IR images, followed by a parameter-shared subnetwork that learns modality-common features to match individuals across modalities. However, these methods overlook the complementary relationship between modality-specific and modality-common features, so the exploration of discriminative person-related modality-common features may be incomplete, leading to sub-optimal results. To address this issue, we propose a novel multi-level modality-specific and modality-common feature fusion network (3MFFNet). Specifically, within 3MFFNet we design a multi-granularity feature fusion module (MGFFM) to exploit the complementary person-related information between modality-specific and modality-common features. Meanwhile, we combine a multi-granularity feature extraction strategy with an attention mechanism to select discriminative person-related modality-specific features and facilitate the fusion process. This design enables the network to learn a more robust and discriminative person representation. Extensive experiments on two public datasets demonstrate that our approach outperforms existing state-of-the-art methods.
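The general pattern described above (attention weights selecting discriminative modality-specific features, which are then fused with modality-common features) can be illustrated with a minimal toy sketch. This is only a schematic in plain Python; the function name `attention_fuse` and the channel-wise scoring rule are hypothetical illustrations, not the actual MGFFM, whose internals are not specified in the abstract.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(specific, common):
    """Toy attention-weighted fusion (hypothetical sketch).

    Scores each channel of the modality-specific feature by its
    agreement with the modality-common feature, turns the scores
    into attention weights, re-weights the specific features, and
    concatenates them with the common features.
    """
    # channel-wise agreement scores between the two feature vectors
    scores = [s * c for s, c in zip(specific, common)]
    weights = softmax(scores)
    # attention-selected modality-specific features
    selected = [w * s for w, s in zip(weights, specific)]
    # fused representation: selected specific + common features
    return selected + common

# Example: 3-dim specific and common feature vectors -> 6-dim fusion
fused = attention_fuse([0.2, 1.5, -0.3], [0.1, 1.2, 0.0])
print(len(fused))  # 6
```

In a real network the attention weights would be produced by learned layers and the features would be multi-granularity tensors; the sketch only shows how attention-based selection and concatenation-style fusion compose.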