Although research on person re-identification (ReID) has made significant progress in recent years, occluded person ReID remains a major challenge. In real-world scenes, persons are often occluded by various obstacles such as vehicles, umbrellas, and other persons. This leads to noisy interference and the loss of visual information, which result in poor ReID performance on occluded persons. To address this issue, we propose an end-to-end Mask-guided Discriminative Feature Network (MDFNet). First, MDFNet adopts a dual-branch architecture with a shared encoder as the feature extraction module for paired images. Each pair consists of one image from the training set and its corresponding occluded image, generated through an occlusion augmentation strategy. Second, MDFNet utilizes a Mask-guided Discriminative Feature Enhancement and Fusion (MDFEF) module to fuse and enhance global and local features for high-quality person representations. The MDFEF module effectively suppresses the interference caused by occlusion, enriches the representation capacity of person features, and enables the model to focus on the discriminative features in non-occluded regions. Furthermore, MDFNet introduces a sparse pairwise loss that enables the model to dynamically adapt to intra-class variations and reduces the negative impact of complex occlusions. Experimental results on four challenging person ReID datasets demonstrate the effectiveness of the proposed method.
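
The occlusion augmentation that produces each clean/occluded training pair can be sketched as pasting a random occluder patch onto a copy of the image. This is an illustrative assumption only: the abstract does not specify the patch source or placement policy, and the function names below (`occlusion_augment`, `make_training_pair`, `occluder_bank`) are hypothetical.

```python
import numpy as np

def occlusion_augment(image, occluder, top, left):
    """Paste an occluder patch onto a copy of the image at (top, left).

    Hypothetical sketch of the occlusion augmentation strategy; the
    paper's exact procedure is not given in the abstract.
    """
    out = image.copy()
    h, w = occluder.shape[:2]
    out[top:top + h, left:left + w] = occluder
    return out

def make_training_pair(image, occluder_bank, rng):
    """Build a (clean, occluded) image pair for the dual-branch encoder.

    occluder_bank: a list of occluder patches (assumed to be cropped
    obstacles such as vehicles or umbrellas); rng: a numpy Generator.
    """
    occluder = occluder_bank[rng.integers(len(occluder_bank))]
    H, W = image.shape[:2]
    h, w = occluder.shape[:2]
    top = int(rng.integers(0, H - h + 1))
    left = int(rng.integers(0, W - w + 1))
    return image, occlusion_augment(image, occluder, top, left)
```

Both images of a pair would then be fed through the shared encoder, one per branch, so that features of the occluded view can be supervised against the clean view.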