The problem of Twin Noisy Labels, i.e., noisy annotations and noisy correspondences, poses a significant challenge to the practical deployment of Visible–Infrared Person Re-identification (VI-ReID). This paper proposes a novel Modality Blur and Batch Alignment (MBBA) framework to address this issue in practical VI-ReID applications. MBBA consists of three modules: Label Confidence Learning (LCL), Modality Blur Learning (MBL), and Batch Alignment Learning (BAL). LCL exploits the memorization effect of deep neural networks to estimate the confidence of identity labels and rectify both the noisy annotations and the noisy correspondences. MBL uses the noise-free modality labels together with a center loss to blur the boundaries among modality data, making the discrimination of the learned latent feature space robust. Based on a designed alignment loss, BAL employs a self-attention mechanism to align the salient prediction distributions of cross-modal sample pairs from a batch-wise perspective. On the RegDB dataset with 20% noise, MBBA improves the mean Average Precision (mAP) by 4.06% and 5.34% over state-of-the-art methods, and it achieves state-of-the-art performance on VI-ReID with Twin Noisy Labels overall. Our MBBA is available at https://github.com/SWU-CS-MediaLab/MBBA.