In the speaker recognition (SR) task, an emotion mismatch between enrolled speech and test speech creates an emotion domain gap that degrades recognition accuracy, and the degradation is more severe when training data are scarce. To help the SR model capture diverse emotion distributions when only neutral speech is enrolled, we propose Identity Retention and Emotion Converted StarGAN (IREC-StarGAN), which generates speech features in other emotions. An emotion- and speaker-relevant multi-task loss is designed to retain the original speaker information during adversarial training while emotional features are generated. To prevent essential speaker or emotion information from being contaminated by irrelevant regions, the generator uses an SC-bottleneck whose channels employ multiple kernel sizes to learn deep features of the data at various scales. The discriminator adopts a DenseNet with two branches: one classifies whether the input feature belongs to the target emotion domain, and the other focuses on the speaker information. The original neutral feature is enrolled together with the generated features and recognized by a ResNet18 for the speaker recognition task. The proposed framework obtains an EER of 18.33% on the Mandarin Affective Speech Corpus and 8.14% on IEMOCAP, outperforming existing embedding and GAN methods and demonstrating the robustness of IREC-StarGAN under low-resource emotion conditions.
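For illustration, the sketch below shows the idea behind the emotion- and speaker-relevant multi-task adversarial objective: a discriminator with a shared trunk and two heads, one scoring the target emotion domain and one classifying the speaker, so the generator is penalized if it alters speaker identity while converting emotion. This is a minimal PyTorch sketch under assumed shapes and layer sizes; the simple convolutional trunk, head dimensions, and loss weights are illustrative assumptions, not the paper's exact DenseNet discriminator or SC-bottleneck generator.

```python
# Minimal sketch (assumed shapes and layers, not the paper's exact DenseNet/SC-bottleneck design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchDiscriminator(nn.Module):
    """Shared trunk with two heads: emotion-domain logits and speaker-identity logits."""

    def __init__(self, n_emotions: int, n_speakers: int):
        super().__init__()
        # Simple conv trunk over (batch, 1, feat_dim, frames) spectrogram-like features.
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        # Branch 1: per-emotion-domain real/fake logits (StarGAN-style domain critic).
        self.emotion_head = nn.Linear(64, n_emotions)
        # Branch 2: speaker-identity logits, used to retain speaker information.
        self.speaker_head = nn.Linear(64, n_speakers)

    def forward(self, x):
        h = self.trunk(x).flatten(1)
        return self.emotion_head(h), self.speaker_head(h)


def generator_multitask_loss(disc, fake_feat, target_emotion, speaker_id,
                             lambda_emo=1.0, lambda_spk=1.0):
    """Adversarial emotion-domain loss plus speaker-retention loss on a generated feature."""
    emo_logits, spk_logits = disc(fake_feat)
    # Encourage the generated feature to be judged as belonging to the target emotion domain.
    adv_loss = F.binary_cross_entropy_with_logits(
        emo_logits.gather(1, target_emotion.unsqueeze(1)),
        torch.ones_like(target_emotion, dtype=torch.float).unsqueeze(1))
    # Encourage the generated feature to keep the original speaker identity.
    spk_loss = F.cross_entropy(spk_logits, speaker_id)
    return lambda_emo * adv_loss + lambda_spk * spk_loss


if __name__ == "__main__":
    disc = TwoBranchDiscriminator(n_emotions=5, n_speakers=20)
    fake = torch.randn(4, 1, 80, 100)    # a batch of 4 generated emotional feature maps
    tgt_emo = torch.randint(0, 5, (4,))  # target emotion-domain labels
    spk = torch.randint(0, 20, (4,))     # original speaker labels
    loss = generator_multitask_loss(disc, fake, tgt_emo, spk)
    loss.backward()
    print(float(loss))
```

In this sketch the speaker head acts as the identity-retention signal: gradients from the speaker classification term pull the generator toward emotion-converted features that still classify as the original speaker, which is the role the multi-task loss plays in the proposed framework.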