Recently, unsupervised person re-identification (UPRID) has attracted increasing attention due to its scalability and flexibility, as it requires no labeled data. Popular UPRID methods fall into two categories: clustering-based and transfer-learning-based approaches. However, the former lacks prior knowledge while the latter often transfers useless knowledge, so both tend to achieve poor matching performance. Specifically, transfer-learning-based methods can exploit existing labeled data to boost training on unlabeled data in the target domain. Nevertheless, they generally yield weaker re-identification performance because the source domain contains many negative images that harm accuracy in the target domain, and removing such negative source data is a challenging problem. In this paper, we propose a Selective Transfer Cycle Generative Adversarial Network (STCGAN) that selects “valuable” source-domain knowledge to boost training efficiency in the unlabeled target domain. Our STCGAN is built on a CycleGAN augmented with a selector for source data, a part-based feature extractor for target data, and reconstructed source images that follow the target distribution. It can thus simultaneously learn “valuable” source images through the selector and transfer discriminative information from these selected source images to the target domain. We further introduce a joint optimization method and conduct extensive experiments on two widely used person re-identification datasets. The results show the superiority of the proposed STCGAN model over a range of state-of-the-art methods.
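To make the selection idea concrete, the sketch below shows one simple way a source-data selector could work: score each source sample by its cosine similarity to the target-domain feature centroid and keep only the closest fraction. This is a hypothetical illustration, not the paper's actual selector; all names (`select_valuable_sources`, `keep_ratio`) and the similarity criterion are assumptions for demonstration.

```python
# Illustrative sketch (NOT the authors' implementation): a selector that
# keeps "valuable" source samples, scored by cosine similarity between
# each source feature and the mean target-domain feature.
import numpy as np

def select_valuable_sources(source_feats, target_feats, keep_ratio=0.5):
    """Return indices of the source samples closest to the target domain."""
    # L2-normalise features so dot products equal cosine similarity.
    s = source_feats / np.linalg.norm(source_feats, axis=1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    centroid = t.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = s @ centroid                      # similarity to target domain
    k = max(1, int(len(scores) * keep_ratio))  # number of samples to keep
    return np.argsort(scores)[::-1][:k]        # indices of top-k sources

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.normal(loc=1.0, size=(50, 16))   # target-domain features
    near = rng.normal(loc=1.0, size=(10, 16))     # "valuable" sources
    far = rng.normal(loc=-1.0, size=(10, 16))     # "negative" sources
    source = np.vstack([near, far])
    kept = select_valuable_sources(source, target, keep_ratio=0.5)
    print(sorted(kept))  # mostly indices 0-9, i.e. the near cluster
```

In the full model, such a selector would feed only the retained source images into the CycleGAN-style translation and the downstream discriminative training, filtering out the negative source data that the abstract identifies as the main obstacle.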