When applying deep learning to a task, preparing sufficient training data is often a major challenge. In some image-based fields such as image recognition, however, a large amount of training data can be generated easily with computer graphics (CG), even when the target images are photographs. Because target photographs can be translated into CG-style images using generative adversarial networks (GANs), a model can be trained on CG images and still accept these photographs as input at prediction time. However, the fake CG images produced by a GAN are not always well suited to the target model, which degrades accuracy. This study therefore proposes a method based on cycle-consistent adversarial networks (CycleGAN) that translates photographs into fake CG images better suited to the target model, by incorporating the target model itself as a discriminator. Experimental results show that, with the proposed method, a model trained on CG images can achieve accuracy close to that achieved with photographs (real images).
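The core idea of incorporating the target model as an additional discriminator can be sketched as an extra term in the generator's objective. The following is a minimal, hypothetical NumPy illustration (not the paper's implementation): all networks are stand-in linear maps, and the total generator loss combines a standard adversarial term, a simplified cycle-consistency term, and a task-classification term from a frozen target model on the translated images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: tiny linear "networks" on flattened 4-pixel images.
W_g = rng.normal(size=(4, 4)) * 0.1   # generator: photo -> fake CG
W_d = rng.normal(size=(4, 1)) * 0.1   # discriminator: real CG vs. fake CG
W_t = rng.normal(size=(4, 3)) * 0.1   # frozen target model (3 classes)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

photos = rng.normal(size=(8, 4))       # batch of photos
labels = rng.integers(0, 3, size=8)    # their task labels

fake_cg = photos @ W_g                 # translated (fake CG) images

# Standard adversarial term: the generator wants D(fake) -> 1.
adv_loss = -np.mean(np.log(sigmoid(fake_cg @ W_d) + 1e-8))

# Cycle-consistency term, simplified here to one reconstruction direction.
cycle_loss = np.mean(np.abs(photos - fake_cg @ np.linalg.pinv(W_g)))

# Extra term motivated by the proposed method: the frozen target model
# acts as a discriminator, so translated images must remain classifiable.
probs = softmax(fake_cg @ W_t)
task_loss = -np.mean(np.log(probs[np.arange(8), labels] + 1e-8))

# Total generator objective (weights are illustrative, not from the paper).
total_loss = adv_loss + 10.0 * cycle_loss + task_loss
print(float(total_loss))
```

Optimizing this combined objective would push the generator toward translations that both look like CG images and preserve the features the target model relies on.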