Detecting changes in multisource heterogeneous images is a significant challenge for unsupervised change detection methods. Image-translation-based methods, which transform the two images into a common domain for comparison, have become a mainstream approach. However, most of them rely primarily on information from unchanged regions, so the resulting networks cannot fully capture the connection between the two heterogeneous representations. Moreover, the lack of a priori information and of sufficient training data makes training vulnerable to interference from changed pixels. In this paper, we propose an edge-oriented generative adversarial network (EO-GAN) for change detection that translates images indirectly through edge information, which serves as a core and stable link between heterogeneous representations. The EO-GAN is composed of an edge extraction network and a reconstructive network. During training, we ensure that the edges extracted from the heterogeneous images are as similar as possible, using supplementary data generated by superpixel segmentation. Experimental results on both heterogeneous and homogeneous datasets demonstrate the effectiveness of the proposed method.
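To make the two-network design concrete, the following is a minimal sketch of an edge-mediated translation step, assuming a PyTorch implementation; the module names, layer sizes, domain channel counts, and the L1 edge-consistency and reconstruction losses are illustrative assumptions, not the paper's exact architecture, and the adversarial term is omitted for brevity.

```python
import torch
import torch.nn as nn

class EdgeExtractor(nn.Module):
    """Maps an image from one domain to a shared single-channel edge map (illustrative)."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Reconstructor(nn.Module):
    """Reconstructs a domain image back from an edge map (illustrative)."""
    def __init__(self, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, e):
        return self.net(e)

# One hypothetical training step on a co-registered heterogeneous pair,
# e.g. optical (3 channels) and SAR (1 channel).
ext_x, ext_y = EdgeExtractor(3), EdgeExtractor(1)
rec_x, rec_y = Reconstructor(3), Reconstructor(1)
x, y = torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64)

e_x, e_y = ext_x(x), ext_y(y)
# Edge-consistency term: pull the two extracted edge maps together,
# treating edges as the stable link between the heterogeneous domains.
loss_edge = nn.functional.l1_loss(e_x, e_y)
# Reconstruction terms: each image must be recoverable from its edge map.
loss_rec = (nn.functional.l1_loss(rec_x(e_x), x)
            + nn.functional.l1_loss(rec_y(e_y), y))
loss = loss_rec + loss_edge  # GAN discriminator loss would be added here
loss.backward()
```

In this sketch, translating an image across domains amounts to composing one domain's edge extractor with the other domain's reconstructor (e.g. `rec_y(ext_x(x))`), so comparison happens in the target domain without ever learning a direct pixel-to-pixel mapping.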