Semantic image segmentation is crucial to many applications, such as autonomous driving, robot vision, and scene understanding. However, the borders of segmented regions tend to be rough, and the labeling process is tedious and labor-intensive. This study is therefore the first to propose a deep generative adversarial network (GAN) with double-layered upsampling based on max-pooling indexed deconvolution. The proposed upsampling method replaces bilinear interpolation: it fuses deep deconvolution with the indices of the max-value locations recorded during pooling, so that features are restored to the positions from which they were pooled. Combined with the deep GAN, this upsampling improves the extraction of low-resolution features and compensates for the loss of spatial resolution. To further reduce the network's dependence on labeled datasets, a weakly supervised feedback method is proposed, in which unlabeled data improve the generalization ability of the model. To generalize to unseen image domains, we introduce transfer learning built on the deep GAN and the weakly supervised method, so that a segmentation model trained on the source domain achieves good segmentation in the target domain. Extensive experiments across several domains demonstrate the advantages of the proposed method in the generalization ability of semantic segmentation, while significantly decreasing the dependence on labeled data and maintaining network accuracy.
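To make the indexed-upsampling idea concrete, the following is a minimal PyTorch sketch, not the paper's exact double-layered design: the encoder's max-pooling records the argmax indices, and the decoder uses them to restore feature locations (via max-unpooling) before a learned transposed convolution refines the result, in place of bilinear interpolation. The block name, channel counts, and kernel sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IndexedUpsampleBlock(nn.Module):
    """Illustrative decoder block: max-unpooling with stored pooling indices,
    followed by a deconvolution (transposed convolution) that fills in details."""

    def __init__(self, channels):
        super().__init__()
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
        self.deconv = nn.ConvTranspose2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, indices, output_size):
        # Place each feature back at the location of the max weight saved during pooling.
        x = self.unpool(x, indices, output_size=output_size)
        # Learnable deconvolution refines the sparse, upsampled feature map.
        return self.deconv(x)


# Encoder side: pooling that also returns the argmax indices.
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
features = torch.randn(1, 64, 32, 32)
pooled, idx = pool(features)

# Decoder side: the saved indices guide upsampling instead of bilinear interpolation.
up = IndexedUpsampleBlock(channels=64)
restored = up(pooled, idx, output_size=features.size())
print(restored.shape)  # torch.Size([1, 64, 32, 32])
```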