Glaucoma is a chronic, irreversible eye disease that causes loss of vision. Accurate segmentation of the optic disc and optic cup is a basic step in glaucoma screening. Most existing deep convolutional neural network (DCNN) methods extract insufficient feature information, which makes them susceptible to pathological regions and low-quality images and leaves them with a poor ability to restore context information; as a result, their segmentation accuracy is low. In this paper, we propose GL-Net, a multi-label DCNN model that incorporates a generative adversarial network. GL-Net consists of two network structures: a Generator and a Discriminator. In the Generator, we use skip connections to promote the fusion of low-level and high-level feature information, which alleviates the difficulty of restoring detailed feature information during upsampling, and we reduce the downsampling factor, effectively alleviating excessive loss of feature information. In the loss function, we add an $L_{1}$ distance term and a cross-entropy term to prevent mode collapse during training, which makes the segmentation results more accurate. We use transfer learning and data augmentation to alleviate the problems of insufficient data and model over-fitting during training. Finally, GL-Net was evaluated on the DRISHTI-GS1 dataset. The experimental results show that GL-Net outperforms several state-of-the-art methods, such as M-Net, Stack-U-Net, RACE-net, and BCRF, in terms of $F1$ score and boundary distance localization error (BLE). In particular, for optic cup segmentation, GL-Net outperforms RACE-net by 3.5% in $F1$ and 4.16 pixels in BLE.
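The composite Generator loss mentioned above (an adversarial term combined with cross-entropy and an $L_{1}$ distance penalty) can be sketched as follows. This is a minimal illustration only: the weighting coefficients `LAMBDA_CE` and `LAMBDA_L1`, the function names, and the form of the adversarial term are assumptions for exposition, not values taken from the paper.

```python
import math

# Hypothetical weighting coefficients (assumed; not specified in the abstract).
LAMBDA_CE = 1.0
LAMBDA_L1 = 1.0

def cross_entropy(probs, labels, eps=1e-12):
    """Mean pixel-wise binary cross-entropy over flattened predictions."""
    return -sum(
        y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
        for p, y in zip(probs, labels)
    ) / len(probs)

def l1_distance(probs, labels):
    """Mean absolute difference between prediction and ground truth."""
    return sum(abs(p - y) for p, y in zip(probs, labels)) / len(probs)

def generator_loss(adv_term, probs, labels):
    """Adversarial term plus weighted cross-entropy and L1 penalties.

    The cross-entropy and L1 terms anchor the generator to the ground-truth
    segmentation masks, which is what discourages mode collapse.
    """
    return (adv_term
            + LAMBDA_CE * cross_entropy(probs, labels)
            + LAMBDA_L1 * l1_distance(probs, labels))
```

In practice such a loss would operate on tensors from a deep-learning framework; plain lists are used here only to keep the sketch self-contained.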