Surgical tool segmentation is a challenging and crucial task for computer- and robot-assisted surgery. Supervised learning approaches have shown great success on this task, but they require large amounts of paired training data. Based on Generative Adversarial Networks (GANs), unpaired image-to-image translation (I2I) techniques (such as CycleGAN and DualGAN) have been proposed to remove the requirement of paired data and have been employed for surgical tool segmentation. Unpaired I2I methods avoid annotating images for domain changes. Rather than applying them directly to the segmentation task, we propose new GAN-based methods for unpaired I2I that embed a constraint specific to segmentation, namely that each pixel of the input image belongs to either the background or a surgical tool. Our methods simplify the architectures of existing unpaired I2I approaches by reducing the number of generators and discriminators. Compared with DualGAN, the proposed strategies train faster without reducing segmentation accuracy. Moreover, we show that, by using textured tool images as annotated samples to train the discriminators, unpaired I2I methods (including ours) can achieve simultaneous tool image segmentation and repair (such as reflection removal and tool inpainting). The proposed strategies are validated on image segmentation of a flexible tool and on in vivo images from the EndoVis dataset.
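
The per-pixel background-or-tool constraint can be made concrete as a compositing step: a generator predicts a (soft) tool mask, and the translated image is forced to be a pixel-wise mixture of a tool appearance and the background. The following NumPy sketch illustrates this idea only; the function name, array shapes, and values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def composite(mask, tool_texture, background):
    """Compose an image under the binary-membership constraint:
    each pixel is a convex combination of the tool texture and the
    background, weighted by a (soft) tool-membership mask in [0, 1]."""
    mask = np.clip(mask, 0.0, 1.0)[..., None]  # broadcast over RGB channels
    return mask * tool_texture + (1.0 - mask) * background

# Toy 2x2 RGB example (illustrative values).
h, w = 2, 2
tool = np.ones((h, w, 3))        # white "tool" texture
bg = np.zeros((h, w, 3))         # black background
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])    # predicted tool membership per pixel
img = composite(mask, tool, bg)
# Pixels with mask = 1 take the tool appearance; mask = 0 keeps the background.
```

Under this formulation, the mask itself is the segmentation output, so translating an image and segmenting it become a single task, which is what allows the architecture to use fewer generators and discriminators than CycleGAN or DualGAN.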