Abstract

Purpose: Deep-learning-based image segmentation requires a sufficient amount of training data, but training images and segmentation masks are harder to obtain for medical images than for general images. In deep-learning-based colon polyp detection and segmentation, recent research has therefore improved performance by generating polyp images with a generative model and adding them to the training data.

Methods: We propose SemanticPolypGAN for generating colonoscopic polyp images. The proposed model generates images using only polyp images and their corresponding masks, without requiring additional input conditions. It also enables semantic generation of the shape and texture of polyp and non-polyp regions. We experimentally compare the performance of various polyp-segmentation models after adding the generated images and masks to the training data.

Results: The experimental results show overall performance improvements for all models and over previous work.

Conclusion: This study demonstrates that using polyp images generated by SemanticPolypGAN as additional training data can improve polyp-segmentation performance. Unlike existing methods, SemanticPolypGAN can control polyp and non-polyp regions independently during generation.
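The augmentation strategy described above (adding generated image/mask pairs to the real training data) can be sketched as follows. This is an illustrative sketch only, not code from the paper; the function name `build_training_set` and the `synthetic_ratio` cap are hypothetical choices for the example.

```python
def build_training_set(real_pairs, synthetic_pairs, synthetic_ratio=0.5):
    """Combine real (image, mask) pairs with GAN-generated pairs.

    At most `synthetic_ratio * len(real_pairs)` generated pairs are
    appended, so synthetic data augments rather than dominates the set.
    """
    limit = int(len(real_pairs) * synthetic_ratio)
    return list(real_pairs) + list(synthetic_pairs)[:limit]


# Hypothetical file names standing in for real and generated samples.
real = [(f"img{i:03d}.png", f"mask{i:03d}.png") for i in range(10)]
fake = [(f"gen{i:03d}.png", f"gen_mask{i:03d}.png") for i in range(8)]

train = build_training_set(real, fake)  # 10 real + 5 synthetic pairs
```

In practice the combined list would feed a segmentation model's data loader; the ratio of synthetic to real data is a tuning knob rather than a fixed rule.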
