Abstract

Despite the impressive progress of fully supervised crack segmentation, the tedious pixel-level annotation it requires limits its broader application. Weakly supervised crack segmentation with image-level labels has received increasing attention because such annotations are far easier to obtain. However, current methods are mainly based on class activation mapping (CAM) and cannot obtain accurate crack position information directly, which leads to complex training pipelines and poor segmentation performance. To make weakly supervised crack segmentation efficient, this paper proposes a novel end-to-end method termed RepairerGAN, which produces the crack segmentation result directly from image-level category information alone. RepairerGAN decouples the image-to-image translation model between the two image domains into a semantic translation module and a position extraction module, and uses an attention mechanism to extract the crack position information as the segmentation result. On the simple weakly supervised segmentation task based on the METU crack dataset, RepairerGAN needs only 13.3% of the training time of the best-performing ScoreCAM. On the complex task based on the Combined crack dataset, RepairerGAN (F1 of 72.63% and IoU of 61.37%) significantly outperforms the best-performing ScoreCAM (F1 of 44.43% and IoU of 33.32%) while still training in less time.
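The abstract describes decoupling an image-to-image translation generator into a semantic translation branch and an attention-based position extraction branch, with the attention map reused as the segmentation output. The sketch below illustrates that general idea only; the module layout, layer sizes, and names (`AttentionTranslationGenerator`, `content_head`, `attention_head`) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AttentionTranslationGenerator(nn.Module):
    """Illustrative generator that splits translation into a content
    (semantic translation) branch and an attention (position extraction)
    branch; the attention map doubles as a soft crack-segmentation mask."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Shared encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Semantic translation branch: proposes "repaired" (crack-free) content.
        self.content_head = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 7, padding=3), nn.Tanh(),
        )
        # Position extraction branch: single-channel soft attention map in [0, 1].
        self.attention_head = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)
        content = self.content_head(feats)       # translated content
        attention = self.attention_head(feats)   # where the crack is believed to be
        # Only attended regions are replaced; the rest of the image passes through.
        translated = attention * content + (1.0 - attention) * x
        return translated, attention


if __name__ == "__main__":
    gen = AttentionTranslationGenerator()
    cracked = torch.randn(1, 3, 256, 256)         # image from the "cracked" domain
    repaired, attn = gen(cracked)
    mask = (attn > 0.5).float()                   # thresholded attention as the segmentation result
    print(repaired.shape, mask.shape)
```

Under this kind of design, the adversarial loss only needs image-level domain labels (cracked vs. crack-free), and the attention map learned for the translation can be read out directly as the weakly supervised segmentation, which is consistent with the end-to-end claim in the abstract.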

