Abstract

The need for precise segmentation of tile peeling on building façades is emphasized by its implications for efficient building maintenance, particularly in regions such as Taiwan, where tiles are the dominant form of façade protection. In response to this challenge, this research introduces YOLOM, a deep-learning-based segmentation model that combines the strengths of You Only Look Once version 7 (YOLOv7) with a BlendMask-based segmentation technique, further enriched by the Efficient Layer Aggregation Network (ELAN) to strengthen feature discrimination and extraction for tile peeling. On a dataset of 1458 images containing 4595 instances of varied tile peeling, captured during field surveys of public buildings, YOLOM showed strong segmentation performance. It exceeded ResNet-BlendMask by 1.32% in Average Precision (AP) and by 0.8% in AP at 50% Intersection over Union (IoU). Notably, YOLOM also consistently outperformed other models, leading by 6.83% in AP for small objects (APs) and by 3.3% in AP at 75% IoU. In a significant advancement, YOLOM was integrated with drone technology, extending its potential for aerial surveying of building façades. The combined approach is valuable to building maintenance teams, facilitating proactive and cost-efficient interventions. The research contributes a distinctive framework that seamlessly integrates state-of-the-art backbone and neck modules, with particular emphasis on the ELAN. The YOLOM model sets a new benchmark in AI methodologies for building maintenance and amplifies academic dialogue on AI-enhanced image segmentation.
