The location and pixel-level information of patches are both critical for the quantitative evaluation of pavement conditions. However, obtaining patch location and pixel-level information simultaneously remains a challenge in intelligent pavement patch surveys. This paper proposes a deep-learning-based patch instance segmentation network (PISNet) that employs You Only Look Once version 5 (YOLOv5) as the baseline and adds a semantic segmentation branch, providing an effective solution to this challenge. The proposed PISNet replaces the original CSPDarknet53 backbone and neck of YOLOv5 with a novel feature extractor named the symmetrical pyramid network (SPN). The SPN repeatedly fuses and transfers deep semantic features and shallow spatial localization features in the order "FPN-PAN-FPN", enhancing the multi-scale semantic expression and localization ability of the feature maps. Moreover, a modified feature selection module is incorporated into the SPN as a skip connection to aggregate more spatial detail while suppressing redundant features. Experimental results show that, compared with Mask region-based convolutional neural network (Mask R-CNN), You Only Look At Coefficients (YOLACT), YOLACT++, EfficientDet, the fully convolutional one-stage object detector (FCOS), YOLOv5m, U-Net, DeepLabv3+, and the high-resolution network with object-contextual representations (HRNet-OCR), the proposed PISNet achieves the best detection performance. Meanwhile, PISNet achieves a superior accuracy versus frames-per-second trade-off compared with Mask R-CNN, YOLACT, and YOLACT++. In particular, PISNet shows promise for real-time pavement patch detection and for detecting degraded pavement patches. Moreover, PISNet yields superior segmentation results compared with Mask R-CNN, YOLACT, YOLACT++, U-Net, HRNet-OCR, and DeepLabv3+ on the public CRACK500 dataset. Code is available at https://github.com/716HAZ/PISNet.
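For illustration, below is a minimal PyTorch sketch of the "FPN-PAN-FPN" fusion order described above. The channel width, the three-level pyramid, the 3x3 fusion convolutions, and all class and function names are assumptions for the sketch; the abstract does not specify the SPN's exact implementation, and the feature selection module is omitted here.

```python
# Minimal sketch of an "FPN-PAN-FPN" fusion order over a three-level pyramid.
# Channel width, level count, and the per-level 3x3 fusion convolutions are
# assumptions; the paper's actual SPN may differ in these details.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SymmetricalPyramidNetwork(nn.Module):
    """Fuses three feature levels top-down (FPN), bottom-up (PAN), then top-down (FPN) again."""

    def __init__(self, channels: int = 256):
        super().__init__()
        # One 3x3 convolution per level per pass to smooth the fused features.
        self.fuse = nn.ModuleList(
            nn.ModuleList(nn.Conv2d(channels, channels, 3, padding=1) for _ in range(3))
            for _ in range(3)
        )

    @staticmethod
    def _top_down(feats, convs):
        # FPN pass: propagate deep (low-resolution) semantics to shallow levels by upsampling.
        out = [feats[-1]]
        for f in reversed(feats[:-1]):
            up = F.interpolate(out[0], size=f.shape[-2:], mode="nearest")
            out.insert(0, f + up)
        return [c(f) for c, f in zip(convs, out)]

    @staticmethod
    def _bottom_up(feats, convs):
        # PAN pass: propagate shallow (high-resolution) localization detail to deep levels by downsampling.
        out = [feats[0]]
        for f in feats[1:]:
            down = F.adaptive_max_pool2d(out[-1], output_size=f.shape[-2:])
            out.append(f + down)
        return [c(f) for c, f in zip(convs, out)]

    def forward(self, p3, p4, p5):
        feats = [p3, p4, p5]
        feats = self._top_down(feats, self.fuse[0])   # FPN pass
        feats = self._bottom_up(feats, self.fuse[1])  # PAN pass
        feats = self._top_down(feats, self.fuse[2])   # second FPN pass
        return feats


if __name__ == "__main__":
    spn = SymmetricalPyramidNetwork(channels=256)
    p3, p4, p5 = (torch.randn(1, 256, s, s) for s in (64, 32, 16))
    for f in spn(p3, p4, p5):
        print(f.shape)  # per-level resolutions preserved: 64, 32, 16
```

The symmetry in the name comes from bracketing the bottom-up PAN pass with two top-down FPN passes, so each pyramid level receives both deep semantics and shallow localization detail more than once before the detection and segmentation heads consume it.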