Abstract

When signage is extracted from image data, poor visibility conditions such as insufficient illumination, rain, and low light intensity lead to low accuracy and poor boundary segmentation in vision-based detection methods. To address this problem, we propose a cross-modal latent feature fusion network for signage detection that obtains rich boundary information by combining RGB images with light detection and ranging (LiDAR) depth images, thus compensating for the pseudo-boundary phenomenon that can arise when segmenting a single RGB image. First, HRNet is used as the backbone network, and a boundary extraction module extracts boundary information from the point-cloud depth map and the RGB image; second, a feature aggregation module deeply fuses the extracted boundary information with the image features, enhancing sensitivity to boundaries; finally, boundary Intersection over Union (IoU) is introduced as an evaluation metric. The results show that the method outperforms mainstream RGB-D networks: relative to the baseline network, it improves IoU and boundary IoU by 5.5% and 6.1%, reaching accuracies of 98.3% and 96.2%, respectively.
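The boundary IoU metric referenced above restricts the standard IoU computation to a thin band of pixels around each mask's contour, so it penalizes boundary errors that region-level IoU largely ignores. The abstract does not give the paper's implementation; the sketch below is a minimal NumPy/OpenCV illustration following the common boundary IoU definition (Cheng et al., 2021), and the `dilation_ratio` parameter controlling the band width is an assumption borrowed from that reference formulation, not from this paper.

```python
import numpy as np
import cv2


def mask_to_boundary(mask: np.ndarray, dilation_ratio: float = 0.02) -> np.ndarray:
    """Reduce a binary mask to a thin band of pixels near its contour.

    The band width is proportional to the image diagonal (assumed default
    of 2%, following the common boundary IoU formulation), at least 1 px.
    """
    h, w = mask.shape
    dilation = max(1, int(round(dilation_ratio * np.sqrt(h ** 2 + w ** 2))))
    mask = mask.astype(np.uint8)
    # Zero-pad so pixels on the image border count as boundary pixels.
    padded = cv2.copyMakeBorder(mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
    kernel = np.ones((3, 3), dtype=np.uint8)
    eroded = cv2.erode(padded, kernel, iterations=dilation)[1 : h + 1, 1 : w + 1]
    # Subtracting the eroded mask leaves only pixels within `dilation`
    # px of the contour; no underflow since erosion only shrinks the mask.
    return mask - eroded


def boundary_iou(gt: np.ndarray, pred: np.ndarray, dilation_ratio: float = 0.02) -> float:
    """IoU computed only on the boundary bands of the two binary masks."""
    gt_b = mask_to_boundary(gt, dilation_ratio)
    pred_b = mask_to_boundary(pred, dilation_ratio)
    inter = np.logical_and(gt_b, pred_b).sum()
    union = np.logical_or(gt_b, pred_b).sum()
    # Convention: two empty boundaries are treated as a perfect match.
    return 1.0 if union == 0 else float(inter) / float(union)


# Toy usage: a predicted square offset by 2 px from the ground truth.
gt = np.zeros((64, 64), np.uint8)
gt[16:48, 16:48] = 1
pred = np.zeros((64, 64), np.uint8)
pred[18:50, 18:50] = 1
print(f"Boundary IoU: {boundary_iou(gt, pred):.3f}")
```

Because only pixels near the contour enter the intersection and union, a small spatial offset that barely moves region-level IoU can sharply reduce boundary IoU, which is why it is a sensible metric for the boundary-quality claims made in this abstract.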