Power transmission lines (PTLs) traverse complex environments, which leads to misdetections and omissions when fittings are recognized with cameras. To address this problem, we propose a fitting recognition approach that combines depth-attention YOLOv5 with a prior-based synthetic dataset to improve the validity of fitting recognition. First, datasets with inspection features are automatically synthesized from prior series data, so the deep learning model achieves better results with a smaller data volume and the cost of obtaining fitting datasets is reduced. Next, a unique data collection mode is proposed that uses the developed flying-walking power transmission line inspection robot (FPTLIR) as the acquisition platform. The image data obtained in this mode exhibit distinct spatiotemporal, stability, and depth differences, and fusing the two data types in the deep learning model improves recognition accuracy. Finally, a depth-attention mechanism is proposed that adjusts the attention over the image according to depth information, reducing the probability of misdetections and omissions. Test-field experiment results show that the mAP50:95 (mean average precision averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05) of our depth-attention YOLOv5 model for fittings is 68.1%, the recall is 98.3%, and the precision is 98.3%; compared with the standard YOLOv5, AP, recall, and precision increase by 5.2%, 4.8%, and 4.1%, respectively. These test-field experiments verify the feasibility of depth-attention YOLOv5. Line-field experiment results show that the mAP50:95 of the depth-attention YOLOv5 model for fittings is 64.6%, and the mAP of each class is improved compared with other attention mechanisms. The inference time of depth-attention YOLOv5 is 3 ms longer than that of the standard YOLOv5 model and 10 ms to 15 ms shorter than that of other attention mechanisms, verifying the validity of depth-attention YOLOv5. The proposed approach improves the accuracy of fitting recognition on PTLs, providing a recognition and localization basis for the automation and intelligence of inspection robots.
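The abstract does not specify how the depth-attention mechanism is implemented; the sketch below is only an illustrative assumption of how a depth-guided attention gate could re-weight a YOLOv5-style feature map. The module name `DepthAttention`, its layer sizes, and the assumption that the robot's depth map is aligned with the image are hypothetical, not the authors' design.

```python
# Minimal sketch, assuming a depth map aligned with the RGB image and a
# hypothetical DepthAttention module inserted into a YOLOv5-style neck.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthAttention(nn.Module):
    """Re-weights a feature map with a per-pixel gate computed from the
    depth map, so regions at inspection-relevant depths get more attention."""

    def __init__(self, channels: int, hidden: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.Sigmoid(),  # attention weights in (0, 1)
        )

    def forward(self, feat: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Resize the depth map (B, 1, H, W) to the feature resolution,
        # then gate the feature map channel- and pixel-wise.
        depth = F.interpolate(depth, size=feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        return feat * self.gate(depth)


if __name__ == "__main__":
    feat = torch.randn(2, 256, 20, 20)   # e.g., one neck feature map
    depth = torch.rand(2, 1, 160, 160)   # aligned depth map from the sensor
    out = DepthAttention(channels=256)(feat, depth)
    print(out.shape)                     # torch.Size([2, 256, 20, 20])
```

Because the gate only adds two small convolutions per feature level, a design of this kind is consistent with the reported few-millisecond overhead relative to the standard YOLOv5 model.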