Abstract

A drill-anchor robot is essential for efficient drilling and anchoring in coal-mine roadways. Calculating the robot's position from the positioning information of previously installed anchor rods is important for improving tunneling efficiency, so identifying and positioning installed anchor rods has become a critical problem that must be solved. To address targets that appear blurred and cannot be accurately identified under the low, uneven illumination of the roadway, we propose an improved YOLOv7 (You Only Look Once, version 7) model that fuses image enhancement with a multi-attention mechanism, trained and tested on a self-built dataset. To address the fact that traditional positioning methods cannot guarantee accuracy and efficiency simultaneously, we propose an anchor-rod positioning method that aligns the depth image with the RGB image and applies least-squares linear fitting; processing the depth map further improves positioning accuracy. The results show that the improved model raises mAP by 5.7% over YOLOv7 and accurately identifies the target. With the proposed positioning method, the error between the computed and measured coordinates of each target point does not exceed 11 mm on any axis, demonstrating high positioning accuracy and improved robustness for anchor-rod positioning in coal-mine roadways.
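The abstract names least-squares linear fitting on depth-derived points as the core of the anchor-rod positioning step. A minimal sketch of that general technique is shown below; it is not the paper's implementation, and the point data, noise levels, and function name are illustrative assumptions.

```python
import numpy as np

def fit_line_3d(points):
    """Fit a 3D line to a point cloud by total least squares.

    Returns the centroid of the points and the unit direction
    vector of the best-fit line (the first principal component).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered points: the top right-singular vector is the
    # direction that minimizes squared orthogonal distances to the line.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

# Illustrative input: noisy 3D samples along the axis of a hypothetical
# anchor rod, as might be recovered from an aligned depth/RGB pair.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.3, 50)[:, None]      # 0.3 m of rod length
axis = np.array([0.0, 0.0, 1.0])            # rod roughly along the z-axis
samples = t * axis + rng.normal(scale=0.002, size=(50, 3))
center, direction = fit_line_3d(samples)
```

With the rod axis and a point on it estimated this way, individual outlier depth readings have limited influence, which is one reason line fitting is a common choice over single-pixel depth lookups.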
