Abstract

Background: With the advancement of robot technology, firefighting robots are increasingly used for environmental reconnaissance, detection, and rescue operations, where they offer significant advantages. Visual recognition and localization are crucial foundations for robots to accomplish predefined tasks efficiently.

Methods: This article proposes a control method for firefighting robot valve operations based on the YOLO v5 target detection algorithm, addressing the low accuracy and efficiency of valve closure operations during hazardous gas or liquid leaks. Using a collected dataset of valve images and deep learning-based target detection, the YOLO v5 network is trained and tested to improve valve recognition accuracy and speed. The approach is further validated through simulations on the Webots platform, where a mobile robot is controlled by visual guidance to perform valve closure tasks in a simulated hazardous environment.

Results: The YOLO v5-based valve recognition achieves an accuracy of 82.36%, a recall rate of 94.82%, and a mean Average Precision of 90.74%. In the Webots simulation, the robotic system accurately identifies and locates valves, smoothly rotates and tightens them, and then retracts the mechanical arm. The simulation results demonstrate the effectiveness of the proposed strategy.

Conclusions: The valve recognition strategy presented in this article has important implications for enabling firefighting robots to close valves and aid in disaster relief, particularly in harsh environments.
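To make the detection step concrete, the following is a minimal sketch of how a custom-trained YOLO v5 valve detector might be queried for localization. It assumes the Ultralytics YOLOv5 PyTorch Hub interface and a hypothetical weights file name (valve_best.pt); the actual training configuration and dataset are those described in the article and are not reproduced here.

```python
# Minimal sketch: valve detection with a custom-trained YOLO v5 model.
# Assumptions (not from the article): the weights file name "valve_best.pt",
# the Ultralytics YOLOv5 PyTorch Hub interface, and the confidence threshold.
import torch

# Load a YOLOv5 model from locally trained custom weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="valve_best.pt")
model.conf = 0.5  # discard low-confidence detections (illustrative threshold)


def locate_valve(image_path: str):
    """Return the pixel centre of the most confident valve detection, or None."""
    results = model(image_path)
    detections = results.xyxy[0]  # tensor rows: [x1, y1, x2, y2, conf, class]
    if detections.shape[0] == 0:
        return None
    best = detections[detections[:, 4].argmax()]  # highest-confidence box
    x1, y1, x2, y2, conf, cls = best.tolist()
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0


if __name__ == "__main__":
    print("valve centre (px):", locate_valve("valve_sample.jpg"))
```

The image-plane centre of the detected valve can then serve as the visual-guidance signal for the mobile robot, as sketched next.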
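The sketch below illustrates, under stated assumptions, how such a visual-guidance loop could be wired into a Webots Python controller: the robot steers so that the detected valve centre stays near the image centre. The device names ("camera", "left wheel motor", "right wheel motor"), the gains, and the use of locate_valve() from the previous sketch are illustrative assumptions, not the article's actual controller.

```python
# Minimal sketch: Webots controller fragment for visually guided approach to a valve.
# Assumptions (not from the article): device names, speeds, and the proportional gain.
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

camera = robot.getDevice("camera")
camera.enable(timestep)

left_motor = robot.getDevice("left wheel motor")
right_motor = robot.getDevice("right wheel motor")
for m in (left_motor, right_motor):
    m.setPosition(float("inf"))  # switch to velocity control
    m.setVelocity(0.0)

BASE_SPEED = 2.0  # rad/s, illustrative
GAIN = 0.01       # proportional steering gain, illustrative

while robot.step(timestep) != -1:
    camera.saveImage("frame.jpg", 100)           # hand the current frame to the detector
    centre = locate_valve("frame.jpg")           # from the detection sketch above
    if centre is None:
        left_motor.setVelocity(0.5)              # rotate slowly to search for a valve
        right_motor.setVelocity(-0.5)
        continue
    error = centre[0] - camera.getWidth() / 2.0  # horizontal pixel offset from image centre
    left_motor.setVelocity(BASE_SPEED + GAIN * error)
    right_motor.setVelocity(BASE_SPEED - GAIN * error)
```

Once the robot is positioned at the valve, the manipulator's grasp, rotate-and-tighten, and retract sequence described in the Results would take over; that sequence is not sketched here.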
