Abstract
Detection of visual markers, such as circular markers or quick response codes, is a common approach to positioning wall-climbing robots. However, when the camera is far from the wall-climbing robot (e.g., 20 m), these markers become extremely blurred and difficult to detect. In this paper, a convolutional neural network-based positioning scheme comprising a global bounding box detector and a local wheel detector is proposed. The lightweight local wheel detector can quickly and accurately detect the four wheel points of a distant wall-climbing robot, and the detected wheel points can be used to calculate its position and direction angle. Our wheel detector has a single-frame processing time of 72.2 ms on a CPU and 7.1 ms on a GPU, where the latter meets the real-time positioning requirements of the wall-climbing robot. We also developed an efficient cost function for wheel matching between video frames. Simulation results and multiple test videos confirmed that the proposed cost function can match wheels between video frames perfectly. The high performance of this positioning system indicates that it may be used in a variety of industrial applications.
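As a rough illustration of how four detected wheel points might map to a position and direction angle, the sketch below takes the centroid of the wheels as the position and the vector from the rear-wheel midpoint to the front-wheel midpoint as the heading. The wheel ordering (front-left, front-right, rear-left, rear-right) and the assumption that the points are already expressed in wall-plane coordinates are illustrative only; the paper's actual geometry and cost function for inter-frame wheel matching are not specified in the abstract.

```python
import numpy as np

def pose_from_wheels(wheels):
    """Estimate robot position and heading angle from four wheel points.

    `wheels` is a (4, 2) array assumed ordered as
    [front-left, front-right, rear-left, rear-right]
    in wall-plane coordinates (an illustrative convention,
    not necessarily the paper's).
    """
    wheels = np.asarray(wheels, dtype=float)
    position = wheels.mean(axis=0)        # centroid of the four wheel points
    front_mid = wheels[:2].mean(axis=0)   # midpoint of the front wheels
    rear_mid = wheels[2:].mean(axis=0)    # midpoint of the rear wheels
    heading = front_mid - rear_mid        # vector pointing from rear to front
    angle = np.degrees(np.arctan2(heading[1], heading[0]))
    return position, angle

# Hypothetical example: robot slightly rotated on the wall plane
pts = [(1.0, 2.0), (1.1, 1.0), (0.0, 1.9), (0.1, 0.9)]
pos, ang = pose_from_wheels(pts)
print(pos, ang)   # centroid position and heading angle in degrees
```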