Abstract
Mobile service robots have great potential to provide assistance in a wide range of work areas. To develop a mobile service robot that is dynamically balanced for faster movement and taller manipulation reach, we designed and prototyped J4.alpha, intended for swift navigation and nimble manipulation. Previously, we devised a purely visual method based on a supervised deep learning model for real-time recognition of nodal locations; four low-resolution RGB cameras mounted around J4.alpha capture the surrounding visual features for training and detection. Because the method is designed for ease of implementation, fast real-time operation, accurate detection, and low cost, this study further improves its accuracy and practicality. Specifically, a set of expectation rules is introduced to reject outlier detections, and a training-renewal scheme is devised to react effectively to environmental modifications. In our previous tests, the precision and recall of location coordinate detection by the ConvNet models were generally between 0.78 and 0.91; with the expectation rules, precision and recall improve by approximately 10%. A large-scale field test is also carried out in both corridor and factory scenarios; the detection accuracy of the proposed method is verified for 2 m and 0.5 m nodal intervals. The training-renewal scheme, designed to capture and reflect environmental modifications, is also shown to be effective.
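To illustrate the general idea of gating ConvNet detections with expectation rules, the following is a minimal, hypothetical sketch (not the paper's implementation); the `Detection` structure, the speed bound, and the confidence threshold are all assumptions introduced here for illustration only.

```python
# Hypothetical sketch: reject a nodal-location detection that violates simple
# expectation rules, e.g. one implying an impossible jump from the last
# accepted node given a bound on the robot's speed.
from dataclasses import dataclass


@dataclass
class Detection:
    x: float           # detected node coordinate in the map frame (m)
    y: float
    confidence: float  # model confidence score in [0, 1]


def accept(det: Detection,
           last_xy: tuple,
           dt: float,
           max_speed: float = 1.5,   # assumed robot speed bound (m/s)
           min_conf: float = 0.5) -> bool:
    """Return True if the detection satisfies the expectation rules."""
    if det.confidence < min_conf:
        return False
    # The robot cannot have moved farther than max_speed * dt since the last
    # accepted node, so detections outside that radius are treated as outliers.
    dx, dy = det.x - last_xy[0], det.y - last_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_speed * dt
```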