Abstract
This work aims to facilitate robotic-assisted limited-access manufacturing, where a mechanic manually performs work in an enclosed space while a robot guides a camera to allow the mechanic to observe visually occluded operations. Gesture control allows the mechanic to adjust the camera view without contact-based interaction, such as button presses or manual repositioning of the robot, either of which would require setting down tools or exiting the enclosed space. Because the only input to the robotic platform's gesture recognition system is a continuously tracked hand, the motions of work can be confused with motions intended to communicate control. The main contribution of this article is a separability metric Ψ for systematically selecting a set of control gestures that can be easily distinguished from a given set of work gestures. The context-specific control gesture selection process is first validated on a benchmark dataset. The process is then implemented on a hand-following robot, and experimental results across eight subjects show that the metric-based selection of the control gesture set improves online gesture recognition, even in the presence of application-specific distortions in the gestures.
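The abstract names the separability metric Ψ but does not define it, so the following is a purely illustrative sketch of the selection idea it describes, not the paper's method. It assumes gestures are represented as 2-D hand-position trajectories and uses a minimum dynamic-time-warping (DTW) distance to the work-gesture set as a stand-in for Ψ; all gesture names, trajectories, and the `select_control_gestures` helper are hypothetical.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two trajectories of shape (T, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def separability(candidate, work_gestures):
    """Stand-in for the paper's Psi: a candidate control gesture is only as
    separable as its distance to the *closest* work gesture."""
    return min(dtw_distance(candidate, w) for w in work_gestures)

def select_control_gestures(candidates, work_gestures, k):
    """Rank candidates by the separability proxy; keep the k most distinguishable."""
    scored = sorted(candidates.items(),
                    key=lambda kv: separability(kv[1], work_gestures),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy example with synthetic 2-D hand trajectories (purely illustrative).
t = np.linspace(0, 2 * np.pi, 50)
work = [np.c_[t, 0.1 * np.sin(3 * t)],            # small back-and-forth work motion
        np.c_[0.5 * t, 0.05 * np.cos(5 * t)]]
candidates = {
    "circle":   np.c_[np.cos(t), np.sin(t)],
    "wave":     np.c_[t, 0.15 * np.sin(3 * t)],   # too similar to a work motion
    "triangle": np.c_[np.abs((t % 2) - 1), t / (2 * np.pi)],
}
print(select_control_gestures(candidates, work, k=2))
```

Under this stand-in measure, the "wave" candidate would score poorly because it nearly duplicates a work motion, mirroring the abstract's point that control gestures should be chosen for their distance from the given work-gesture set.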