Abstract

Navigating a mobile robot in an indoor or outdoor environment is an interesting research area for human-robot collaboration (HRC). In such a scenario, hand gestures offer a unique means of non-verbal communication between the user and the robot. This paper proposes a novel real-time hand gesture recognition (HGR) technique for mobile robot control. A compact convolutional neural network (CNN) based HGR system, denoted the densely connected residual channel attention module (DRCAM), is proposed to recognize vision-based hand gestures effectively. Since fingers are the most important cue for HGR, an attention mechanism using multi-scale representation is proposed that emphasizes finger information more effectively. The proposed CNN model employs a cascading structure of residual blocks with a multi-scale channel attention module to learn low- to high-level information of hand gestures. In addition, the cascading structures are connected through dense connectivity, which strengthens feature propagation and facilitates feature reuse. Experiments are conducted on an in-house developed dataset and the publicly available American sign language finger-spelling (ASL-FS) benchmark dataset to evaluate the performance of the proposed technique. The experimental results show that the proposed DRCAM network outperforms state-of-the-art methods in terms of mean accuracy under the leave-one-subject-out cross-validation test. Finally, the trained model is used to develop a software-based user interface for controlling a mobile robot in a real-time environment.
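The abstract describes the DRCAM design only at a high level: residual blocks equipped with multi-scale channel attention, cascaded and joined by dense connections. The sketch below, in PyTorch, is one plausible reading of that description; the block count, channel widths, reduction ratio, and the choice of average plus max pooling as the two attention "scales" are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class MultiScaleChannelAttention(nn.Module):
    """SE-style channel attention fed by two pooling scales (illustrative)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels * 2, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        avg = x.mean(dim=(2, 3))   # global average pooling
        mx = x.amax(dim=(2, 3))    # global max pooling (second scale)
        w = self.fc(torch.cat([avg, mx], dim=1))
        return x * w.unsqueeze(-1).unsqueeze(-1)  # reweight channels


class ResidualAttentionBlock(nn.Module):
    """Residual block whose body output is gated by channel attention."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = MultiScaleChannelAttention(channels)

    def forward(self, x):
        return x + self.attn(self.body(x))


class DRCAMSketch(nn.Module):
    """Cascade of residual attention blocks joined by dense connectivity:
    each block sees the concatenation of all earlier feature maps,
    fused back to a fixed width by a 1x1 convolution (feature reuse)."""

    def __init__(self, channels=32, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            ResidualAttentionBlock(channels) for _ in range(num_blocks)
        )
        self.fuse = nn.ModuleList(
            nn.Conv2d(channels * (i + 1), channels, 1) for i in range(num_blocks)
        )

    def forward(self, x):
        feats = [x]
        for block, fuse in zip(self.blocks, self.fuse):
            x = block(fuse(torch.cat(feats, dim=1)))
            feats.append(x)  # dense connection: keep every block's output
        return x
```

In a full HGR pipeline this trunk would be preceded by a stem convolution on the input image and followed by pooling and a classification head over the gesture classes; those stages are omitted here for brevity.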
