Abstract
In this paper, we design a robust and user-friendly human–robot interface (HRI) system for our intelligent mobile robot based only on natural human gestures. It consists of a triple-face detection method and a fuzzy logic controller (FLC)–Kalman filter tracking system that verifies users and predicts their current positions in a dynamic and cluttered working environment. In addition, a combined classifier built from principal component analysis (PCA) and a back-propagation artificial neural network (BPANN) identifies single and successive commands, defined by facial positions and hand gestures, with dynamic programming (DP) aligning the successive commands for real-time recognition. The users can therefore instruct the HRI system to perform member recognition or expression recognition in response to their gesture commands, based on linear discriminant analysis (LDA) and the BPANN, respectively. The experimental results show that the proposed HRI system performs face detection and tracking accurately in real time and reacts robustly to the corresponding gesture commands at eight frames per second (fps).
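To make the tracking stage concrete, the following is a minimal sketch of the predict/update cycle behind a Kalman-filter face tracker with a constant-velocity state model. It is not the paper's implementation: the class name `FacePositionKF`, the state layout, and all noise covariances are illustrative assumptions, and where the paper uses an FLC to adapt the tracker, this sketch simply fixes the noise parameters.

```python
import numpy as np

class FacePositionKF:
    """Hypothetical Kalman-filter tracker for a detected face centre."""

    def __init__(self, dt=0.125):  # ~8 fps frame interval, as reported
        # State: [x, y, vx, vy]; constant-velocity transition model.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        # Only the detected face centre (x, y) is observed.
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 1e-2   # process noise (assumed value)
        self.R = np.eye(2) * 4.0    # measurement noise (assumed value)
        self.x = np.zeros(4)        # state estimate
        self.P = np.eye(4) * 10.0   # estimate covariance

    def predict(self):
        """Predict the face position for the next frame."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the prediction with a detected face centre z = (x, y)."""
        y = np.asarray(z, dtype=float) - self.H @ self.x       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Usage: predict where the face will be, then correct with each detection.
kf = FacePositionKF()
for detection in [(120, 80), (124, 82), (129, 85)]:
    predicted = kf.predict()
    estimate = kf.update(detection)
```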