Abstract

Vision-based hand gesture recognition is an important contemporary research problem in collaborative robotics for human-robot coexisting environments. The problem becomes more complex in challenging environments, where a robot needs to correctly recognize a human gesture to perform navigation or another designated job even under poor illumination, occlusion, etc. This work proposes a novel approach that utilizes regularized robust coding (RRC)-based models to solve such hand gesture detection problems in real-life, challenging situations. The RRC model performs robust regression of a signal or image and is known as an improvement over the classical sparse representation-based classification (SRC) model. In this work, we propose three novel variants of a weight-thresholding mechanism in conjunction with RRC (named WTRRC algorithms), which employ more error-tolerant logistic functions for the weight update when the dictionary is formed from real-world, photometrically irregular hand gesture images. Extensive case studies in real-world environments (i) with poor illumination, and (ii) with both poor illumination and occlusions firmly establish the superior performance of the proposed WTRRC variants compared to other state-of-the-art algorithms on such collaborative robotics problems.
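The core mechanism described above can be illustrated with a minimal sketch of iteratively reweighted coding, where a logistic function of the per-pixel residual down-weights outlier pixels (e.g. those corrupted by occlusion or poor lighting). This is only an assumption-laden illustration: the parameter names (`mu`, `delta`, `lam`), the ridge regularizer, and the fixed logistic parameters are simplifications, not the paper's exact WTRRC formulation, which additionally applies weight thresholding.

```python
import numpy as np

def logistic_weights(residual, mu=1.0, delta=0.5):
    # Logistic (sigmoid-shaped) weighting: pixels with large squared
    # reconstruction error receive weights near 0, inliers near 1.
    # mu (steepness) and delta (error tolerance) are illustrative values.
    return 1.0 / (1.0 + np.exp(mu * (residual ** 2 - delta)))

def reweighted_coding(D, y, lam=0.01, n_iter=10):
    # Iteratively reweighted, ridge-regularized coding sketch.
    # D: dictionary of training images (pixels x atoms), y: query image.
    # Actual RRC-style models use a sparsity prior and adaptive parameters;
    # this sketch only shows the alternating code/weight updates.
    n = D.shape[1]
    W = np.ones(len(y))  # start with all pixels equally trusted
    alpha = np.zeros(n)
    for _ in range(n_iter):
        # Solve the weighted normal equations (D^T W D + lam I) a = D^T W y.
        Wd = W[:, None] * D
        alpha = np.linalg.solve(D.T @ Wd + lam * np.eye(n), Wd.T @ y)
        # Re-estimate pixel weights from the current residual.
        W = logistic_weights(y - D @ alpha)
    return alpha, W
```

Classification would then assign the query to the class whose atoms yield the smallest weighted reconstruction residual, as in SRC-style pipelines.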
