Vision-based hand gesture recognition is an important contemporary research problem in collaborative robotics, where humans and robots coexist in a shared environment. The problem becomes more complex in challenging settings, where a robot must correctly recognize a human gesture to perform navigation or some other designated job despite poor illumination, occlusion, and similar disturbances. This work proposes a novel approach that employs regularized robust coding (RRC)-based models to solve such hand gesture recognition problems in real-life, challenging situations. The RRC model performs robust regression of a signal or image and is known as an improvement over the classical sparse representation-based classification (SRC) model. We propose three novel variants of a weight-thresholding mechanism in conjunction with RRC (named WTRRC algorithms), which employ more error-tolerant logistic functions for the weight update when the dictionary is formed from real-world, photometrically irregular hand gesture images. Extensive case studies in real-world environments (i) with poor illumination and (ii) with both poor illumination and occlusion firmly establish the superior performance of the proposed WTRRC variants compared with other state-of-the-art algorithms on such collaborative robotics problems.
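For concreteness, the logistic weight update that RRC applies, and on which the proposed WTRRC variants build, can be sketched as below. This is a minimal illustration assuming the standard RRC logistic weight function $w_i = 1/(1 + \exp(-\mu(\delta - e_i^2)))$ together with an $\ell_2$-regularized coding step; the function names and parameter defaults (`mu`, `delta`, `lam`) are hypothetical, and the paper's three weight-thresholding variants are not reproduced here.

```python
import numpy as np

def logistic_weights(e, mu, delta):
    """RRC-style logistic weight: ~1 for small residuals, ~0 for outliers."""
    return 1.0 / (1.0 + np.exp(-mu * (delta - e ** 2)))

def rrc_code(y, D, mu=8.0, delta=0.5, lam=1e-2, n_iters=10):
    """Iteratively reweighted coding of a test image y over dictionary D.

    Each pass re-estimates per-pixel weights from the coding residual,
    then solves a weighted l2-regularized least-squares problem for the
    coding vector alpha. All parameter defaults are illustrative.
    """
    n_atoms = D.shape[1]
    alpha = np.zeros(n_atoms)
    for _ in range(n_iters):
        e = y - D @ alpha                   # per-pixel coding residual
        w = logistic_weights(e, mu, delta)  # down-weight occluded/dark pixels
        Dw = D * w[:, None]                 # W @ D without forming diag(w)
        # normal equations of min_a ||W^(1/2)(y - D a)||^2 + lam ||a||^2:
        #   (D^T W D + lam I) a = D^T W y
        alpha = np.linalg.solve(D.T @ Dw + lam * np.eye(n_atoms), Dw.T @ y)
    return alpha, w

def classify(y, D, labels):
    """SRC-style decision: pick the class whose atoms best reconstruct y
    under the final weights (class-wise weighted residual comparison)."""
    alpha, w = rrc_code(y, D)
    residuals = {}
    for c in set(labels):
        mask = np.array([lbl == c for lbl in labels])
        residuals[c] = np.linalg.norm(np.sqrt(w) * (y - D[:, mask] @ alpha[mask]))
    return min(residuals, key=residuals.get)
```

A weight-thresholding variant, as described in the abstract, would presumably modify or threshold `w` between the reweighting and re-coding steps; the exact mechanisms are defined in the body of the paper.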