Abstract

We present a novel hand localization technique for 3D user interfaces. Our method is designed to overcome the difficulty of fitting anatomical hand models, which often fail to converge, or converge with large errors, in complex scenes or under suboptimal imagery. We learn a discriminative model of the hand from depth images using fast-to-compute features and a Random Forest classifier. The learned model is then combined with a spatial clustering algorithm to localize the hand position. We propose three formulations of low-level image features for use in model training. We evaluate the performance of our method on low-resolution depth maps of users two to three meters from the sensor in natural poses. Our method can detect an arbitrary number of hands per scene, and preliminary results show that it is robust to suboptimal imagery.
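
The abstract gives no implementation, but the pipeline it describes (per-pixel classification of depth images with fast features and a Random Forest, followed by spatial clustering of hand-labeled pixels) can be sketched as below. This is a minimal, illustrative sketch only: the depth-difference features (in the style of Shotton et al.), the forest settings, the choice of DBSCAN for the clustering step, and the synthetic data are all assumptions, not the paper's actual formulations.

```python
# Illustrative sketch only; feature design, forest parameters, and DBSCAN
# are placeholder assumptions, not the method published in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import DBSCAN

def depth_diff_features(depth, pixels, offsets):
    """Per-pixel depth-difference features (Shotton-style assumption):
    f_{u,v}(x) = d(x + u/d(x)) - d(x + v/d(x)).
    Scaling the offsets by inverse depth gives approximate depth invariance."""
    h, w = depth.shape
    d = depth[pixels[:, 0], pixels[:, 1]][:, None]  # depth at query pixels
    feats = np.empty((len(pixels), len(offsets)), np.float32)
    for j, (u, v) in enumerate(offsets):
        # offset probe locations, clamped to the image bounds
        pu = np.clip((pixels + u / d).astype(int), 0, [h - 1, w - 1])
        pv = np.clip((pixels + v / d).astype(int), 0, [h - 1, w - 1])
        feats[:, j] = (depth[pu[:, 0], pu[:, 1]]
                       - depth[pv[:, 0], pv[:, 1]])
    return feats

rng = np.random.default_rng(0)
offsets = [(rng.uniform(-60, 60, 2), rng.uniform(-60, 60, 2))
           for _ in range(40)]

# --- training on per-pixel hand / not-hand labels (synthetic stand-ins) ---
depth = rng.uniform(2.0, 3.0, (120, 160)).astype(np.float32)  # low-res map
pixels = np.column_stack([rng.integers(0, 120, 2000),
                          rng.integers(0, 160, 2000)])
labels = rng.integers(0, 2, 2000)  # would come from annotated depth data
forest = RandomForestClassifier(n_estimators=20, max_depth=12, n_jobs=-1)
forest.fit(depth_diff_features(depth, pixels, offsets), labels)

# --- localization: classify every pixel, cluster the hand-labeled ones ---
ys, xs = np.mgrid[0:120, 0:160]
allpix = np.column_stack([ys.ravel(), xs.ravel()])
hand = allpix[forest.predict(depth_diff_features(depth, allpix, offsets)) == 1]
if len(hand):
    cluster_ids = DBSCAN(eps=5, min_samples=20).fit_predict(hand)
    centers = [hand[cluster_ids == c].mean(axis=0)
               for c in set(cluster_ids) if c != -1]  # one per detected hand
```

Because the clustering step operates on whatever pixels the forest labels as hand, the number of detected clusters is unconstrained, which is consistent with the abstract's claim of handling an arbitrary number of hands per scene.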
