Abstract

Among the many techniques for interacting with 3D environments, gesture-based input appears promising. However, because of limited computing hardware capabilities, such interfaces have had to be built either on standard tracking devices or on restricted image-based video tracking algorithms. As computing power keeps increasing, more complex video analysis, such as real-time model-based tracking, is now within reach. Using a model-based approach for unencumbered input gives us the advantage of extracting a low-level hand description that is useful for building natural interfaces. The algorithm we developed relies on a 3D polygonal hand model whose pose parametrization is iteratively refined so that its 2D projection matches the input 2D image more closely. Relying on the graphics hardware for fast 2D projection is critical, and adding more cameras helps cope with occlusion.
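
The following is a minimal sketch of the iterative refinement idea described above: pose parameters are adjusted so that the rendered 2D projection of the model better matches an observed silhouette. The `render_silhouette`, `silhouette_error`, and `refine_pose` names are hypothetical placeholders, and the simple finite-difference descent stands in for whatever optimization the paper actually uses; a real system would rasterize the 3D polygonal hand model on the graphics hardware rather than draw a toy shape.

```python
# Sketch of model-based pose refinement, assuming a hypothetical
# render_silhouette() that projects the model with given pose parameters
# into a binary 2D silhouette image.
import numpy as np

def render_silhouette(pose, shape=(64, 64)):
    # Placeholder renderer: draws a filled circle driven by the pose vector
    # (cx, cy, r). A real tracker would project the 3D polygonal hand model.
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    cx, cy, r = pose
    return ((xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2).astype(np.float32)

def silhouette_error(pose, observed):
    # Pixel-wise mismatch between rendered and observed silhouettes.
    return np.sum((render_silhouette(pose, observed.shape) - observed) ** 2)

def refine_pose(pose, observed, steps=100, delta=0.5, lr=0.05):
    # Iteratively nudge the pose parameters so the model's 2D projection
    # matches the input image more closely (finite-difference descent).
    pose = np.asarray(pose, dtype=np.float64)
    for _ in range(steps):
        grad = np.zeros_like(pose)
        base = silhouette_error(pose, observed)
        for i in range(pose.size):
            bumped = pose.copy()
            bumped[i] += delta
            grad[i] = (silhouette_error(bumped, observed) - base) / delta
        pose -= lr * grad
    return pose

if __name__ == "__main__":
    observed = render_silhouette((40.0, 30.0, 12.0))   # synthetic "camera" image
    estimate = refine_pose((32.0, 32.0, 10.0), observed)
    print("refined pose:", estimate)
```

In the same spirit as the abstract, the expensive step is the repeated rendering of the model; offloading that projection to the graphics hardware, and comparing against silhouettes from several cameras, keeps the refinement loop fast and robust to occlusion.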
