Abstract
Mobile wearable computers are intended to provide users with real-time access to information in a natural and unobtrusive manner. Computing and sensing in these devices must be reliable, easy to interact with, transparent, and configurable to support needs of varying complexity. This paper presents a robust vision-based fingertip tracking algorithm, combined with audio-based control commands, integrated into a multimodal unobtrusive user interface. The interface lets the user segment out objects of interest in the environment by encircling them with a pointing fingertip. To quickly extract the encircled objects from a complex scene, the interface uses a single head-mounted camera to capture color images, which are then processed in four stages: color segmentation, fingertip shape analysis, perturbation model learning, and robust fingertip tracking. The interface is designed to remain robust to changes in the environment and to the user's movements by incorporating a state-space estimation algorithm with uncertain models, which limits the influence of uncertain environmental conditions on fingertip tracking performance by adapting the tracking model to compensate for the uncertainties inherent in data collected with a wearable computer.
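To make the pipeline concrete, the following is a minimal sketch, not the authors' implementation, of the stages named above: skin-color segmentation of a head-mounted camera frame, a crude fingertip shape heuristic, and a standard Kalman filter standing in for the paper's state-space estimation with uncertain models. The HSV skin bounds, noise covariances, and topmost-point heuristic are all illustrative assumptions.

```python
# Sketch of the abstract's tracking pipeline using OpenCV.
# Assumptions (not from the paper): HSV skin range, noise covariances,
# and "topmost contour point" as the fingertip shape heuristic.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state: (x, y, dx, dy); measurement: (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed

def track_fingertip(frame_bgr):
    """Return a smoothed (x, y) fingertip estimate, or None if no hand."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Color segmentation: keep skin-colored pixels (illustrative bounds).
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    kf.predict()  # propagate the motion model every frame
    if not contours:
        return None  # tracker coasts on its prediction until the next hit
    hand = max(contours, key=cv2.contourArea)
    # Shape analysis stand-in: take the topmost contour point as the tip.
    tip = min(hand[:, 0, :], key=lambda p: p[1])
    meas = np.array([[np.float32(tip[0])], [np.float32(tip[1])]])
    est = kf.correct(meas)  # fuse measurement with the predicted state
    return float(est[0, 0]), float(est[1, 0])
```

In the paper's setting, the Kalman filter above would be replaced by the robust estimator that adapts its model to uncertain conditions; the fixed covariances here are the part that adaptation would replace.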