Abstract

Building a real-time fingertip-gesture-based interface for human–computer interaction remains challenging due to sensor noise, changing light levels, and the difficulty of tracking a fingertip across a variety of subjects. Using fingertip tracking as a virtual mouse is a popular way to interact with computers without a physical mouse device. In this work, we propose a novel virtual-mouse method using RGB-D images and fingertip detection. The hand region of interest and the center of the palm are first extracted using depth and skeleton-joint information from a Microsoft Kinect Sensor version 2, and the hand region is then converted into a binary image. The hand contour is extracted and described by a border-tracing algorithm. The K-cosine algorithm then detects the fingertip location from the hand-contour coordinates. Finally, the fingertip location is mapped to the RGB image to control the mouse cursor through a virtual screen. The system tracks fingertips in real time at 30 FPS on a desktop computer using a single CPU and a Kinect v2. Experimental results showed a high accuracy level, and the system works well in real-world environments on a single CPU. This fingertip-gesture-based interface allows humans to interact with computers easily by hand.
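
The pipeline above (palm-centered hand segmentation, border tracing, K-cosine fingertip detection, virtual-screen mapping) can be illustrated with a minimal sketch of the K-cosine step. The sketch assumes a hand contour has already been extracted as an ordered list of points (for example, by a border-tracing routine) and that the palm center is known; the function name, the offset k, and the 60-degree threshold are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def k_cosine_fingertips(contour, palm_center, k=16, angle_thresh_deg=60.0):
    """Detect fingertip candidates on a closed hand contour using the K-cosine measure.

    contour      : (N, 2) array of (x, y) contour points in traversal order (closed curve).
    palm_center  : (x, y) coordinates of the palm center, used to reject concave valleys.
    k            : neighborhood offset along the contour (illustrative value).
    """
    pts = np.asarray(contour, dtype=np.float32)
    palm = np.asarray(palm_center, dtype=np.float32)
    n = len(pts)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))

    tips = []
    for i in range(n):
        p = pts[i]
        a = pts[(i - k) % n] - p          # vector to the k-th predecessor on the contour
        b = pts[(i + k) % n] - p          # vector to the k-th successor on the contour
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue
        cos_angle = float(np.dot(a, b) / denom)
        # A large cosine means a sharp bend: either a fingertip or a valley between fingers.
        if cos_angle > cos_thresh:
            # Keep convex peaks only: the point should lie farther from the palm center
            # than the midpoint of its two k-neighbors.
            midpoint = (pts[(i - k) % n] + pts[(i + k) % n]) / 2.0
            if np.linalg.norm(p - palm) > np.linalg.norm(midpoint - palm):
                tips.append((i, cos_angle))
    return tips
```

Contour points whose cosine exceeds the threshold mark sharp bends; the palm-center comparison keeps convex peaks (fingertips) and discards the concave valleys between fingers.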

Highlights

  • With the development of augmented-reality technology, researchers are working to reduce people’s workload while increasing their productivity by studying human–computer interaction (HCI)

  • Hand-gesture recognition, as a natural user interface (NUI), is an important topic in HCI

  • We propose a gesture-based interface where users interact with a computer using fingertip detection in RGB with depth (RGB-D) inputs

Summary

Introduction

With the development of augmented-reality technology, researchers are working to reduce people’s workload while increasing their productivity by studying human–computer interaction (HCI). Some systems use depth images from the Kinect and achieve high speeds, avoiding the disadvantages of traditional RGB cameras by tracking depth maps from frame to frame [18, 22, 28]. These methods use a complex mesh model and achieve real-time performance. We propose a gesture-based interface where users interact with a computer using fingertip detection in RGB-D inputs. It provides simultaneous fingertip tracking for up to six people and selects the main person to control the mouse cursor, focusing on the right hand.
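
As a rough illustration of the person-selection and virtual-screen steps, the sketch below picks the closest tracked person as the controlling user and linearly maps the right-hand fingertip from a virtual-screen rectangle in the RGB image to desktop cursor coordinates. All function names, data shapes, and numeric values here are assumptions for illustration; the actual system derives person and hand positions from Kinect v2 skeleton data.

```python
import numpy as np

def select_main_person(bodies):
    """bodies: {person_id: distance_to_sensor_in_meters}. Choose the closest tracked person."""
    return min(bodies, key=bodies.get)

def fingertip_to_cursor(tip_xy, virtual_rect, screen_size):
    """Linearly map a fingertip point inside the virtual screen to desktop coordinates.

    tip_xy       : (x, y) fingertip position in RGB-image pixels.
    virtual_rect : (x0, y0, x1, y1) of the virtual screen in RGB-image pixels.
    screen_size  : (width, height) of the desktop in pixels.
    """
    x0, y0, x1, y1 = virtual_rect
    # Normalize the fingertip position within the virtual screen, clamped to [0, 1].
    u = np.clip((tip_xy[0] - x0) / (x1 - x0), 0.0, 1.0)
    v = np.clip((tip_xy[1] - y0) / (y1 - y0), 0.0, 1.0)
    return int(u * screen_size[0]), int(v * screen_size[1])

# Example usage with made-up values:
main_id = select_main_person({0: 1.8, 1: 2.4, 2: 3.1})                        # -> person 0
cursor = fingertip_to_cursor((640, 300), (400, 200, 1100, 650), (1920, 1080))  # -> (658, 240)
```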

Related work
Proposed method
Hand detection and segmentation
Hand-contour extraction
Fingertip detection and tracking
Target-person locking
Virtual screen matching
Virtual mouse
Experimental results
Virtual-mouse performance analysis
Fingertip tracking in different conditions
Performance of multiple people tracking
Comparison with other approaches
Conclusions