Abstract

New low-cost sensors and free, open libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds obtained from range images captured by an RGBD sensor. It works in real time and does not require visual markers, camera calibration or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light changes. Furthermore, the method was designed as a human interface for the remote control of domestic or industrial devices. In this paper, the method was tested by operating a robotic hand: first, the human hand was recognized and the fingers were detected; second, the movement of the fingers was analysed and mapped so that it could be imitated by the robotic hand.
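As context for the point-cloud pipeline described above, the sketch below shows a generic back-projection of a range image into a 3D point cloud. It is only an illustration, not the paper's implementation: the intrinsic parameters used here are nominal Kinect-style placeholder values supplied by the sensor driver, not values from the paper, which stresses that its own pipeline does not rely on an explicit calibration step.

    import numpy as np

    def depth_to_point_cloud(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
        """Back-project a depth image (in metres) into an N x 3 point cloud.

        fx, fy, cx, cy are nominal Kinect-style intrinsics used here only
        as placeholders; the paper does not specify a calibration.
        """
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m
        valid = z > 0                      # drop pixels with no depth reading
        x = (u - cx) * z / fx              # pinhole-model back-projection
        y = (v - cy) * z / fy
        return np.stack([x[valid], y[valid], z[valid]], axis=-1)

    # Example with a synthetic 480x640 depth frame (flat plane at 1 m)
    if __name__ == "__main__":
        depth = np.ones((480, 640), dtype=np.float32)
        cloud = depth_to_point_cloud(depth)
        print(cloud.shape)  # (307200, 3)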

Highlights

  • The recognition and tracking processes of non-rigid 3D objects have been widely studied

  • The second ROS node is a client of this service; it implements the human hand recognition algorithm and can be executed on a different computer (see the client sketch after this list)

  • To carry out the hand recognition, several video sequences of human hand movements were analysed. These sequences were captured with a Kinect from different viewpoints and under different illumination conditions (Figures 7-8)
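The highlight about the two ROS nodes describes a service/client split: one node exposes the sensor data as a service, and a second node, possibly running on another machine, consumes it and performs the hand recognition. The paper's actual service definition is not reproduced here, so the sketch below assumes a simple std_srvs/Trigger interface under a hypothetical name, /hand_recognition/run, and shows only the client side of such a split.

    #!/usr/bin/env python
    # Minimal ROS 1 (rospy) client sketch. The service name and the use of
    # std_srvs/Trigger are assumptions for illustration; the paper's real
    # interface exchanges point-cloud data rather than a plain trigger.
    import rospy
    from std_srvs.srv import Trigger

    def main():
        rospy.init_node("hand_recognition_client")
        rospy.wait_for_service("/hand_recognition/run")   # block until the server is up
        run_recognition = rospy.ServiceProxy("/hand_recognition/run", Trigger)
        response = run_recognition()                      # remote call; may run on another machine
        rospy.loginfo("success=%s message=%s", response.success, response.message)

    if __name__ == "__main__":
        main()

Because ROS services are network-transparent, such a client can run on a different computer from the sensor node as long as both share the same ROS master, which is consistent with the highlight above.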

Summary

Introduction

The recognition and tracking processes of non-rigid 3D objects have been widely studied. The most common methods [4] of hand detection and gesture recognition involved skin segmentation techniques, edges, silhouettes, moments, etc., applied to 2D colour or greyscale images. In these cases, the main problem was the lighting conditions. Unlike the methods presented in [16][17], the proposed approach does not require a 3D hand model to be built in order to search for discrepancies between visual observations of a human hand and the model, nor does it need an initial reference position to find the human hand in the range image.
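The 2D methods cited in the introduction typically start from skin-colour segmentation, whose main weakness, as noted, is sensitivity to lighting. A minimal sketch of that classical approach, assuming OpenCV and an HSV threshold range chosen purely for illustration, follows; it is not the segmentation used in the paper.

    import cv2
    import numpy as np

    def skin_mask_hsv(bgr_image):
        """Return a binary skin mask using a fixed HSV threshold.

        The threshold values below are illustrative only; fixed ranges like
        these are exactly what makes colour-based segmentation fragile when
        the ambient light changes, which motivates the 3D approach.
        """
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)
        upper = np.array([25, 255, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Remove small speckles with a morphological opening
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    if __name__ == "__main__":
        frame = cv2.imread("hand.png")          # any BGR image containing a hand
        if frame is not None:
            cv2.imwrite("skin_mask.png", skin_mask_hsv(frame))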

Implemented Architecture
The recognition and detection process
The interaction process
Human Hand Recognition Process
Filtered and sampled
Skin detection
Hand descriptor and tracker
Robotic hand vs human hand model
Communication with the robotic hand
Recognition
Interaction and remote operation
Conclusions
