Abstract

Hand gesture recognition is a form of communication in which bodily behavior is used to transmit messages. This paper aims to detect hand gestures with a mobile device camera and to create a custom dataset used to train a deep learning model that recognizes hand gestures. A real-time approach was used for all of these objectives: the first step is hand-area detection; the second step is storing the hand area in dataset form for future model training. A framework for human interaction was put in place by analyzing the pictures recorded by the camera. The RGB color-space image was converted to greyscale, and a blurring method was applied to remove object noise efficiently. A thresholding method was then used to highlight the edges and curves of the hand, and complex-background subtraction was applied to detect moving objects in front of a static camera. The objectives of the paper proved reliable and favorable, helping deaf and mute people interact with their environment through sign language by extracting hand movements. Python was used as the programming language for detecting hand gestures. This work provides an efficient hand gesture detection process to address the problem of framing from real-time video.

Highlights

  • Hand gesture recognition is an active field of computer vision research

  • The system belongs to human-computer interaction (HCI) techniques that enable users to interact with the system without difficulty

  • For communication between humans and machines [2], gesture extraction systems may be used as a beneficial Human-Machine Interface (HMI)


Summary

INTRODUCTION

Hand gesture recognition is an active field of computer vision research; it offers interaction with devices without the use of additional hardware [1]. For communication between humans and machines [2], gesture extraction systems may be used as a beneficial Human-Machine Interface (HMI). Such an interface would allow a human user to remotely control a large variety of devices using hand postures. The proposed framework, introduced in this paper, allows users to communicate with machines through hand postures, with the device operating under different backgrounds and lighting conditions. This paper proposes an improved approach in which a background picture is captured at the beginning and then subtracted from each subsequent picture to detect the region of interest, which makes gestures easier to detect. Such input methods are user-friendly, accessible, and easy to learn, and are very useful in the fields of video analysis, video description, video editing, and animation.

RELATED WORK
PROPOSED FRAMEWORK METHODOLOGY
Image Pre-processing
Image segmentation
Simple Background
Complex background
1) Background frame capture & Frame differencing
Dataset Building
DATASET COMPARISON WITH OTHER
RESULT
CONCLUSION