Abstract

Pixel-based images captured by a charge-coupled device (CCD) equipped with infrared (IR) LEDs around the image sensor are commonly known as CCD Red–Green–Blue IR (CCD RGB-IR) data. CCD RGB-IR data are generally acquired for video surveillance applications, and have recently also been used for human gesture recognition in surveillance. Gesture recognition, including hand gesture intention recognition, is attracting great attention in the field of deep neural network (DNN) research. To further enhance conventional CCD RGB-IR gesture recognition by DNNs, this work proposes a deep learning framework in which a convolutional neural network (CNN) is combined with wavelet image fusion of CCD RGB-IR images and additional depth-grayscale images (captured by the depth sensor of the well-known Microsoft Kinect device). In the proposed CNN with wavelet image fusion, a five-level discrete wavelet transform (DWT) with three different wavelet decomposition merge strategies, namely max-min, min-max and mean-mean, is employed, and the Visual Geometry Group (VGG)-16 CNN is used for deep learning and recognition of the wavelet-fused gesture images. Experiments on the classification of ten hand gesture intention actions (specified in a scenario of laboratory interactions) show that additionally incorporating depth-grayscale data into CCD RGB-IR gesture recognition raises the average recognition accuracy to 83.88% for the VGG-16 CNN with min-max wavelet image fusion of the CCD RGB-IR and depth-grayscale data, clearly superior to the 75.33% achieved by the VGG-16 CNN with CCD RGB-IR data alone.
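As a concrete illustration of the wavelet-fusion idea, the sketch below fuses two equal-sized grayscale images with a one-level 2D Haar transform (standing in for the paper's five-level DWT) under one plausible reading of the "min-max" strategy: element-wise minimum over approximation coefficients and element-wise maximum over detail coefficients. The function names and the exact merge rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar decomposition: returns (LL, (LH, HL, HH))."""
    lo = (x[0::2, :] + x[1::2, :]) / 2.0   # row-pair averages
    hi = (x[0::2, :] - x[1::2, :]) / 2.0   # row-pair differences
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, details):
    """Inverse of haar_dwt2, reconstructing the original image."""
    lh, hl, hh = details
    lo = np.empty((ll.shape[0], ll.shape[1] * 2))
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    x = np.empty((lo.shape[0] * 2, lo.shape[1]))
    x[0::2, :], x[1::2, :] = lo + hi, lo - hi
    return x

def fuse_min_max(img_a, img_b):
    """Fuse two grayscale images via an assumed "min-max" wavelet merge:
    min over the approximation (LL) band, max over the detail
    (LH/HL/HH) bands, followed by the inverse transform."""
    ll_a, (lh_a, hl_a, hh_a) = haar_dwt2(img_a)
    ll_b, (lh_b, hl_b, hh_b) = haar_dwt2(img_b)
    ll = np.minimum(ll_a, ll_b)
    details = (np.maximum(lh_a, lh_b),
               np.maximum(hl_a, hl_b),
               np.maximum(hh_a, hh_b))
    return haar_idwt2(ll, details)
```

In the paper's pipeline the fused image (e.g., of an RGB-IR frame and the co-registered depth-grayscale frame) would then be fed to the VGG-16 CNN as an ordinary input image.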

Highlights

  • Human activity recognition [1], which belongs to the categorization of behavior cognition, has been paid much attention in recent years

  • Experiments on the classification of ten hand gesture intention actions show that incorporating depth-grayscale data into charge-coupled device (CCD) RGB-IR gesture recognition raises the average recognition accuracy to 83.88% for the Visual Geometry Group (VGG)-16 convolutional neural network (CNN) with min-max wavelet image fusion of the CCD RGB-IR and depth-grayscale data, clearly superior to the 75.33% of VGG-16

  • As mentioned before, such inverse discrete wavelet transform (IDWT)-decoded images generated from max-min, min-max or mean-mean wavelet image fusion



Introduction

Human activity recognition [1], which belongs to the category of behavior cognition, has received much attention in recent years. Compared with studies that fuse visible (VIS) and IR images in CCD camera-based surveillance applications with human activity recognition, gesture recognition using a fusion of CCD RGB-IR and depth-grayscale images, as in the presented approach, is much more competitive, especially in adverse conditions such as darkness or low illumination. Compared with IR thermal image-based approaches for overcoming the problem of gesture recognition under low light, the presented approach is also more advantageous and acceptable given the costs of sensor deployment.

Typical VGG-16 CNN Deep Learning on Recognition
Result
Hand Gesture Intention Recognition by Presented VGG-16 CNN Deep Learning
Experiments
Findings
Conclusions
