Abstract
In this paper, we present our investigation of 2D Hand Gesture Recognition (HGR) as a means of controlling an Automated Guided Vehicle (AGV). In real conditions, we must cope with, among other things, a complex background, changing lighting conditions, and varying distances between the operator and the AGV. For this reason, we describe in the article the database of 2D images created during the research. We tested classic algorithms, ResNet50 and MobileNetV2 models that we modified and partially retrained using a transfer learning approach, and a simple and effective Convolutional Neural Network (CNN) that we propose. As part of our work, we used a closed engineering environment for rapid prototyping of vision algorithms, Adaptive Vision Studio (AVS, currently Zebra Aurora Vision), as well as an open Python programming environment. In addition, we briefly discuss the results of preliminary work on 3D HGR, which seems very promising for future work. The results show that, in our case, from the point of view of implementing gesture recognition methods in AGVs, better results can be expected from RGB images than from grayscale ones. Using 3D imaging and a depth map may also improve the results.
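The transfer learning setup mentioned above can be sketched as follows. This is a minimal illustration in Keras, not the authors' actual implementation: the number of gesture classes (`NUM_GESTURES`), the input resolution, and the classification head are all assumptions for the sake of the example, and `weights=None` is used only to keep the sketch self-contained (in practice, pretrained ImageNet weights would be loaded so that only the new head needs retraining).

```python
import tensorflow as tf

NUM_GESTURES = 6  # hypothetical number of gesture classes; not fixed by the abstract

# Load MobileNetV2 without its ImageNet classification head.
# (weights=None keeps the sketch offline; transfer learning would
# normally use weights="imagenet" to start from pretrained filters.)
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),  # RGB input, in line with the finding that RGB beats grayscale
    include_top=False,
    weights=None,
)
base.trainable = False  # freeze the backbone; only the new head is trained

# Attach a small classification head for the gesture classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the backbone and retraining only the head is the "partial retraining" pattern the abstract refers to; unfreezing the last few backbone layers for fine-tuning is a common follow-up step.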