Abstract

Agricultural machinery experiments are restricted by the crop production season; missing a crop growth cycle extends the machine development period. Using virtual reality technology to complete preassembly and preliminary experiments can reduce the losses this causes. To improve the intelligence and stability of virtual assembly, this paper proposed a more stable dynamic gesture recognition framework: the TCP/IP protocol constituted the network communication terminal, a Leap Motion-based vision system constituted the gesture data collection terminal, and a CNN-LSTM network constituted the dynamic gesture recognition and classification terminal. The dynamic gesture recognition framework and the harvester virtual assembly platform together formed a virtual assembly system supporting gesture interaction. Experimental analysis showed that the improved CNN-LSTM network was compact and could quickly establish a stable and accurate gesture recognition model, with an average accuracy of 98.0% (±0.894). The assembly efficiency of the virtual assembly system using the framework improved by approximately 15%. The results showed that the accuracy and stability of the model met the requirements, that the corresponding assembly parts behaved robustly in the virtual simulation environment of the whole machine, and that the harvesting behaviour in the virtual reality scene was close to the real scene. The virtual assembly system under this framework provides technical support for unmanned farms and virtual experiments on agricultural machinery.
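As an illustration of the network communication terminal described above, here is a minimal Python sketch, assuming the recognition terminal pushes one classification label per line to the assembly platform over a plain TCP socket; the address, port, message format, and helper name are our assumptions, not details from the paper.

```python
# Minimal sketch of the network communication terminal (assumption: the
# gesture recognition terminal sends one label per line to the Unity3D
# assembly platform over TCP). Port and message format are illustrative.
import socket

HOST, PORT = "127.0.0.1", 9000  # hypothetical address of the platform listener

def send_gesture_label(label: str) -> None:
    """Send a single recognized gesture label to the assembly platform."""
    with socket.create_connection((HOST, PORT)) as conn:
        conn.sendall((label + "\n").encode("utf-8"))

if __name__ == "__main__":
    send_gesture_label("grab")  # e.g. a recognized 'grab' gesture
```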

Highlights

  • The construction of an agricultural machinery virtual assembly system using dynamic gesture recognition interaction based on a convolutional neural network (CNN)-long short-term memory (LSTM) network was conducive to the error analysis of virtual simulation experiments and had practical importance for the design and simulation of agricultural machinery. Therefore, this paper proposed a more stable dynamic gesture recognition framework: the TCP/IP protocol constituted the network communication terminal, a Leap Motion-based vision system constituted the gesture data collection terminal, and a CNN-LSTM network constituted the dynamic gesture recognition and classification terminal. The dynamic gesture recognition framework and the harvester virtual assembly platform formed a virtual assembly system to complete gesture interaction.

  • We first used a CNN to extract feature vectors, constructed the feature vectors into a time-series sequence, and used them as the input data for an LSTM network. Then, we used the LSTM network to process gesture classification and obtain the gesture classification information (see the sketch after this list).

  • Training in the CNN-LSTM network: after a picture carrying the hand motion information was processed by the input layer, the pixel information of the picture was formed and fed into the convolutional layer, and the different features of the input image were extracted by the convolutional layer.
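The highlights above describe the recognition pipeline: a CNN extracts a feature vector from each frame, the vectors are ordered as a time series, and an LSTM classifies the sequence. Below is a minimal PyTorch sketch of that structure; the layer sizes, input resolution, and number of gesture classes are illustrative assumptions, since the exact architecture is not given here.

```python
# Hedged sketch of a CNN-LSTM gesture classifier: a small CNN extracts a
# feature vector per frame, the vectors are stacked into a time series,
# and an LSTM maps the sequence to a gesture class. All sizes are assumed.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes: int = 8, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Per-frame feature extractor (assumed single-channel 64x64 input).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1, 64, 64) -- a short clip of hand frames.
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(feats)                          # temporal modelling
        return self.head(out[:, -1])                       # last step -> logits

# Example: classify a batch of two random 16-frame clips.
logits = CNNLSTM()(torch.randn(2, 16, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 8])
```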


Summary

Virtual Assembly System Architecture and Principles

We used the harvester of an enterprise as an example to construct a virtual assembly system: Unity3D was used to build the virtual assembly platform, a dynamic gesture cognitive interaction system was built on the CNN-LSTM algorithm, and Leap Motion was used to obtain real-world hand command and position information. The system consisted of a virtual assembly platform and a dynamic visual recognition system. The virtual gesture controller perceived the gesture information and worked with the equipment object controller to complete the virtual assembly. The main principle of the dynamic gesture recognition vision system was to use Leap Motion to perceive the posture and depth data of external gestures as input. These two kinds of information cooperated with the object controller and gesture controller of the virtual assembly platform to complete the virtual assembly process, as sketched below.
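To make the controller cooperation concrete, the following Python sketch mimics the described control flow. The actual platform is Unity3D, so the real controllers would be C# components; every class, part, and command name here is hypothetical.

```python
# Illustrative sketch of gesture controller / object controller cooperation:
# Leap Motion supplies a posture label and a palm position, the gesture
# controller maps the posture to a command, and the object controller
# applies the command to the selected assembly part. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    position: tuple[float, float, float]
    held: bool = False

class ObjectController:
    def apply(self, part: Part, command: str,
              palm_pos: tuple[float, float, float]) -> None:
        if command == "grab":
            part.held = True
        elif command == "move" and part.held:
            part.position = palm_pos      # the part follows the hand
        elif command == "release":
            part.held = False

class GestureController:
    COMMANDS = {"fist": "grab", "open_palm": "release", "point": "move"}
    def interpret(self, posture: str) -> str | None:
        return self.COMMANDS.get(posture)

# One interaction step: a recognized 'fist' posture grabs the cutter bar.
part = Part("cutter_bar", (0.0, 0.0, 0.0))
cmd = GestureController().interpret("fist")
if cmd:
    ObjectController().apply(part, cmd, (0.1, 0.2, 0.3))
print(part)
```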

Construction of Dynamic Gesture Recognition System
Perspective Controller
CNN-LSTM Gesture Recognition Algorithm Performance Analysis
Conclusions