Abstract

To investigate the recognition of flag motions based on 9-axis sensors, this paper starts from deep learning methods and proposes a framework for feature extraction and recognition of flag movements that employs an improved Inception-ResNet dual-stream network. In traditional signal recognition studies, Support Vector Machines (SVM), Random Forests (RF) and one-dimensional Convolutional Neural Networks (CNN) are usually used to extract signal features. Meanwhile, time-series datasets such as flag movements are usually standard datasets obtained after preprocessing. Traditional systems therefore have some limitations. First, in real environments there is no effective segmentation detection method for long time-series samples, which introduces bias into the dataset during recognition. Second, the one-dimensional CNN framework and the machine learning frameworks used in previous studies have difficulty processing large quantities of data because of their excessive computational memory requirements. To address these problems, this study proposes a signal change-point detection model based on a diversity factor function for the signal segmentation and detection stage, miniaturizes the convolution kernels of the original CNN by using the separable convolution method of an Inception-ResNet (I-R) dual-stream network, and proposes a CrossEntropy-Logistic (C-L) joint classification loss function. Comparative experiments show that the average parameter count of the CNN framework based on the Inception-ResNet model is 2.7×10⁷, approximately 27% lower than the 3.7×10⁷ parameters of the original CNN model. Finally, the recognition rate of the C-L joint loss function is compared with recent models such as the Multi-Layer Perceptron and Ensemble Learning. Compared with the Ensemble Learning-CrossEntropy (ELC) model, the C-L joint loss function improves the recognition rate by nearly 5% according to the flag movement identification results measured with several classification models.
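To make the two architectural ideas in the abstract concrete, the sketch below contrasts a standard 1-D convolution with a depthwise-separable one (the kernel-miniaturization idea behind the I-R dual-stream network) and shows one possible form of a CrossEntropy-Logistic joint loss. This is a minimal sketch, not the paper's implementation: the channel counts, kernel size, class count and the mixing weight `alpha` are illustrative assumptions, and the exact C-L formulation is not given in this summary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def count_params(module: nn.Module) -> int:
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad)


# Standard 1-D convolution: every output channel mixes all input channels.
standard_conv = nn.Conv1d(in_channels=64, out_channels=128, kernel_size=9, padding=4)

# Depthwise-separable 1-D convolution: a per-channel (depthwise) convolution
# followed by a 1x1 pointwise convolution. Channel counts and kernel size
# here are illustrative, not the paper's actual configuration.
separable_conv = nn.Sequential(
    nn.Conv1d(64, 64, kernel_size=9, padding=4, groups=64),  # depthwise
    nn.Conv1d(64, 128, kernel_size=1),                       # pointwise
)

print(count_params(standard_conv))   # 64*128*9 + 128 = 73,856
print(count_params(separable_conv))  # (64*9 + 64) + (64*128 + 128) = 8,960


class CrossEntropyLogisticLoss(nn.Module):
    """One possible form of a CrossEntropy-Logistic (C-L) joint loss: softmax
    cross-entropy plus a logistic (sigmoid) binary cross-entropy term over
    one-hot targets, mixed by a weight `alpha`. The weighted-sum form is an
    assumption used only for illustration."""

    def __init__(self, num_classes: int, alpha: float = 0.5):
        super().__init__()
        self.num_classes = num_classes
        self.alpha = alpha

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets)
        one_hot = F.one_hot(targets, self.num_classes).float()
        logistic = F.binary_cross_entropy_with_logits(logits, one_hot)
        return self.alpha * ce + (1.0 - self.alpha) * logistic


# Example: 8 hypothetical flag-movement classes, a batch of 4 windows.
loss_fn = CrossEntropyLogisticLoss(num_classes=8, alpha=0.5)
logits = torch.randn(4, 8)
targets = torch.randint(0, 8, (4,))
print(loss_fn(logits, targets))
```

The parameter counts printed above illustrate the kind of reduction the abstract reports at network scale: the separable block reaches the same output shape with roughly an order of magnitude fewer weights in this toy configuration.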

Highlights

  • Flag movement technology is a command technology with wide application in traffic, ship navigation and engineering fields [1]

  • The traditional methods of flag gesture acquisition include the first generation of human flag gesture acquisition technology based on optical fiber equipment and the second generation of flag gesture image acquisition and recognition technology based on computer vision

  • To conclude, in the present study the flag signal acquisition and classification learning system is improved in three aspects: signal segmentation detection, the feature extraction framework, and classification recognition (a sketch of the segmentation step follows this list)

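As a rough illustration of the segmentation-detection aspect listed above, the sketch below finds candidate change points in a long 9-axis recording with a sliding-window statistic. The paper's diversity factor function is not specified in this summary, so a per-axis standardized mean-shift score between adjacent windows stands in for it; the `window` and `threshold` values are assumptions.

```python
import numpy as np


def detect_change_points(signal, window=50, threshold=2.5):
    """Sliding-window change-point detection for a long 9-axis recording.

    signal: array of shape (T, 9) -- T samples of accelerometer, gyroscope
    and magnetometer readings. Returns candidate boundary indices that split
    the recording into movement segments.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    scores = np.zeros(n)
    for t in range(window, n - window):
        left, right = signal[t - window:t], signal[t:t + window]
        pooled_std = np.sqrt(0.5 * (left.var(axis=0) + right.var(axis=0))) + 1e-8
        # Standardized shift of the mean, averaged over the 9 axes.
        scores[t] = np.mean(np.abs(left.mean(axis=0) - right.mean(axis=0)) / pooled_std)
    # Keep local maxima of the score that exceed the threshold.
    return [t for t in range(window, n - window)
            if scores[t] > threshold
            and scores[t] == scores[t - window:t + window].max()]


# Example: a synthetic 9-axis stream whose statistics change at sample 500.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 1.0, (500, 9)),
                         rng.normal(2.0, 1.0, (500, 9))])
print(detect_change_points(stream))  # expected to report a boundary near 500
```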


Introduction

Flag movement technology is a command technology with wide application in traffic, ship navigation and engineering fields [1]. Research on motion data acquisition methods and recognition network frameworks is currently the key focus. The traditional methods of flag gesture acquisition include the first generation of human flag gesture acquisition technology based on optical-fiber equipment and the second generation of flag gesture image acquisition and recognition technology based on computer vision. Traditional flag-signal movement recognition methods place high demands on data acquisition and equipment: the flag-signal commander cannot move around the field or change body position. Sensor-based motion recognition, by contrast, has been widely used in medical treatment and human health state detection.


