Abstract

Human activity recognition is an important problem in pedestrian navigation and intelligent prosthetic control. Miniature multi-sensor wearable networks offer a reliable way to improve the efficiency and convenience of recognition systems, and effective feature extraction and fusion of multimodal signals are key to recognition performance. This paper therefore proposes an enhanced data-preprocessing algorithm based on PCA sensor coupling analysis. An innovative two-channel convolutional neural network is then built around an SPF feature fusion layer. The network analyzes the local and global features of multimodal signals by exploiting the local contrast and luminance properties of feature images. Compared with traditional methods, the model reduces data dimensionality and automatically identifies and fuses the key information in the signals. Moreover, whereas most current mode-recognition work supports only simple actions such as walking and running, this paper constructs a database of sixteen states using a network of inertial measurement units (IMU), curvature sensors (FLEX), and electromyography sensors (EMG). Experimental results show that the proposed system performs better on complex action recognition and provides a new scheme for feature fusion and enhancement.
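The abstract does not specify the details of the PCA-based preprocessing step, but the core idea of PCA dimensionality reduction on multimodal sensor feature vectors can be sketched as follows. This is a minimal, generic illustration, not the paper's exact "sensor coupling analysis"; the channel counts (6 IMU, 5 FLEX, 4 EMG) and window count are hypothetical assumptions.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors X (n_samples, n_features) onto the
    top principal components, computed via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal axes, ordered by explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Hypothetical multimodal windows: 200 windows, each summarized by
# 6 IMU + 5 FLEX + 4 EMG = 15 per-channel features
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 15))
Z = pca_reduce(X, n_components=8)
print(Z.shape)  # (200, 8)
```

In a pipeline like the one described, such reduced feature vectors would then be arranged into feature images and fed to the two-channel CNN for fusion and classification.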
