Abstract

Obtaining robust feature representations from multi-position wearable sensor data is challenging in human activity recognition (HAR), since data from different positions can have unordered implicit correlations. Graph neural networks (GNNs) represent data as structured graphs, mining complex relationships and interdependencies via message passing between graph nodes. This paper proposes a novel framework (MhaGNN) that combines GNNs and the multi-head attention mechanism, aiming to learn more informative representations for multi-position HAR tasks. The MhaGNN framework takes the sensor channels from multiple wearing positions as nodes to construct graph-structured data along the spatial dimension. In addition, the multi-head attention mechanism completes the message passing and aggregation of the graphs for spatial-temporal feature extraction. MhaGNN learns correlations among sensor channels that serve as compensatory features alongside the features captured from each single sensor channel to enhance HAR. Experimental evaluations on three publicly available HAR datasets (PAMAP2, OPPORTUNITY, and MHEALTH) and a ground-truth dataset (MPWHAR) demonstrate that the proposed MhaGNN achieves state-of-the-art recognition performance with the captured rich features.
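As a rough illustration of the idea described in the abstract, the following minimal PyTorch sketch treats sensor channels as graph nodes and uses multi-head attention as the message-passing and aggregation operator over a fully connected channel graph. This is not the authors' implementation; the module name, feature dimension, window length, channel count, and class count below are hypothetical assumptions for illustration only.

```python
import torch
import torch.nn as nn

class MhaGNNSketch(nn.Module):
    """Hypothetical sketch of the MhaGNN idea, not the authors' code.

    Nodes are sensor channels from all wearing positions. Multi-head
    attention performs message passing over the fully connected channel
    graph; the aggregated messages act as compensatory cross-channel
    features added to each channel's own features.
    """

    def __init__(self, num_channels, window_len, dim=64, heads=4, classes=12):
        super().__init__()
        # Per-channel temporal encoder: one sensor window -> one node embedding.
        self.embed = nn.Linear(window_len, dim)
        # Multi-head attention as learned message passing between nodes.
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.classifier = nn.Linear(num_channels * dim, classes)

    def forward(self, x):
        # x: (batch, num_channels, window_len) raw sensor windows
        nodes = self.embed(x)                    # (B, C, dim) node features
        msgs, _ = self.mha(nodes, nodes, nodes)  # cross-channel messages
        nodes = self.norm(nodes + msgs)          # aggregate with residual
        return self.classifier(nodes.flatten(1))  # activity logits

# Hypothetical usage: 24 channels (e.g. wrist/chest/ankle IMU axes),
# 128-sample sliding windows, batch of 8.
model = MhaGNNSketch(num_channels=24, window_len=128)
logits = model(torch.randn(8, 24, 128))  # shape: (8, 12)
```

Here the residual connection keeps each channel's own representation intact while the attention output supplies the compensatory cross-channel features; the paper's actual layer design and aggregation scheme may differ.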
