Abstract

Human Activity Recognition (HAR) based on wearable devices is a long-standing research topic in health applications, human–object interaction, and smart homes. Despite the significant improvements achieved by convolutional neural networks, long short-term memory networks, transformer networks, and various hybrid models, two fundamental issues remain. First, the spatial–temporal dependencies of sensor signals are difficult to model effectively. Second, in multimodal settings, sensors placed at different body positions contribute unequally to the classification result. In this work, we propose a self-attention-based Two-stream Transformer Network (TTN). To address the first issue, we use two streams, a temporal stream and a spatial stream, to extract readings-over-time and time-over-readings features from the sensor signals. The features extracted by the two streams are complementary, since the time-over-readings features express additional information that cannot be captured from the sensor signals directly. To address the second issue, we assign an attention weight to each sensor axis in the spatial stream based on its classification score, so that axis readings with distinct recognition contributions caused by data heterogeneity are treated unequally rather than uniformly. Extensive experiments on four public benchmark datasets (PAMAP2, Opportunity, USC-HAD, and Skoda) show that the proposed model is better suited to multimodal HAR than previous state-of-the-art methods.
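The two-stream idea described above can be illustrated with a minimal sketch: one transformer encoder attends over time steps (readings-over-time), a second attends over the transposed window, i.e. over sensor axes (time-over-readings), and a per-axis attention weighting is applied before fusing the two representations. This is only an illustrative PyTorch-style sketch under assumed dimensions and a hypothetical fusion/scoring head (`axis_score`), not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoStreamSketch(nn.Module):
    """Illustrative two-stream sketch: temporal stream + spatial (axis) stream."""
    def __init__(self, num_axes=36, seq_len=128, d_model=64, num_classes=12):
        super().__init__()
        # Temporal stream: self-attention over time steps (readings-over-time).
        self.temporal_proj = nn.Linear(num_axes, d_model)
        self.temporal_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        # Spatial stream: self-attention over sensor axes (time-over-readings),
        # i.e. the transposed view of the same window.
        self.spatial_proj = nn.Linear(seq_len, d_model)
        self.spatial_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        # Hypothetical scoring head producing one attention weight per sensor axis.
        self.axis_score = nn.Linear(d_model, 1)
        self.classifier = nn.Linear(2 * d_model, num_classes)

    def forward(self, x):                                   # x: (batch, seq_len, num_axes)
        t = self.temporal_enc(self.temporal_proj(x)).mean(dim=1)      # (batch, d_model)
        s = self.spatial_enc(self.spatial_proj(x.transpose(1, 2)))    # (batch, num_axes, d_model)
        w = torch.softmax(self.axis_score(s), dim=1)                  # weight each axis
        s = (w * s).sum(dim=1)                                        # (batch, d_model)
        return self.classifier(torch.cat([t, s], dim=-1))             # fused prediction

# Usage: logits = TwoStreamSketch()(torch.randn(8, 128, 36))
```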
