Abstract

With continued improvements in wireless sensing technology, the Internet of Things (IoT) has been widely adopted and has become pervasive owing to its broad applications in scenarios such as ambient assisted living, smart healthcare, and smart homes. In that regard, human activity recognition (HAR) is a vital element of intelligent systems for continuous monitoring of human behavior. Given the ubiquity of smartphones in everyday life, smartphone inertial sensors are used as the case study for this research. Most conventional approaches regard HAR as a time-series classification problem, yet recognition accuracy degrades when the data come from heterogeneous sensors. In this article, we investigate encoding heterogeneous HAR (HHAR) sensor data into a three-channel (i.e., RGB) image representation, and hence treat the HHAR task as an image classification problem. Since existing convolutional network models are computationally heavy to deploy in IoT environments, we propose a lightweight model for image-encoded HHAR, called multiscale image-encoded HHAR (MS-IE-HHAR). The model's main architecture consists of a hierarchical multiscale extraction (HME) module followed by an improved spatialwise and channelwise attention (ISCA) module. The HME module is formed by a group of residually connected shuffle group convolutions (SG-Conv) that extract and learn image representations from different receptive fields while reducing the number of network parameters. The ISCA module combines a lightweight spatialwise attention (SwA) block and an improved channelwise attention (CwA) block to let the network attend to spatial correlations as well as channel interdependencies. Finally, two widely available public HHAR data sets (i.e., UCI HHAR and MHEALTH) were used to evaluate the proposed model, which achieved accuracies above 98% and 99%, respectively, demonstrating its superiority for modeling HAR from heterogeneous data sources.
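
Implementation Sketch

The abstract describes the pipeline at a high level: tri-axial inertial signals are encoded as three-channel (RGB) images, a hierarchical multiscale extraction (HME) module built from residually connected shuffle group convolutions (SG-Conv) learns features at several receptive fields, and an improved spatialwise and channelwise attention (ISCA) module reweights those features before classification. Since the full text is not available here, the following is a minimal PyTorch sketch of those ideas, not the authors' implementation: the exact RGB-encoding scheme, the names `encode_rgb` and `MSIEHHARSketch`, and all channel sizes, kernel sizes, and group counts are illustrative assumptions.

```python
# Minimal sketch of the ideas in the abstract (NOT the paper's implementation).
# All hyperparameters and the RGB-encoding scheme are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def encode_rgb(window: torch.Tensor) -> torch.Tensor:
    """Encode a (3, T) tri-axial sensor window as a 3-channel 'image'.

    Hypothetical scheme: each axis (x, y, z) is min-max normalized and
    tiled into one of the R/G/B channels of a T x T image.
    """
    lo = window.min(dim=1, keepdim=True).values
    hi = window.max(dim=1, keepdim=True).values
    norm = (window - lo) / (hi - lo + 1e-8)             # (3, T) in [0, 1]
    t = window.shape[1]
    return norm.unsqueeze(1).expand(3, t, t).contiguous()


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """ShuffleNet-style channel shuffle so group convolutions mix information."""
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2).reshape(b, c, h, w))


class SGConv(nn.Module):
    """Residually connected shuffle group convolution (one HME branch)."""
    def __init__(self, channels: int, kernel_size: int, groups: int = 4):
        super().__init__()
        self.groups = groups
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=groups)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = self.bn(self.conv(channel_shuffle(x, self.groups)))
        return F.relu(out + x)                          # residual connection


class ISCA(nn.Module):
    """Spatialwise + channelwise attention (an SE/CBAM-like approximation)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel = nn.Sequential(                   # channelwise (SE-style)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial = nn.Conv2d(2, 1, 7, padding=3)    # spatialwise

    def forward(self, x):
        x = x * self.channel(x).unsqueeze(-1).unsqueeze(-1)
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.max(1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))


class MSIEHHARSketch(nn.Module):
    """Stem -> parallel multiscale SG-Conv branches (HME) -> ISCA -> classifier."""
    def __init__(self, num_classes: int = 6, channels: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        # Multiscale extraction: branches with different receptive fields.
        self.branches = nn.ModuleList([SGConv(channels, k) for k in (3, 5, 7)])
        self.attn = ISCA(channels * 3)
        self.head = nn.Linear(channels * 3, num_classes)

    def forward(self, x):
        x = F.relu(self.stem(x))
        x = torch.cat([b(x) for b in self.branches], dim=1)
        x = self.attn(x)
        return self.head(x.mean(dim=(2, 3)))            # global average pool


if __name__ == "__main__":
    model = MSIEHHARSketch(num_classes=6)
    img = encode_rgb(torch.randn(3, 64)).unsqueeze(0)   # one 64-sample window
    print(model(img).shape)                             # torch.Size([1, 6])
```

The grouped convolutions plus channel shuffle keep the parameter count low while still mixing information across channel groups, which is consistent with the lightweight, IoT-oriented goal stated in the abstract; a faithful reproduction would require the paper's actual encoding scheme and hyperparameters.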
