Abstract

To facilitate data-driven and informed decision making, this work proposes a novel deep neural network architecture for human activity recognition based on multi-sensor data. Specifically, the proposed architecture encodes each sensor time series as an image (i.e., one time series is transformed into a two-channel image), and these transformed images retain the features necessary for human activity recognition. In other words, by imaging the time series, wearable sensor-based human activity recognition can be carried out with computer vision techniques for image recognition. In particular, to enable heterogeneous sensor data to be trained cooperatively, a fusion residual network is adopted: two networks are fused and the heterogeneous data are trained with pixel-wise correspondence. Moreover, residual networks of different depths are used to accommodate differences in dataset size. The proposed architecture is evaluated extensively on two human activity recognition datasets (HHAR and MHEALTH), which comprise heterogeneous combinations of mobile device sensors (acceleration, angular velocity, and magnetic field orientation). The results show that the proposed approach outperforms competing approaches in terms of accuracy and F1-score.
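The abstract does not name the imaging transform. One common way to encode a single time series as a two-channel image, consistent with the description above, is the Gramian Angular Field, stacking the summation field (GASF) and the difference field (GADF) as the two channels. The sketch below is a minimal NumPy illustration under that assumption, not the authors' reference implementation.

```python
import numpy as np

def gaf_two_channel(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D time series as a two-channel Gramian Angular Field image.

    Channel 0: summation field (GASF); channel 1: difference field (GADF).
    NOTE: assumed encoding; the paper's abstract only states that one time
    series becomes one two-channel image.
    """
    x = np.asarray(x, dtype=np.float64)
    # Rescale to [-1, 1] so the polar-angle mapping (arccos) is well defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))        # angular representation
    gasf = np.cos(phi[:, None] + phi[None, :])    # cos(phi_i + phi_j)
    gadf = np.sin(phi[:, None] - phi[None, :])    # sin(phi_i - phi_j)
    return np.stack([gasf, gadf], axis=0)         # shape: (2, n, n)
```

For a sliding window of n sensor samples, this yields a (2, n, n) array that can be fed to a standard image classifier.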

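Likewise, the fusion residual network is described only at a high level. A plausible reading, again an assumption rather than the paper's exact design, is two ResNet trunks, one per sensor modality, whose same-shaped feature maps are fused pixel-wise (element-wise addition) before a shared classification head. A PyTorch sketch:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FusionResNet(nn.Module):
    """Two-stream fusion sketch: each modality gets its own ResNet trunk,
    and the feature maps are fused pixel-wise before a shared classifier."""

    def __init__(self, num_classes: int):
        super().__init__()

        def trunk() -> nn.Sequential:
            m = models.resnet18(weights=None)
            # Two-channel GAF images instead of RGB: swap the stem conv.
            m.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                padding=3, bias=False)
            return nn.Sequential(*list(m.children())[:-2])  # drop avgpool + fc

        self.stream_a = trunk()  # e.g., accelerometer images (assumed pairing)
        self.stream_b = trunk()  # e.g., gyroscope images
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, xa: torch.Tensor, xb: torch.Tensor) -> torch.Tensor:
        fa = self.stream_a(xa)   # (N, 512, H/32, W/32)
        fb = self.stream_b(xb)   # same shape: pixel-wise correspondence
        fused = fa + fb          # element-wise (pixel-wise) fusion
        return self.fc(self.pool(fused).flatten(1))
```

Element-wise addition is only one fusion choice; channel concatenation or weighted sums would preserve the same pixel-wise correspondence. Trunk depth (here resnet18) could be varied per dataset, matching the abstract's note that residual networks of different depths accommodate dataset size differences.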