Abstract

With the development of industrial and sensing technology, sensor-based activity recognition has become a promising technology for informatics applications. However, in a typical activity recognition pipeline, sensory-data segmentation is usually treated as a preprocessing step performed with sliding windows; it has rarely been investigated in its own right, yet it significantly affects recognition performance. In this article, we propose a novel deep-learning method to jointly segment and recognize activities with wearable sensors. Our contributions are threefold: First, we introduce a multistage temporal convolutional network for sample-level activity prediction to overcome the multiclass-window problem. Second, to alleviate oversegmentation errors, our model forms a multitask learning framework with a boundary prediction module that adjusts the gradients of the entire model. Third, we propose a novel boundary consistency loss that enforces consistency between the activity and boundary predictions. Our method shows impressive performance on three public datasets, notably achieving a 16% improvement in class-average F1-score over recent state-of-the-art competing methods on the Hospital dataset. The code of this work will be open-sourced at <uri xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">https://github.com/xspc/Segmentation-Sensor-based-HAR</uri>.
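The abstract does not give the exact form of the boundary consistency loss. As an illustration only, under the assumption that the model emits per-frame activity class probabilities and per-frame boundary probabilities, one plausible reading of "enforcing consistency" is to penalize changes between adjacent activity predictions at frames the boundary module considers non-boundaries (the function name and L1 formulation here are hypothetical, not taken from the paper):

```python
import numpy as np

def boundary_consistency_loss(act_probs, boundary_probs):
    """Hypothetical sketch of a boundary consistency term.

    act_probs: (T, C) array of per-frame activity class probabilities.
    boundary_probs: (T,) array of per-frame boundary probabilities.

    Penalizes L1 change between adjacent frames' activity distributions,
    down-weighted where the boundary module predicts a boundary, so the
    activity stream is only pushed to be smooth inside segments.
    """
    # L1 distance between consecutive per-frame class distributions: (T-1,)
    diff = np.abs(act_probs[1:] - act_probs[:-1]).sum(axis=1)
    # Weight each transition by how confidently it is NOT a boundary.
    non_boundary = 1.0 - boundary_probs[1:]
    return float((non_boundary * diff).mean())
```

With this formulation, the loss is zero both when predictions are constant within a segment and when every change coincides with a confidently predicted boundary, which matches the stated goal of coupling the two prediction heads.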

