Abstract

The development of body sensor networks (BSNs) with rich multimodal signals has enabled highly accurate fine-grained action detection, which is the cornerstone of many human–computer interaction applications. However, in the case of consecutive fine-grained actions, most existing wearable sensor-based detection methods are constrained by sliding windows because of their limited temporal receptive fields, and existing sequence-to-sequence detection methods cannot effectively leverage the multimodal information from wearable sensors. Herein, to fully exploit multimodal signals in fine-grained action detection, we propose a novel temporal convolutional network by designing a channel attention-based multistream structure. We apply it to a promising application: detecting correct and incorrect nursing actions during patient transfer. A data set is collected from a BSN worn by a patient while nurses perform patient transfer. Extensive experiments on our data set and a public data set (C-MHAD) demonstrate that the proposed method is superior to the state-of-the-art methods, because it can strengthen the utilization of prediction features from the more convincing modal stream at each time frame. The source code is available at https://github.com/zzh-tech/Continuous-Action-Detection.
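To make the architectural idea concrete, the sketch below shows one way a channel attention-based multistream temporal convolutional network can fuse per-modality sensor streams and produce per-frame predictions. This is a minimal PyTorch illustration under our own assumptions; all names (MultiStreamTCN, ChannelAttentionFusion, layer sizes) are hypothetical and do not reproduce the authors' implementation, which is available at the repository linked above.

```python
# Minimal sketch (PyTorch) of a multistream TCN with channel attention fusion.
# Hypothetical names and hyperparameters; not the authors' released code.
import torch
import torch.nn as nn


class ChannelAttentionFusion(nn.Module):
    """Reweights concatenated stream features, emphasizing the more informative channels."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); squeeze over time, excite per channel
        weights = self.fc(x.mean(dim=-1))      # (batch, channels)
        return x * weights.unsqueeze(-1)       # broadcast over time frames


class MultiStreamTCN(nn.Module):
    """One dilated temporal-conv stream per sensor modality, fused by channel attention."""

    def __init__(self, in_channels_per_stream, hidden: int = 64, num_classes: int = 10):
        super().__init__()
        self.streams = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(c, hidden, kernel_size=3, padding=1, dilation=1),
                nn.ReLU(inplace=True),
                nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
                nn.ReLU(inplace=True),
            )
            for c in in_channels_per_stream
        )
        fused = hidden * len(in_channels_per_stream)
        self.attention = ChannelAttentionFusion(fused)
        self.classifier = nn.Conv1d(fused, num_classes, kernel_size=1)

    def forward(self, inputs):
        # inputs: list of (batch, channels_i, time) tensors, one per modality
        feats = torch.cat([s(x) for s, x in zip(self.streams, inputs)], dim=1)
        feats = self.attention(feats)
        return self.classifier(feats)          # (batch, num_classes, time)


# Example: accelerometer (3 ch) and gyroscope (3 ch) streams, 200 time frames
model = MultiStreamTCN(in_channels_per_stream=[3, 3], num_classes=5)
scores = model([torch.randn(2, 3, 200), torch.randn(2, 3, 200)])
print(scores.shape)  # torch.Size([2, 5, 200])
```

The attention module here follows a squeeze-and-excitation pattern, one plausible way to let the network weight the more convincing modal stream at each stage; the paper's exact fusion mechanism may differ.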
