Abstract

With advances in spatial-temporal Internet of Things (IoT) technologies, human activity recognition (HAR) has come to play a major role in human safety and healthcare. Most recent work focuses on deep feature extraction for activities, offering promising alternatives to manually engineered features. However, extracting effective and distinguishable continuous-activity features while reducing the heavy dependence on labels remains a key challenge for HAR. This paper proposes a semi-supervised method for recognizing continuous human activities in multi-view IoT network scenarios. Our approach combines supervised activity feature extraction with unsupervised encoder-decoder modules to capture continuous-activity features from sensor data streams. Specifically, we apply a convolutional neural network (CNN) to capture the local dependence of sensor data and design an encoder-decoder architecture that extracts temporal features in an unsupervised manner. We then fuse these two sets of features to recognize activities and train the model with manual labels, thereby refining both the temporal feature extraction and the CNN module. Experiments on five public datasets demonstrate the strong performance of the proposed method, which achieves higher recognition accuracy on almost all of the datasets and is more robust and adaptable across datasets.
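
To make the described architecture concrete, the following is a minimal sketch of the general idea, assuming PyTorch, windowed sensor data of shape (batch, channels, time), and hypothetical module names such as `CNNBranch`, `SeqAutoencoder`, and `HARModel` that are not taken from the paper; it illustrates a CNN branch for local dependence, an encoder-decoder trained by reconstruction for temporal features, and a fused classifier trained with labels, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CNNBranch(nn.Module):
    """1-D CNN over windowed sensor streams (batch, channels, time)."""
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):                      # x: (B, C, T)
        return self.net(x).squeeze(-1)         # local feature: (B, feat_dim)

class SeqAutoencoder(nn.Module):
    """GRU encoder-decoder trained to reconstruct the window (unsupervised)."""
    def __init__(self, in_channels, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(in_channels, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, in_channels)

    def forward(self, x):                      # x: (B, C, T)
        seq = x.transpose(1, 2)                # (B, T, C)
        enc_out, h = self.encoder(seq)
        dec_out, _ = self.decoder(enc_out)
        recon = self.out(dec_out).transpose(1, 2)   # reconstruction: (B, C, T)
        return h[-1], recon                    # temporal feature: (B, hidden)

class HARModel(nn.Module):
    """Fuse local and temporal features, classify, and expose the reconstruction."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.cnn = CNNBranch(in_channels)
        self.ae = SeqAutoencoder(in_channels)
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, x):
        local = self.cnn(x)
        temporal, recon = self.ae(x)
        logits = self.head(torch.cat([local, temporal], dim=1))
        return logits, recon

# Joint objective (illustrative): cross-entropy on labelled windows plus a
# reconstruction loss, so labels refine features learned without supervision.
model = HARModel(in_channels=6, num_classes=6)
x = torch.randn(8, 6, 128)                    # toy batch of sensor windows
y = torch.randint(0, 6, (8,))
logits, recon = model(x)
loss = nn.functional.cross_entropy(logits, y) + nn.functional.mse_loss(recon, x)
loss.backward()
```

In a semi-supervised setting of this kind, the reconstruction term can be computed on all windows while the cross-entropy term is applied only to the labelled subset, which is one common way to reduce dependence on manual labels.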
