Abstract

Human activity recognition (HAR) based on ambient sensors aims to recognize an activity that has been conducted, and a large number of deep learning models (DLMs) for HAR have been proposed. In contrast, human activity prediction (HAP) aims to predict an activity before it occurs. Compared to HAR, the advantage of HAP is that early prediction can prevent a person from being exposed to unexpected situations. However, few DLMs for HAP have been proposed, and the existing ones predict the next activity in a non-end-to-end fashion: they take as input a sequence of consecutive activities that were already classified from the sensor information. As a result, the information about which sensors were activated is not used in the prediction. In this study, we propose an end-to-end HAP model that predicts the next activity from a sequence of consecutive sensor events. The model consists of an encoder, a regressor, and a classifier. The encoder produces an encoded vector for each event. The regressor learns the temporal dependency in a sequence of encoded vectors to predict the next encoded vector. The classifier then predicts the next activity from the predicted vector. We use the Milan and Aruba datasets to evaluate the prediction accuracy of the model, comparing it with a non-end-to-end model based on long short-term memory that takes a sequence of past activities. Our model improves prediction accuracy over the non-end-to-end model by up to 4.73% and 7.39% on Milan and Aruba, respectively, showing that event-level sensor information is useful for prediction.
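The encoder-regressor-classifier pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all dimensions are hypothetical, and plain linear maps with mean-pooling stand in for the learned encoder and the temporal regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
N_SENSORS = 16     # size of a binary sensor-event vector
D_ENC = 8          # size of an encoded vector
N_ACTIVITIES = 5   # number of activity classes
SEQ_LEN = 10       # number of consecutive events per input sequence

# Untrained stand-in weights for the three components.
W_enc = rng.standard_normal((N_SENSORS, D_ENC)) * 0.1       # encoder
W_reg = rng.standard_normal((D_ENC, D_ENC)) * 0.1           # regressor
W_cls = rng.standard_normal((D_ENC, N_ACTIVITIES)) * 0.1    # classifier

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_next_activity(events):
    """events: (SEQ_LEN, N_SENSORS) matrix of binary sensor events."""
    encoded = events @ W_enc                  # encoder: one vector per event
    next_vec = encoded.mean(axis=0) @ W_reg   # regressor: next encoded vector
    probs = softmax(next_vec @ W_cls)         # classifier: activity probabilities
    return int(probs.argmax()), probs

# Random sparse sensor activations as a stand-in for real event data.
events = (rng.random((SEQ_LEN, N_SENSORS)) < 0.1).astype(float)
label, probs = predict_next_activity(events)
print(label, probs.shape)
```

In the paper's end-to-end model, the regressor would be a sequence model trained to capture temporal dependency between encoded vectors, and all three components would be trained jointly on raw events rather than on pre-classified activities.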

