Abstract

Smartphone-based human activity recognition (HAR) provides valuable guidance for telemedicine and clinical treatment. The continually growing set of daily activities makes recognition and labeling difficult. Although multimodal data fusion and artificial intelligence (AI) techniques can address these problems, large-scale data collection and labeling remain burdensome. This paper proposes a depth data-guided framework based on smartphones for complex HAR and automatic labeling. A hardware platform simultaneously collects depth vision information from a Microsoft Kinect camera and Inertial Measurement Unit (IMU) signals from a smartphone. The framework consists of five clustering layers and a deep learning (DL) based classification model that identifies 12 complex daily activities. The results show that the hierarchical k-medoids (Hk-medoids) algorithm obtains labels with high accuracy (93.89%). Furthermore, the deep convolutional neural network (DCNN) model outperforms other machine learning (ML) and DL methods in the evaluation.
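The abstract does not specify how the five Hk-medoids clustering layers are constructed; as a rough illustration of the underlying idea, the sketch below implements a single-level k-medoids pass (assumed PAM-style: assign points to the nearest medoid, then re-pick each medoid as the cluster member minimizing total in-cluster distance) on hypothetical toy feature vectors. The function name, data, and parameters are illustrative, not from the paper.

```python
import random

def k_medoids(points, k, iters=20, seed=0):
    """Minimal PAM-style k-medoids: medoids are always actual data points,
    which is what allows a representative sample to label its cluster."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    medoids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest medoid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, medoids[i]))].append(p)
        # Update step: each medoid becomes the cluster member with the
        # smallest total distance to the rest of its cluster.
        new_medoids = [
            min(c, key=lambda m: sum(dist(m, q) for q in c)) if c else medoids[i]
            for i, c in enumerate(clusters)
        ]
        if new_medoids == medoids:  # converged
            break
        medoids = new_medoids
    return medoids, clusters

# Hypothetical example: two well-separated groups of 2-D "feature vectors".
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
medoids, clusters = k_medoids(pts, k=2)
```

A hierarchical variant would then re-cluster within each resulting group; the actual five-layer design and distance measures used in the paper are given in the full text.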
