Abstract

Wearable devices equipped with a variety of sensors facilitate the measurement of physiological and behavioral characteristics. Activity-based person identification is an emerging and fast-evolving technology in the security and access-control fields. Wearables such as smartphones, the Apple Watch, and Google Glass can continuously sense and collect activity-related information about users, and activity patterns can be extracted to differentiate people. Although various human activities have been widely studied, only a few (e.g., gait and keystrokes) have been used for person identification. In this article, we performed person identification using two public benchmark data sets (UCI-HAR and WISDM2019), which were collected from several different activities using multimodal sensors (accelerometer and gyroscope) embedded in wearable devices (smartphone and smartwatch). We implemented eight classifiers: a multivariate squeeze-and-excitation network (MSENet), a time-series transformer (TST), a temporal convolutional network (TCN), CNN-LSTM, ConvLSTM, XGBoost, decision tree, and k-nearest neighbor. The proposed MSENet can model the relationships between different sensor data. It achieved the best person identification accuracies across activities: 91.31% on UCI-HAR and 97.79% on WISDM2019. We also investigated the effects of sensor modality, human activity, feature fusion, and the window size used for sensor-signal segmentation. Compared with related work, our approach achieves state-of-the-art performance.
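The abstract does not detail MSENet's internals beyond channel reweighting across sensor modalities. The following is a minimal sketch of a squeeze-and-excitation block applied to windowed multichannel sensor data, assuming PyTorch; the sizes (6 channels for a tri-axial accelerometer plus gyroscope, a 128-sample window, reduction ratio 2) are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a squeeze-and-excitation (SE) block over sensor channels.
# Assumption: inputs are fixed-length windows shaped (batch, channels, time),
# e.g., 6 channels (3-axis accelerometer + 3-axis gyroscope) x 128 samples.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Channel-wise attention for multivariate 1-D sensor windows."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool1d(1)        # global average over time
        self.excite = nn.Sequential(                  # learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _ = x.shape
        w = self.squeeze(x).view(b, c)                # squeeze: (N, C)
        w = self.excite(w).view(b, c, 1)              # excite: per-channel weight in (0, 1)
        return x * w                                  # reweight each sensor channel

# Example: a batch of 8 windows, 6 sensor channels, 128 samples per window.
x = torch.randn(8, 6, 128)
print(SEBlock1d(channels=6)(x).shape)                 # torch.Size([8, 6, 128])
```

In this reading, the squeeze step summarizes each sensor channel over the window and the excitation step learns how strongly each modality (e.g., accelerometer vs. gyroscope axes) should contribute, which is one plausible way to "model the relationship between different sensor data" as the abstract describes.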
