Abstract

This study demonstrates robust human activity recognition from a single triaxial accelerometer via bilateral domain adaptation using semi-supervised deep translation networks. Datasets were obtained from two previously published studies, at the University of Michigan (Domain 1) and the Georgia Institute of Technology (Domain 2), in which triaxial accelerometry was recorded from subjects under defined conditions, with the goal of recognizing standing rest, level-ground walking, decline walking, and incline walking, with and without stairs (the activity classes). The collected accelerometer data were preprocessed and then analyzed by AdaptNet, a deep translation network composed of Variational Autoencoders and Generative Adversarial Networks trained with additional cycle-consistency losses to combine information from the two data domains over a shared latent space. Visualization and quantitative analyses demonstrated that AdaptNet successfully reconstructs self-domain wavelet scalogram inputs and generates realistic cross-domain translations. We found that AdaptNet improves classification performance, measured by average macro-F1 score, by up to 36 percentage points (0.75 versus 0.39) over existing domain adaptation methods when a small amount of labeled data is provided for both domains. AdaptNet also yielded more robust performance than other methods when the sensor placements differ across the two domains. By improving the ability to fuse datasets with scarce and weak labels, AdaptNet provides valid recognition of real-world locomotor activities, which can be further utilized in digital health tools such as status assessment of patients with chronic diseases.
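The abstract describes AdaptNet as a two-domain translation network built from variational autoencoders and adversarial discriminators with cycle-consistency losses over a shared latent space. The exact AdaptNet implementation is not given here; the following is a minimal sketch of that general construction, in which the module names, layer sizes, latent dimensionality, and loss weighting are illustrative assumptions rather than the authors' specification.

```python
# Minimal sketch of a shared-latent-space VAE-GAN with cycle-consistency for two
# domains of wavelet-scalogram inputs. All names and dimensions are assumptions
# for illustration; this is not the authors' AdaptNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 64  # assumed shared latent dimensionality


class Encoder(nn.Module):
    """Maps a flattened scalogram from one domain to a Gaussian latent (mu, logvar)."""
    def __init__(self, in_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)


class Decoder(nn.Module):
    """Reconstructs a scalogram for one domain from the shared latent code."""
    def __init__(self, out_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                  nn.Linear(256, out_dim))

    def forward(self, z):
        return self.body(z)


class Discriminator(nn.Module):
    """Scores whether a scalogram looks like a real sample from its domain (GAN critic)."""
    def __init__(self, in_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                  nn.Linear(256, 1))

    def forward(self, x):
        return self.body(x)


def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick.
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)


def translation_losses(x1, x2, enc1, enc2, dec1, dec2):
    """Self-reconstruction, cross-domain translation, cycle-consistency, and KL terms."""
    mu1, lv1 = enc1(x1)
    mu2, lv2 = enc2(x2)
    z1, z2 = reparameterize(mu1, lv1), reparameterize(mu2, lv2)

    # Within-domain reconstruction through the shared latent space.
    recon = F.mse_loss(dec1(z1), x1) + F.mse_loss(dec2(z2), x2)

    # Cross-domain translations (domain 1 -> 2 and 2 -> 1).
    x1to2, x2to1 = dec2(z1), dec1(z2)

    # Cycle-consistency: translate back and compare with the original input.
    mu_b, lv_b = enc2(x1to2)
    mu_a, lv_a = enc1(x2to1)
    cycle = (F.mse_loss(dec1(reparameterize(mu_b, lv_b)), x1) +
             F.mse_loss(dec2(reparameterize(mu_a, lv_a)), x2))

    # VAE prior (KL) terms pulling both encoders toward a common latent prior.
    kl = (-0.5 * (1 + lv1 - mu1.pow(2) - lv1.exp()).mean()
          - 0.5 * (1 + lv2 - mu2.pow(2) - lv2.exp()).mean())

    return recon, cycle, kl, x1to2, x2to1
```

In training, the cross-domain translations `x1to2` and `x2to1` would additionally be scored by per-domain discriminators (adversarial loss), and a small amount of labeled data in each domain would supervise an activity classifier on the shared latent code, consistent with the semi-supervised setting described in the abstract.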
