Abstract

Sensor data often lacks intuitive interpretability in its raw form, unlike language or image data. Furthermore, standard end-to-end training leaves little control over local representation learning. We postulate that guided local representation learning can tackle both issues. In this paper we introduce a novel framework that uses low-level grounding to guide the learning of human sensor models. Our framework is amenable to different model architectures. We demonstrate our method on two human activity datasets: one containing labels of the low-level actions used in performing high-level activities, and one without any low-level labeling. We provide a comprehensive analysis of our framework's performance across many low-level action subsets and demonstrate how it can easily be adapted to data with no low-level labeling. Our results demonstrate that low-level grounding can be used to improve both the interpretability and performance of sensor models.
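
The abstract does not specify how the grounding is implemented. As a rough illustration only, the sketch below assumes one plausible form: an auxiliary per-timestep low-level action head attached to the local representations, trained jointly with the high-level activity classifier. All module names, shapes, and the joint-loss form are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of low-level grounding for a sensor model.
# Everything here (names, architecture, loss weighting) is an assumption,
# not the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundedSensorModel(nn.Module):
    """Sensor encoder whose local (per-timestep) representations are guided
    by an auxiliary low-level action head, while a pooled representation
    feeds the high-level activity classifier."""

    def __init__(self, in_channels, hidden_dim, n_low_actions, n_activities):
        super().__init__()
        # Local encoder: 1-D convolutions over the sensor time axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, hidden_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Auxiliary head: predicts a low-level action at each time step.
        self.low_head = nn.Conv1d(hidden_dim, n_low_actions, kernel_size=1)
        # High-level head: classifies the whole sequence from pooled features.
        self.high_head = nn.Linear(hidden_dim, n_activities)

    def forward(self, x):
        # x: (batch, in_channels, time)
        z = self.encoder(x)                          # (batch, hidden, time)
        low_logits = self.low_head(z)                # (batch, n_low, time)
        high_logits = self.high_head(z.mean(dim=2))  # (batch, n_activities)
        return low_logits, high_logits

def grounded_loss(low_logits, high_logits, low_labels, activity_labels,
                  alpha=0.5):
    """Assumed joint objective: high-level activity loss plus a weighted
    low-level grounding term supervising the local representations."""
    low_loss = F.cross_entropy(low_logits, low_labels)    # per-step actions
    high_loss = F.cross_entropy(high_logits, activity_labels)
    return high_loss + alpha * low_loss
```

In this reading, setting alpha to zero recovers standard end-to-end training, and for the dataset without low-level labels the auxiliary term would need a surrogate target; the abstract only states that such an adaptation is possible, not how it is done.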
