Abstract

In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (like RFID tags with scanners) in particular are popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and the simple touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, like medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal egocentric-based activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. This way we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities across different types of scenarios, where we achieve an F-measure of up to 79.6%.
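
To make the fusion idea concrete, the sketch below shows one plausible way to combine the two modalities at the feature level: simple statistics over a window of wrist-worn inertial data concatenated with a bag-of-objects vector derived from an egocentric object detector, fed into a standard classifier. The function names, the object vocabulary, and the RandomForest classifier are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of feature-level fusion of egocentric-vision and inertial features.
# All names below (imu_window_features, object_histogram, the vocabulary, the
# RandomForest classifier) are hypothetical; the paper's detector, feature set,
# and fusion strategy are not specified here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def imu_window_features(accel: np.ndarray) -> np.ndarray:
    """Simple statistics over one window of wrist accelerometer data (n_samples x 3 axes)."""
    return np.concatenate([accel.mean(axis=0), accel.std(axis=0),
                           accel.min(axis=0), accel.max(axis=0)])


def object_histogram(detections: list, vocabulary: list) -> np.ndarray:
    """Bag-of-objects vector: how often each activity-critical object class was
    detected in the frames of one window (the detector itself runs upstream)."""
    return np.array([detections.count(obj) for obj in vocabulary], dtype=float)


def fuse(accel: np.ndarray, detections: list, vocabulary: list) -> np.ndarray:
    """Concatenate inertial and vision features into one fused feature vector."""
    return np.concatenate([imu_window_features(accel),
                           object_histogram(detections, vocabulary)])


# Toy usage with synthetic data (two example activities).
vocab = ["pill_box", "glass", "bottle"]
rng = np.random.default_rng(0)
X = np.stack([
    fuse(rng.normal(size=(50, 3)), ["pill_box", "glass"], vocab),
    fuse(rng.normal(size=(50, 3)), ["glass", "bottle"], vocab),
])
y = np.array(["take_medicine", "drink_water"])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X))
```

The point of the sketch is only the fusion step: if one modality is weak in a given window (e.g., a poor camera view), the concatenated vector still carries the other modality's evidence.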

Highlights

  • Human Activity Recognition is an active field of research in pervasive computing [1,2,3,4]

  • We propose the use of off-the-shelf smart devices to recognize the aforementioned activities, relying on inertial sensors and an ego-centric camera for our prediction

  • We present our work on a multimodal ego-centric activity recognition approach that relies on smart-watches and smart-glasses to recognize high-level activities, such as the activities of daily living


Introduction

Human Activity Recognition is an active field of research in pervasive computing [1,2,3,4]. As the cost of care increases [6,7,8], many fields like health care and nursing could benefit from computer-aided solutions that support caregivers. Often, these problems are solved using smart home environments where activities are inferred from ubiquitous sensors in the living area, giving caregivers more information. Such approaches can be very costly, as they often have to be adapted to each environment separately and require a fairly large infrastructure. We propose instead the use of off-the-shelf smart devices to recognize the aforementioned activities, relying on inertial sensors and an ego-centric camera for our prediction.
