Abstract

Wearable cameras provide an informative view of wearer activities, context, and interactions. Video obtained from wearable cameras is useful for life-logging, human activity recognition, visual confirmation, and other tasks widely used in mobile computing today. Extracting foreground information related to the wearer and separating irrelevant background pixels is the fundamental operation underlying these tasks. However, current wearer foreground extraction methods that rely on image data alone are slow, energy-inefficient, and in some cases inaccurate, making many tasks, such as activity recognition, difficult to implement without significant computational resources. To fill this gap, we built ActiSight, a wearable RGB-Thermal video camera that uses thermal information to make wearer segmentation practical for body-worn video. Using ActiSight, we collected a total of 59 hours of video from 6 participants, capturing a wide variety of activities in a natural setting. We show that wearer foreground extracted with ActiSight achieves a high Dice similarity score while significantly lowering execution time and energy cost compared with an RGB-only approach.
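The Dice similarity score used in the evaluation is the standard overlap metric for segmentation masks, defined as 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. The minimal sketch below shows how such a score can be computed between two binary masks; the function name and the example masks are illustrative assumptions, not taken from the ActiSight implementation.

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect agreement).
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Illustrative example: a hypothetical predicted wearer mask
# compared against a manually annotated ground-truth mask.
pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
gt = np.array([[0, 1, 1],
               [1, 1, 0]])
print(f"Dice: {dice_score(pred, gt):.3f}")  # Dice: 0.857
```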
