Abstract
Studies have linked excessive TV watching to obesity in adults and children. In addition, TV content is an important source of visual exposure to cues that can affect a broad set of health-related behaviors. This paper presents a ubiquitous sensing system that detects moments of screen-watching during daily life activities. We use machine learning techniques to analyze video captured by a head-mounted wearable camera. Although wearable cameras do not directly measure visual attention, we show that attention to screens can be reliably inferred by detecting and tracking the location of screens within the camera's field of view. We use a computational model of the head movements associated with TV watching to identify TV watching events. We evaluated our method on 13 hours of TV watching videos recorded from 16 participants in a home environment. Our model achieves a precision of 0.917 and a recall of 0.945 in identifying attention to screens. We validated the third-person annotations used to determine accuracy and further evaluated our system in a multi-device environment using gold-standard attention measurements obtained from a wearable eye-tracker. Finally, we tested our system in a natural environment: it achieves a precision of 0.87 and a recall of 0.82 on challenging videos capturing the daily life activities of participants.
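The core inference described in the abstract — treating a screen that stays near the center of the wearer's field of view as evidence of attention — can be sketched in a few lines. The code below is an illustrative simplification, not the authors' implementation: the detection source, the center-distance tolerance, and the minimum dwell length are all assumed values for demonstration.

```python
# Minimal sketch (not the paper's actual model): infer screen-watching
# events from per-frame screen detections in head-camera video.
# Each detection is the screen center (x, y) in normalized frame
# coordinates, or None when no screen was detected in that frame.
# A frame counts as "attending" when a screen lies near the frame
# center (i.e., the wearer's head is oriented toward it); a watching
# event is a sufficiently long run of attending frames.

from typing import List, Optional, Tuple

def watching_events(
    detections: List[Optional[Tuple[float, float]]],
    center_tol: float = 0.25,  # max offset from frame center (assumed value)
    min_frames: int = 30,      # minimum dwell, in frames (assumed value)
) -> List[Tuple[int, int]]:
    """Return (start, end) frame-index pairs of inferred watching events."""
    events = []
    run_start = None
    for i, det in enumerate(detections):
        attending = (
            det is not None
            and abs(det[0] - 0.5) <= center_tol
            and abs(det[1] - 0.5) <= center_tol
        )
        if attending and run_start is None:
            run_start = i                      # a candidate event begins
        elif not attending and run_start is not None:
            if i - run_start >= min_frames:    # long enough to count
                events.append((run_start, i))
            run_start = None
    # Close out a run that extends to the end of the video.
    if run_start is not None and len(detections) - run_start >= min_frames:
        events.append((run_start, len(detections)))
    return events
```

In practice the per-frame detections would come from a screen detector and tracker running on the camera feed, and the dwell logic would be replaced by the paper's head-movement model; the sketch only conveys the overall detect-track-infer pipeline.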
Published in: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies