Abstract

Recent advances in wearable camera technology and computer vision have greatly enhanced the automatic capture and recognition of human activities in real-world settings. While the appeal and utility of wearable cameras for human-behavior understanding are indisputable, privacy concerns have limited the broader adoption of this method. To mitigate this problem, we propose a deep learning-based approach that recognizes everyday activities in egocentric photos that have been intentionally degraded in quality to preserve the privacy of bystanders. An evaluation on two annotated datasets collected in the field, comprising a combined 84,078 egocentric photos, showed activity recognition accuracy between 79% and 88% across 17 and 21 activity classes, respectively, when the images were blurred with a mean filter (k = 20). To confirm that image degradation does indeed raise the perception of bystander privacy, we conducted a crowdsourced validation study with 640 participants; it showed a statistically significant positive relationship between the amount of image degradation and participants' willingness to be captured by wearable cameras. This work contributes to the field of privacy-sensitive activity recognition with egocentric photos by highlighting the trade-off between perceived bystander privacy protection and activity recognition performance.
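The degradation described above is a mean (box) filter with kernel size k = 20, which replaces each pixel with the average of its k × k neighborhood. The sketch below is a minimal pure-Python illustration of that operation on a grayscale image; the function name and the clamped-edge handling are our assumptions for illustration, not the paper's implementation (which would typically use a standard image library).

```python
def mean_filter(img, k):
    """Apply a k x k mean (box) filter to a 2-D grayscale image
    given as a list of lists of numbers.

    Illustrative sketch only: edges are handled by clamping
    coordinates to the image border, one of several common choices.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    r = k // 2  # offset of the window's top-left corner
    for y in range(h):
        for x in range(w):
            total = 0.0
            count = 0
            # Average over the k x k window centered (approximately,
            # for even k) on (y, x), clamping out-of-bounds indices.
            for dy in range(-r, k - r):
                for dx in range(-r, k - r):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
                    count += 1
            out[y][x] = total / count
    return out
```

With k = 20, each output pixel averages a 400-pixel neighborhood, which is what removes the fine detail (e.g., faces) that raises bystander-privacy concerns while preserving the coarse scene structure used for activity recognition.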
