Abstract

Wearable cameras can gather first-person images of the environment, opening new opportunities for the development of systems able to assist users in their daily life. This paper studies the problem of recognizing personal contexts from images acquired by wearable devices, which finds useful applications in daily routine analysis and stress monitoring. To assess the influence of device-specific features, such as the Field Of View and the wearing modality, a dataset of five personal contexts is acquired using four different devices. We propose a benchmark classification pipeline which combines a one-class classifier to detect negative samples (i.e., images not representing any of the personal contexts under analysis) with a classic one-vs-one multi-class classifier to discriminate among the contexts. Several experiments are designed to compare the performance of many state-of-the-art representations for object and scene classification when used with data acquired by the different wearable devices.
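A minimal sketch of the two-stage pipeline described above, assuming precomputed image feature vectors (e.g., object or scene representations) and using scikit-learn; the class name and the hyperparameters (nu, C, gamma) are illustrative choices, not the paper's configuration:

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

class ContextPipeline:
    """Hypothetical two-stage context recognizer: reject negatives, then classify."""

    def __init__(self):
        # Stage 1: one-class model trained only on in-context samples,
        # used to reject negatives (images of none of the five contexts).
        self.rejector = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale")
        # Stage 2: one-vs-one multi-class SVM over the five personal contexts.
        self.classifier = SVC(C=1.0, kernel="rbf", decision_function_shape="ovo")

    def fit(self, X_pos, y_pos):
        # X_pos: features of in-context images; y_pos: their context labels.
        self.rejector.fit(X_pos)
        self.classifier.fit(X_pos, y_pos)
        return self

    def predict(self, X, negative_label=-1):
        # Return a context label per image, or negative_label if rejected.
        accepted = self.rejector.predict(X) == 1   # +1 = inlier, -1 = outlier
        out = np.full(X.shape[0], negative_label, dtype=int)
        if accepted.any():
            out[accepted] = self.classifier.predict(X[accepted])
        return out
```

In this sketch the rejection threshold is controlled by nu; the paper's actual representations and classifier settings may differ.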
