Abstract

Contextual awareness in wearable computing enables the construction of intelligent systems that can interact with the user in a more natural way. In this paper, we study how personal locations arising from the user's daily activities can be recognized from egocentric videos. We assume that only a few training samples are available for learning. Given the diversity of devices available on the market, we introduce a benchmark dataset containing egocentric videos of eight personal locations acquired by a user with four different wearable cameras. To make our analysis useful in real-world scenarios, we propose a method to reject negative locations, i.e., those not belonging to any of the categories of interest to the end user. We assess the performance of the main state-of-the-art representations for scene and object classification on the considered task, as well as the influence of device-specific factors such as the field of view and the wearing modality. Regarding the device-specific factors, our experiments reveal that the best results are obtained with a head-mounted wide-angle device. Our analysis also shows the effectiveness of representations based on convolutional neural networks, combined with basic transfer learning techniques and an entropy-based rejection algorithm.
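The entropy-based rejection idea mentioned above can be illustrated with a minimal sketch: assuming a classifier that outputs a probability distribution over the eight location classes, a frame is rejected as a negative location when the entropy of the predicted distribution exceeds a threshold. The function names and the threshold value below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shannon_entropy(probs, eps=1e-12):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    probs = np.clip(probs, eps, 1.0)
    return -np.sum(probs * np.log2(probs))

def classify_with_rejection(class_probs, threshold):
    """Return the predicted location index, or None to reject the frame
    as a 'negative' location when the prediction entropy is too high."""
    if shannon_entropy(class_probs) > threshold:
        return None  # uncertain prediction -> treat as negative location
    return int(np.argmax(class_probs))

# Example with 8 personal locations, as in the benchmark dataset.
confident = np.array([0.90, 0.02, 0.02, 0.01, 0.01, 0.02, 0.01, 0.01])
uniform = np.full(8, 1 / 8)  # maximally uncertain distribution (3 bits)

print(classify_with_rejection(confident, threshold=1.5))  # -> 0
print(classify_with_rejection(uniform, threshold=1.5))    # -> None (rejected)
```

The threshold trades off coverage against reliability: a lower value rejects more frames as negative, while a higher value accepts predictions even when the classifier is uncertain.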
