Abstract

With the widespread adoption of vision-based wearable devices, temporal segmentation helps people quickly search for and localize activities of interest in egocentric videos. Within the same scenario, activities of different individuals tend to be similar; for example, people staying at home typically cook, clean, and watch TV. These relations among videos of different individuals can serve as auxiliary information to improve task performance. Inspired by this, we propose an Information Maximization Multi-task Clustering (IMMC) algorithm for egocentric temporal segmentation. The algorithm consists of two main parts: (1) within-task clustering, which clusters each task using an information maximization approach, and (2) cross-task information transfer, a novel strategy that transfers correlation information between tasks, balancing the correlations among clusters across tasks to improve the performance of each individual task. A draw-merge method is designed to solve the resulting optimization problem. Experiments are performed on three publicly available first-person vision data sets and a new data set we constructed (the Outdoor data set). The results show that IMMC consistently outperforms state-of-the-art clustering methods on multiple evaluation metrics, while also achieving competitive runtime cost and convergence behavior.
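To make the within-task step concrete, the sketch below illustrates information-maximization clustering on a single task: soft cluster assignments are scored by an estimate of the mutual information I(X; Y) = H(cluster marginal) − mean H(p(y|x)), which rewards confident yet balanced clusterings. This is a minimal toy illustration under our own assumptions (softmax-over-distance assignments and soft-k-means-style centroid updates), not the paper's draw-merge optimizer, and it omits the cross-task transfer entirely; all function names are hypothetical.

```python
import numpy as np

def soft_assign(X, centers, beta=2.0):
    """p(y|x) via a softmax over negative squared distances to cluster
    centers (an assumed assignment model, not the paper's)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def info_objective(P, eps=1e-12):
    """Information-maximization score: H(cluster marginal) minus the
    mean conditional entropy of the soft assignments."""
    marg = P.mean(axis=0)
    h_marg = -(marg * np.log(marg + eps)).sum()
    h_cond = -(P * np.log(P + eps)).sum(axis=1).mean()
    return h_marg - h_cond

def im_cluster(X, k, iters=50, beta=2.0, seed=0):
    """Toy single-task loop: soft centroid updates, scored by the
    information-maximization objective."""
    rng = np.random.default_rng(seed)
    # Farthest-point initialization so the centers start spread out.
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(d2.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        P = soft_assign(X, centers, beta)
        centers = (P.T @ X) / P.sum(axis=0)[:, None]  # weighted means
    P = soft_assign(X, centers, beta)
    return P.argmax(axis=1), info_objective(P)
```

On two well-separated 2-D blobs with `k=2`, the recovered labels separate the blobs and the objective approaches log 2, its maximum for two balanced, confidently assigned clusters.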
