Abstract

We address the problem of multiview association of articulated objects observed by possibly moving, hand-held cameras. Starting from trajectory data, we encode the temporal evolution of the objects and perform matching without assumptions on scene geometry and with only weak assumptions on field-of-view overlap. After generating a viewpoint-invariant representation using self-similarity matrices, we match the spatio-temporal object descriptions by applying spectral methods to the resulting matching graph. We validate the proposed method on three publicly available real-world datasets and compare it with alternative approaches. Moreover, we present an extensive analysis of the method's accuracy in different contexts: varying noise levels on the input data, varying amounts of overlap between the fields of view, and varying durations of the available observations.
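The viewpoint invariance of a self-similarity matrix can be illustrated with a minimal sketch: pairwise Euclidean distances between trajectory points are unchanged by rotating or translating the trajectory, so cameras observing the same motion from different viewpoints produce (near-)identical descriptors. This is only an illustrative construction; the paper's exact descriptor and normalisation may differ.

```python
import numpy as np

def self_similarity_matrix(traj):
    """Pairwise-distance self-similarity matrix of a trajectory.

    traj: (T, d) array of positions over T time steps.
    Returns a (T, T) matrix of Euclidean distances between all pairs
    of time steps. Distances are invariant to rotation and translation,
    and the max-normalisation below additionally removes global scale
    (an assumption for this sketch; the paper may normalise differently).
    """
    diff = traj[:, None, :] - traj[None, :, :]   # (T, T, d) pairwise differences
    ssm = np.linalg.norm(diff, axis=-1)          # (T, T) Euclidean distances
    m = ssm.max()
    return ssm / m if m > 0 else ssm
```

Applying a rigid transform to the trajectory, as a second camera viewpoint would, leaves the matrix unchanged, which is what makes it usable for cross-view matching.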
