Abstract

We address the problem of multiview association of articulated objects observed with possibly moving, hand-held cameras. Starting from trajectory data, we encode the temporal evolution of the objects and perform matching without making assumptions on scene geometry and with only weak assumptions on the field-of-view overlaps. After generating a viewpoint-invariant representation using self-similarity matrices, we put the spatio-temporal object descriptions in correspondence using spectral methods on the resulting matching graph. We validate the proposed method on three publicly available real-world datasets and compare it with alternative approaches. Moreover, we present an extensive analysis of the accuracy of the proposed method in different contexts, with varying noise levels on the input data, varying amounts of overlap between the fields of view, and varying durations of the available observations.
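To make the viewpoint-invariant representation concrete, the sketch below shows one common way to build a self-similarity matrix (SSM) from a 2D trajectory: each entry is the pairwise distance between the object's positions at two time instants, which is unchanged under rotation and translation of the observed path. This is a minimal illustration under generic assumptions, not the paper's exact construction (the feature choice, normalization, and matching graph are not specified in this abstract).

```python
import numpy as np

def self_similarity_matrix(trajectory):
    """Compute the self-similarity matrix (SSM) of a 2D trajectory.

    trajectory: array of shape (T, 2) with image-plane positions over
    T frames. Entry (i, j) is the Euclidean distance between the
    positions at times i and j; it is invariant to rotation and
    translation of the trajectory, which makes it a viewpoint-tolerant
    temporal signature. (Illustrative only; the paper's actual SSM and
    descriptors may differ.)
    """
    traj = np.asarray(trajectory, dtype=float)
    diff = traj[:, None, :] - traj[None, :, :]   # (T, T, 2) pairwise offsets
    return np.linalg.norm(diff, axis=-1)          # (T, T) distance matrix

# Toy check: the same motion seen under a rotated, translated "viewpoint"
# yields the same SSM, while the raw coordinates differ.
t = np.linspace(0, 2 * np.pi, 50)
traj_a = np.stack([np.cos(t), np.sin(2 * t)], axis=1)     # figure-eight path
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
traj_b = traj_a @ R.T + np.array([3.0, -1.0])              # rotated + shifted copy

print(np.allclose(self_similarity_matrix(traj_a),
                  self_similarity_matrix(traj_b)))          # True
```

In a multiview setting, comparing such SSMs across cameras gives pairwise affinities between candidate object tracks, which can then feed a spectral matching step on the resulting association graph.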
