Abstract

Any point in a 3D real-world scene can be represented by four non-coplanar points, and this spatial representation relationship remains invariant when the points are projected from the 3D scene onto a 2D image by a projective transformation. In this paper, we introduce this spatial representation invariant into trajectory description and matching, and present a video synchronization method based on it. Traditional video synchronization methods match trajectory points using only the epipolar geometry between the background images of the input videos or certain projective invariants of the trajectories. In contrast, the proposed method matches trajectory points by jointly exploiting the spatial representation relationship between the trajectory points and their background images, and the epipolar geometry constraint between the background images of the two videos. Experimental results demonstrate that the proposed method outperforms traditional methods based on the timeline constraint and on projective invariant representations of trajectories.
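
The following is a minimal numerical sketch, not the paper's implementation, illustrating the linearity that underlies the four-point representation: if a 3D point is written as a linear combination of four non-coplanar points in homogeneous coordinates, the same coefficients still relate their images under a projective camera, because projection acts linearly on homogeneous coordinates. The camera matrix, basis points, and coefficients below are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four non-coplanar basis points as homogeneous 4-vectors (columns P1..P4).
basis = np.column_stack([
    np.array([0.0, 0.0, 0.0, 1.0]),
    np.array([1.0, 0.0, 0.0, 1.0]),
    np.array([0.0, 1.0, 0.0, 1.0]),
    np.array([0.0, 0.0, 1.0, 1.0]),
])
coeffs = rng.random(4)          # representation coefficients a1..a4 (illustrative)
P = basis @ coeffs              # query point P = sum_i a_i * P_i

# An arbitrary 3x4 projective camera (intrinsics K, random rotation and translation).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
t = rng.standard_normal((3, 1))
M = K @ np.hstack([Q, t])

# Project the point and the basis; the same coefficients reproduce the projected
# point, since M (sum_i a_i P_i) = sum_i a_i (M P_i).
p_img = M @ P
basis_img = M @ basis
print(np.allclose(p_img, basis_img @ coeffs))   # True
```

In practice the homogeneous scale of each measured image point is unknown, so the paper's full method additionally relies on the epipolar geometry between the background images to constrain the matching; the sketch only shows why the representation survives a projective mapping.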
