Abstract

For semantic analysis of activities and events in videos, it is important to capture the spatio-temporal relations among objects in 3D space. In this paper, we present a probabilistic method that extracts 3D trajectories of objects from 2D videos captured by a monocular moving camera. In contrast to existing methods that rely on restrictive assumptions, our method extracts 3D trajectories under far fewer restrictions by adopting new example-based techniques that compensate for the lack of information: we estimate the camera's focal length from similar candidates and use it to compute the depths of detected objects. Unlike other 3D trajectory extraction methods, ours can process videos taken from a stable camera as well as from a non-calibrated moving camera. To this end, we modify Reversible Jump Markov Chain Monte Carlo (RJ-MCMC) particle filtering to make it more suitable for camera odometry without relying on geometric feature points. Moreover, our method reduces running time by limiting the number of object detections through keypoint matching. Finally, we evaluate our method on known data sets, showing the robustness of our system and demonstrating its efficiency on different kinds of videos.
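The depth computation mentioned in the abstract (estimating a focal length and using it to infer object depth) typically rests on the pinhole-camera similar-triangles relation Z = f·H/h. Below is a minimal sketch of that relation; the function name, the pedestrian-height prior, and the numeric values are illustrative assumptions, not the paper's exact model.

```python
def depth_from_height(focal_px: float, real_height_m: float, pixel_height: float) -> float:
    """Estimate depth Z (metres) of a detected object under a pinhole camera model.

    focal_px:      estimated focal length, in pixels
    real_height_m: assumed real-world height of the object class, in metres
    pixel_height:  height of the detection's bounding box, in pixels
    """
    if pixel_height <= 0:
        raise ValueError("pixel height must be positive")
    # Similar triangles: h / f = H / Z  =>  Z = f * H / h
    return focal_px * real_height_m / pixel_height

# Example: a pedestrian assumed to be 1.7 m tall, detected 170 px tall,
# with an estimated focal length of 800 px, lies at a depth of 8.0 m.
print(depth_from_height(800.0, 1.7, 170.0))  # → 8.0
```

Combined with an estimated camera pose, such per-detection depths can be lifted to 3D points and linked over time into trajectories.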
