For the semantic analysis of activities and events in videos, it is important to capture the spatio-temporal relations among objects in 3D space. In this paper, we present a probabilistic method that extracts 3D trajectories of objects from 2D videos captured by a monocular moving camera. Unlike existing methods that rely on restrictive assumptions, our method extracts 3D trajectories under far fewer restrictions by adopting new example-based techniques that compensate for the lack of information: we estimate the focal length of the camera from similar example candidates and use it to compute the depths of detected objects. In contrast to other 3D trajectory extraction methods, ours can process videos taken by a stationary camera as well as by an uncalibrated moving camera. To this end, we modify Reversible Jump Markov Chain Monte Carlo (RJ-MCMC) particle filtering to better suit camera odometry without relying on geometric feature points. Moreover, our method reduces computation time by using keypoint matching to decrease the number of required object detections. Finally, we evaluate our method on well-known data sets, showing the robustness of our system and demonstrating its efficiency on different kinds of videos.