Abstract

This article focuses on localizing and extracting multiple moving objects in images captured from a moving camera platform, such as image sequences recorded by drones. The positions of moving objects in the images are influenced both by the camera's motion and by the objects' own movement, whereas the background position depends on the camera's motion alone. The main objective of this article is to extract all moving objects from the background of an image. We first constructed a motion feature space, comprising motion distance and direction, to map the trajectories of feature points. We then employed a clustering algorithm based on trajectory distinctiveness to separate moving objects from the background and to distinguish feature points belonging to different moving objects. The pixels between the feature points were then designated as source points, and complete moving objects were segmented within local regions by identifying these pixels. We validated the algorithm on sequences from the Video Verification of Identity (VIVID) program database and compared it with related algorithms. The experimental results showed that, in the test sequences, when feature point trajectories exceeded 10 frames, there was a significant difference in the feature space between feature points on the moving objects and those on the background. Frames in which the feature points were correctly classified accounted for 67% of the total. The positions of the moving objects in the images were accurately localized, with an average IoU of 0.76 and an average contour accuracy of 0.57. These results indicate that our algorithm effectively localizes and segments moving objects in images captured by moving cameras.
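The core pipeline described above (feature-point trajectories → motion feature space of distance and direction → clustering into background vs. moving objects) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the function names are invented, and a toy 2-means clustering stands in for the trajectory-distinctiveness clustering the abstract describes.

```python
import numpy as np

def motion_features(trajectories):
    """Map each feature-point trajectory to a (distance, direction) pair.

    trajectories: array of shape (n_points, n_frames, 2) holding (x, y)
    image coordinates per frame. Net displacement over the trajectory is
    used here for simplicity; angle wrap-around is ignored in this sketch.
    """
    disp = trajectories[:, -1, :] - trajectories[:, 0, :]  # net displacement
    distance = np.linalg.norm(disp, axis=1)                # motion distance
    direction = np.arctan2(disp[:, 1], disp[:, 0])         # motion direction
    return np.column_stack([distance, direction])

def split_background(features, n_iter=20):
    """Toy 2-means clustering in the motion feature space.

    The larger cluster is taken as background, since background feature
    points dominate the image and share the camera-induced motion.
    Returns a boolean mask that is True for moving-object points.
    """
    # Deterministic init: seed centers at the min- and max-distance points.
    order = np.argsort(features[:, 0])
    centers = features[[order[0], order[-1]]].astype(float)
    for _ in range(n_iter):
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centers[None], axis=2), axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    background = int(np.bincount(labels, minlength=2).argmax())
    return labels != background
```

In this sketch, background points all carry roughly the same camera-induced displacement, so they collapse to a tight cluster in (distance, direction) space, while points on moving objects land elsewhere; the same separation argument underlies the trajectory-based clustering in the article.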
