Abstract

This paper presents a novel method to accurately detect moving objects in a video sequence captured with a nonstationary camera. Although common methods detect motion effectively for static backgrounds or under a single planar-perspective transformation, many detection errors occur when the background contains complex dynamic interferences or the camera undergoes unknown motion. To solve this problem, this study proposes a motion detection method that incorporates temporal motion and spatial structure. In the proposed method, spatial semantic planes are first segmented, and image registration based on stable background planes is applied to overcome interference from the foreground and dynamic background; the resulting dense temporal motion estimate ensures that small moving objects are not missed. Second, motion pixels are mapped onto the semantic planes, and the spatial distribution constraints of motion pixels, regional shapes, and plane semantics, integrated into a planar structure, are used to minimise false positives. Finally, based on the dense temporal motion and spatial structure, moving objects are accurately detected. Experimental results on the CDnet, Pbi, and Aeroscapes datasets, as well as other challenging self-captured videos under difficult conditions such as fast camera movement, large zoom variation, video jitter, and dynamic backgrounds, show that the proposed method removes background movement, dynamic interferences, and marginal noise and effectively obtains complete moving objects. © 2017 Elsevier Inc. All rights reserved.
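As a rough illustration of the registration-then-differencing idea summarised above, the sketch below aligns consecutive frames with a background homography and keeps only motion pixels that fall on a semantic plane mask. This is not the authors' implementation: the segmentation producing plane_mask, the OpenCV-based registration, and all thresholds are illustrative assumptions.

# Minimal sketch of motion detection under camera motion (illustrative only).
# Assumptions: OpenCV is available; plane_mask stands in for the paper's
# semantic-plane segmentation, which is not reproduced here.
import cv2
import numpy as np

def detect_moving_pixels(prev_frame, curr_frame, plane_mask=None):
    """Register curr_frame to prev_frame using background features,
    then difference the aligned frames to expose temporal motion."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Track sparse corners between frames; RANSAC below discards
    # foreground correspondences so the homography fits the background.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good_prev = p0[status.flatten() == 1]
    good_curr = p1[status.flatten() == 1]

    # Homography approximating the camera-induced background motion.
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)

    # Warp the previous frame into the current view to cancel camera motion.
    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))

    # Residual differences are candidate motion pixels.
    diff = cv2.absdiff(curr_gray, warped_prev)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Keep only motion that falls on plausible planes (placeholder constraint).
    if plane_mask is not None:
        motion = cv2.bitwise_and(motion, plane_mask)

    # Light morphological opening to suppress marginal noise.
    kernel = np.ones((3, 3), np.uint8)
    motion = cv2.morphologyEx(motion, cv2.MORPH_OPEN, kernel)
    return motion

In the paper's method, the placeholder plane mask would instead come from the segmented semantic planes, and the planar-structure constraints (spatial distribution, regional shape, plane semantics) would further filter the thresholded motion pixels.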
