Abstract
This paper presents a novel method for localizing and recognizing moving objects in real airport surface scenes. Unlike traditional applications, moving object detection (MOD) on the airport surface is more challenging because the background is an open outdoor environment: target objects are usually low in resolution, and the MOD task is vulnerable to many undesired changes such as cloud movement and illumination variation. To address these issues, this paper proposes a unified and effective deep-learning-based MOD architecture that combines both appearance and motion cues. Specifically, a novel moving region proposal generation module is first designed, which locates regions of moving objects based on motion information. Meanwhile, a novel cascade multilayer feature fusion module with transposed convolution produces convolutional feature maps that are both semantically rich and fine in resolution for category recognition. Finally, a large-scale dataset is manually constructed from daily surveillance videos of a real airport surface. Results show that the proposed method outperforms state-of-the-art solutions in extracting moving objects from airport surface scenes.
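The motion-cue idea behind the moving region proposal module can be illustrated with a minimal frame-differencing sketch. This is not the paper's learned module: the threshold, the single-box grouping, and the function name `moving_region_proposals` are all illustrative assumptions.

```python
import numpy as np

def moving_region_proposals(prev_frame, curr_frame, thresh=25):
    """Propose a bounding box around moving regions via frame differencing.

    Illustrative sketch only: the paper's module is learned; here we use a
    fixed pixel-difference threshold and return one box covering all
    changed pixels.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)
    if ys.size == 0:
        return []  # no motion detected between the two frames
    # Box as (x_min, y_min, x_max, y_max) over all motion pixels
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]

# Toy example: an 8x8 bright "object" shifts one pixel to the right,
# so only its leading and trailing edge columns change between frames.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[20:28, 20:28] = 255
curr[20:28, 21:29] = 255
boxes = moving_region_proposals(prev, curr)  # → [(20, 20, 28, 27)]
```

A real system would additionally suppress slow background changes (e.g. cloud motion and illumination drift, the nuisances the abstract names) before boxing the residual motion; frame differencing alone is sensitive to exactly those disturbances.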