Abstract

Low visibility at airports can significantly reduce airside capacity, leading to ground delays and runway/taxiway incursions. Digital tower technology, built on live camera feeds, leverages computer vision to enhance airside surveillance and operational efficiency. However, technical faults in digital camera systems can introduce low-fidelity transmission effects such as blurring, pixelation, or JPEG compression artifacts. In addition, adverse weather such as rain and fog can further reduce visibility for tower controllers, whether they rely on digital video or an out-of-tower view. This paper proposes a computer vision framework with deep learning algorithms to detect and track aircraft in low-visibility (bad weather) and low-fidelity (technical issues) environments, enhancing visibility from digital video input. The framework employs a convolutional neural network for aircraft detection and Kalman filters for tracking, which is especially valuable in low-visibility conditions. Performance is further improved by pre- and postprocessing algorithms, including object filtering, corrupted-image detection, and image enhancement. The framework is validated on an airport video dataset from Houston Airport, where it enhances visibility under adverse weather conditions.
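To illustrate the tracking stage the abstract describes, the sketch below shows a minimal constant-velocity Kalman filter for the centroid of a detected aircraft bounding box. This is an assumed, illustrative formulation only: the class name, state layout, and all noise parameters are hypothetical and are not taken from the paper. The `predict` step lets a track coast through frames where the CNN detector fails (e.g. fog or a corrupted frame), and `update` fuses a new detection when one arrives.

```python
import numpy as np


class CentroidKalmanTracker:
    """Constant-velocity Kalman filter for one aircraft centroid.

    State vector: [x, y, vx, vy]. Parameter values are illustrative
    assumptions, not the paper's actual configuration.
    """

    def __init__(self, x0, y0, dt=1.0, process_var=1.0, meas_var=10.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)  # initial state
        self.P = np.eye(4) * 100.0                           # state covariance
        self.F = np.array([[1, 0, dt, 0],                    # constant-velocity
                           [0, 1, 0, dt],                    # motion model
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                     # we observe only
                           [0, 1, 0, 0]], dtype=float)       # the (x, y) centroid
        self.Q = np.eye(4) * process_var                     # process noise
        self.R = np.eye(2) * meas_var                        # measurement noise

    def predict(self):
        # Propagate the state one frame ahead; used on its own to coast
        # through frames with no usable detection.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        # Fuse a new detected centroid with the prediction.
        z = np.array([zx, zy], dtype=float)
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R              # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In use, each video frame triggers one `predict`; an `update` follows only when the detector returns a box for that track, so the filter smooths jitter and bridges short detection dropouts.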
