Abstract

Self-driving baggage tractors on airport ramps, or aprons, enable more efficient airport operations and support the expansion of the aviation market. Airport ramps have unique mobility requirements in terms of layout, population, demand, and traffic patterns. Avoiding moving aircraft on an airport apron is a top priority because of critical safety and security concerns. Existing aircraft-detection approaches use remote-sensing images or surveillance cameras; however, these are not compatible with the sensors mounted on low-height ground equipment at airport ramps. Similarly, self-driving studies for public roads have not considered detecting movable objects with the massive size and concave contours of aircraft. Camera sensors cannot accurately measure the distance to concave contours, whereas a lidar sensor cannot easily cluster or classify an object within point cloud data. In this paper, we present a fusion of camera and lidar sensors for aircraft and object detection at airport ramps. We run detection in parallel on the lidar and camera streams and then integrate both sets of results so that each sensor compensates for the other's weaknesses. Using the proposed energy-optimization model, which adapts a conditional random field, we handle the over- and under-segmentation of point-cloud objects caused by the sparse point clouds that aircraft produce. Our algorithm achieves a 31.1% improvement in tracking and a 5.5% improvement in classification over other fusion algorithms when applied to a dataset acquired at the Cincinnati/Northern Kentucky International Airport.
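The abstract describes the CRF-based fusion only at a high level. The minimal Python sketch below illustrates one way such an energy over lidar clusters could be set up and minimized so that over-segmented aircraft fragments merge and distant clusters stay apart. It is an illustrative assumption, not the paper's formulation: the helper names (iou, unary, pairwise, fuse), the background cost, the distance threshold tau, and the greedy ICM minimizer are all hypothetical choices.

    """Sketch: CRF-style camera/lidar fusion. Each lidar cluster i gets a
    label x_i: the index of a camera bounding box, or -1 for unmatched.
    Unary terms reward image-plane overlap; pairwise terms encourage nearby
    clusters to share a label. All thresholds are illustrative."""
    import numpy as np

    def iou(a, b):
        """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def unary(proj_box, camera_boxes, label):
        """Cost of assigning a projected lidar cluster to camera box `label`."""
        if label == -1:                  # unmatched: fixed background cost
            return 0.5
        return 1.0 - iou(proj_box, camera_boxes[label])

    def pairwise(ci, cj, li, lj, tau=4.0):
        """Potts-style term on 3-D centroids: clusters closer than `tau`
        meters are penalized for taking different labels (over-segmentation),
        distant clusters for sharing one (under-segmentation)."""
        near = np.linalg.norm(ci - cj) < tau
        return 1.0 if (near and li != lj) or (not near and li == lj != -1) else 0.0

    def fuse(proj_boxes, centroids, camera_boxes, iters=10):
        """Greedy iterated conditional modes (ICM) over the joint energy."""
        n = len(proj_boxes)
        labels = [-1] * n
        choices = list(range(len(camera_boxes))) + [-1]
        for _ in range(iters):
            changed = False
            for i in range(n):
                def energy(l):
                    e = unary(proj_boxes[i], camera_boxes, l)
                    e += sum(pairwise(centroids[i], centroids[j], l, labels[j])
                             for j in range(n) if j != i)
                    return e
                best = min(choices, key=energy)
                if best != labels[i]:
                    labels[i], changed = best, True
            if not changed:
                break
        return labels  # clusters sharing a label form one fused object

Under this kind of scheme, clusters that end up sharing a camera label are merged into a single fused object, which is the mechanism by which a sparse aircraft point cloud split across several lidar clusters would be rejoined.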
