Abstract

Road detection plays a vital role in automated driving and advanced driver assistance systems. In recent years, mainstream frameworks have suffered from restricted receptive fields and a limited ability to model long-range relations. Previous methods fail to segment precise road boundaries when urban roads with similar surface textures are presented. Moreover, road regions are perceived as non-road areas due to shadows, which compromises the completeness of the detected road in the traffic environment. To this end, a hierarchical enhanced attention transformation (HEAT) architecture for urban road detection is proposed, which captures both fine details (road edges) and global contextual information (road structure). A symmetrical data-fusion residual network fuses visual semantic and spatial structure information. Attention consolidation units model global contextual information at different layers to enhance features from coarse to fine. In addition, corresponding local and global features are fused hierarchically in progressive up-sampling modules. Comprehensive empirical studies compare HEAT with other mainstream methods on the KITTI and Cityscapes datasets. HEAT shows highly competitive performance: confusable areas are correctly distinguished in the presence of obstacles, shadows, and similar road textures, and HEAT outperforms state-of-the-art methods in urban road detection.
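
To make the described pipeline concrete, the following is a minimal PyTorch sketch of the architecture outlined in the abstract: two symmetric encoder branches (visual and spatial), an attention unit at each scale, and a progressive decoder that fuses features hierarchically. All module names, channel widths, block counts, and the use of a single-channel spatial map (e.g., depth) as the second input are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a HEAT-style network, under the assumptions stated above.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 conv + BN + ReLU with stride 2, halving the spatial resolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class AttentionUnit(nn.Module):
    """Stand-in for an attention consolidation unit: global self-attention
    over flattened spatial positions to model long-range context."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.norm = nn.LayerNorm(ch)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)                 # (B, HW, C)
        out, _ = self.attn(tokens, tokens, tokens, need_weights=False)
        tokens = self.norm(tokens + out)                      # residual + norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class HEATSketch(nn.Module):
    """Symmetric dual-branch encoder, per-scale attention, progressive decoder."""
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        c1, c2, c3 = channels
        self.rgb_enc = nn.ModuleList([conv_block(3, c1), conv_block(c1, c2), conv_block(c2, c3)])
        self.spa_enc = nn.ModuleList([conv_block(1, c1), conv_block(c1, c2), conv_block(c2, c3)])
        self.attn = nn.ModuleList([AttentionUnit(c) for c in channels])
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(c3, c2, 2, stride=2),
            nn.ConvTranspose2d(c2, c1, 2, stride=2),
            nn.ConvTranspose2d(c1, c1, 2, stride=2),
        ])
        self.head = nn.Conv2d(c1, 1, 1)                       # road / non-road logits

    def forward(self, rgb, spatial):
        feats, x_r, x_s = [], rgb, spatial
        for rgb_blk, spa_blk, attn in zip(self.rgb_enc, self.spa_enc, self.attn):
            x_r, x_s = rgb_blk(x_r), spa_blk(x_s)
            feats.append(attn(x_r + x_s))                     # fuse branches, add global context
        x = self.up[0](feats[2]) + feats[1]                   # hierarchical fusion, coarse to fine
        x = self.up[1](x) + feats[0]
        return self.head(self.up[2](x))


if __name__ == "__main__":
    model = HEATSketch()
    rgb = torch.randn(1, 3, 128, 128)
    depth = torch.randn(1, 1, 128, 128)
    print(model(rgb, depth).shape)  # torch.Size([1, 1, 128, 128])
```

In this sketch the branch fusion is a simple element-wise sum and the decoder uses transposed convolutions with skip additions; the paper's fusion operators and up-sampling modules may differ.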
