Abstract

As one of the fundamental visual tasks in autonomous driving, lane detection has attracted increasing attention. In practical applications, lane points are difficult to localize because they often appear sparse and incomplete under the influence of illumination and environment. Conventional lane detection methods rely on coarse features and carefully designed post-processing to detect lane lines. However, these methods are usually slow, and their stability and generalization ability are unsatisfactory. In this paper, we propose a novel lane detection network, Fast-HBNet (Fast Hybrid Branch Network), which exploits both global semantic information and spatial contexts. To enlarge receptive fields and encode more detailed information, a compound transformation is employed, and the proposed hybrid branch network extracts four diverse feature maps with different receptive fields and spatial contexts. In addition, we design a Hierarchical Feature Learning (HFL) module that learns lane features at the scale, channel, and spatial levels to enhance the generalization ability of our detector. These features are further selectively coalesced to generate unified lane feature maps with large receptive fields and rich detail. In other words, our network can encode both the global semantic information in the low-resolution feature maps and the fine-grained details in the high-resolution feature maps. Experimental results on the TuSimple (2017) and CULane [Pan et al. (2018)] datasets demonstrate that the proposed Fast-HBNet outperforms numerous state-of-the-art lane detectors in both speed and accuracy. In particular, Fast-HBNet achieves an accuracy of 96.88% on the TuSimple dataset at a speed of 76 FPS.
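The abstract does not give the HFL module's equations, so as a rough illustration only: the general idea of selectively coalescing branch feature maps via channel- and spatial-level gating can be sketched as below. All shapes, function names, and the specific gating scheme are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_gate(feat):
    # feat: (C, H, W). Global average pooling yields one descriptor per
    # channel; a sigmoid turns it into per-channel gating weights.
    desc = feat.mean(axis=(1, 2))            # (C,)
    return sigmoid(desc)[:, None, None]      # (C, 1, 1), broadcastable

def spatial_gate(feat):
    # Collapse channels into a single saliency map, then gate each location.
    sal = feat.mean(axis=0)                  # (H, W)
    return sigmoid(sal)[None, :, :]          # (1, H, W), broadcastable

def fuse_branches(semantic_branch, detail_branch):
    """Selectively coalesce two branch feature maps of equal shape.

    Channel gating re-weights the semantically rich branch, spatial gating
    re-weights the detail-rich branch, and the gated maps are summed
    (a generic attention-fusion pattern, not the paper's exact design).
    """
    return (semantic_branch * channel_gate(semantic_branch)
            + detail_branch * spatial_gate(detail_branch))

# Toy branch feature maps standing in for two of the four hybrid branches.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 36, 100))
b = rng.standard_normal((64, 36, 100))
fused = fuse_branches(a, b)
print(fused.shape)   # (64, 36, 100)
```

In a real detector the branches would come from different backbone stages (resized to a common resolution before fusion), and the gates would be learned layers rather than parameter-free pooling.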
