Abstract
Environment perception is essential for autonomous driving vehicles (ADVs), and vision plays a vital role in it. Object detection and semantic segmentation, as basic computer vision tasks, have become key technologies in the ADV perception module. With the development of deep learning, the results of both tasks have improved markedly. However, these advances have been driven by powerful baseline systems, which impose strict computing-resource requirements, and when the two tasks are deployed on the same platform, real-time performance usually degrades. In this paper, a method is presented for end-to-end lane segmentation and obstacle detection with real-time performance. In this method, a multi-task network is designed by fusing a segmentation network architecture and a detection network architecture. With a specific training strategy and a modified open dataset, the method detects lane lines and obstacles simultaneously with excellent performance. Compared with running the detection and segmentation modules separately on the same computing platform, the method presented in this paper achieves better real-time performance and lower hardware requirements.
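The efficiency gain described above comes from the multi-task layout: one shared backbone is evaluated once per frame, and its features feed both a detection head and a segmentation head, instead of two full networks running side by side. The following is a minimal structural sketch of that idea, not the authors' implementation; all function names here are hypothetical placeholders standing in for real network components.

```python
# Hypothetical sketch of a shared-backbone multi-task forward pass.
# In a real system each function would be a neural-network stage
# (e.g. a CNN backbone, a detection head, a segmentation decoder).

def shared_backbone(frame):
    # Placeholder: extract features once per frame.
    return {"features_of": frame}

def detection_head(features):
    # Placeholder: predict obstacle boxes from the shared features.
    return {"task": "obstacle_detection", "from": features}

def segmentation_head(features):
    # Placeholder: predict a lane mask from the same shared features.
    return {"task": "lane_segmentation", "from": features}

def multi_task_forward(frame):
    # The backbone runs ONCE; both heads reuse its output, which is
    # where the runtime and memory savings over two separate
    # single-task networks come from.
    features = shared_backbone(frame)
    return detection_head(features), segmentation_head(features)
```

Running two separate networks would instead invoke a backbone twice per frame; sharing it roughly halves the backbone cost, which is the dominant part of such models.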