Abstract
A vision-based autonomous driving perception system must perform a suite of tasks, including vehicle detection, drivable area segmentation, and lane line segmentation. Given the limited computational resources available, multi-task learning has become the predominant approach to building such systems. In this article, we introduce a highly efficient end-to-end multi-task learning model that achieves promising performance on all three tasks. We construct a reliable feature extraction network by introducing a feature extraction module called C2SPD. To account for the differences among the tasks, we propose a dual-neck architecture. Finally, we present an optimized decoder design for each task. Our model performs strongly on the challenging BDD100K dataset, attaining high accuracy (Acc) in vehicle detection and high mean intersection over union (mIoU) in drivable area segmentation. In addition, this is the first work to process these three visual perception tasks simultaneously in real time on an embedded device (Atlas 200I A2) while maintaining excellent accuracy.