Abstract

Modern vehicles are increasingly equipped with intelligent driver-assistance systems, in which lane detection is a key function. Complex detection architectures, whether wide or deep, have been investigated to boost accuracy and to handle challenging scenarios, but their computation and memory costs rise sharply and their response times lengthen. Resource-constrained devices therefore require lane detection networks with low cost and short inference time. To obtain more accurate detection results, a large (deep and wide) detection architecture is designed to extract high-dimensional, highly robust features, and a deep supervision loss is applied across different resolutions and stages. Despite its high precision, such a large network cannot be deployed directly on embedded devices because of its memory and computation demands. To make the network thinner and lighter, a general training strategy called self-knowledge distillation (SKD) is proposed. Unlike classical knowledge distillation, it uses no independent teacher and student networks; the knowledge is distilled within the network itself. For a more comprehensive and precise evaluation, a new lane data set is collected, and the Caltech Lanes and TuSimple lane data sets are also used. Experiments show that, via SKD, the small student network achieves detection accuracy similar to that of the large teacher network while requiring shorter inference time and less memory, so it can be deployed flexibly on resource-limited devices.
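The abstract describes SKD only at a high level. As a rough illustration of the idea, the following is a minimal PyTorch-style sketch, not the paper's implementation: the deep-supervision outputs of the network's large branch act as the "teacher" for a thin student sub-network at each stage. The function name, the weighting `alpha`, and the `temperature` are illustrative assumptions.

```python
import torch.nn.functional as F

def skd_loss(student_maps, teacher_maps, labels, temperature=2.0, alpha=0.5):
    """Combine deep supervision with self-distillation (illustrative).

    student_maps / teacher_maps: lists of per-stage lane-segmentation
    logits of shape (B, C, H, W), assumed already upsampled to the
    label resolution; `labels` is a (B, H, W) tensor of class indices.
    The teacher maps come from the larger branch of the same network
    and are detached so gradients only update the student.
    """
    total = 0.0
    for s, t in zip(student_maps, teacher_maps):
        t = t.detach()
        # Soft-target term: KL divergence between softened logits,
        # scaled by T^2 as in standard distillation.
        soft = F.kl_div(
            F.log_softmax(s / temperature, dim=1),
            F.softmax(t / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        # Hard term: per-stage deep supervision against ground truth.
        hard = F.cross_entropy(s, labels)
        total = total + alpha * soft + (1.0 - alpha) * hard
    return total
```

Detaching the teacher maps is what makes this "self"-distillation: both sets of outputs come from a single network trained jointly, so no separately trained teacher is required.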
