Abstract
To address the complexity and limited deployability of high-precision segmentation models for damaged road markings, this study proposes a Multi-level Adaptive Lightweight Network (MALNet) based on knowledge distillation. By incorporating multi-scale dilated convolution and adaptive spatial-channel attention fusion modules, MALNet significantly improves the precision, completeness, and robustness of its segmentation branch. It further employs a multi-level knowledge distillation strategy that transfers hierarchical knowledge from a teacher model to a student model, improving the student's segmentation ability while markedly reducing its parameter count and computational cost, yielding a segmentation network that is both accurate and practical. Experiments on three damaged road marking detection data sets, CDM_P (Collective Damaged road Marking—Public), CDM_H (Collective Damaged road Marking—Highways), and CDM_C (Collective Damaged road Marking—Cityroad), show that MALNet outperforms competing models in segmentation accuracy and completeness across all damage types. MALNet is also notably efficient in parameter count, computation, and throughput: after distillation, the student model retains only 31.78% of the teacher model's parameters and 27.40% of its computational load, while inference speed increases by a factor of 1.9, demonstrating a substantial improvement in lightweight design.
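The core of any teacher-student distillation scheme like the one described above is a loss that mixes the hard-label objective with a softened match to the teacher's output distribution. The following is a minimal NumPy sketch of that idea only; the function name, the temperature/alpha values, and the single-level formulation are illustrative assumptions, not the paper's actual multi-level loss.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the class axis."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Hypothetical single-level distillation loss:
    alpha * cross-entropy(student, labels)
    + (1 - alpha) * T^2 * KL(teacher_soft || student_soft)."""
    n = len(labels)
    # hard-label cross-entropy against ground truth
    p_student = softmax(student_logits)
    hard = -np.log(p_student[np.arange(n), labels] + 1e-12).mean()
    # soft term: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    soft = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))) \
        .sum(axis=1).mean() * temperature ** 2
    return alpha * hard + (1 - alpha) * soft
```

A multi-level variant, as the abstract suggests, would apply a similar matching term at several intermediate feature stages rather than only at the final logits.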