Abstract
Effective traffic sign detection is crucial for the safety and operational efficiency of autonomous vehicle navigation systems, particularly in dynamically changing environments. To address the primary challenges of capturing long-range pixel dependencies and detecting small objects in complex scenes, we present VisioSignNet, a dual-interactive neural network designed for enhanced traffic sign detection. The architecture incorporates a Local and Global Interactive Module (LGIM) and an Enhancing Channel and Space Interaction (ECSI) module. The LGIM is engineered to balance local and global feature interactions, while the ECSI optimizes the exchange of information across channel and spatial dimensions. Their synergistic interaction not only enlarges the receptive field at early processing stages but also significantly improves the recognition of small-scale, critical traffic signs. Evaluated on the TT100K and GTSDB datasets, VisioSignNet achieved mean average precision (mAP) scores of 90.5% and 97.0%, respectively, with a model size of 26M parameters. Its enhanced variant, VisioSignNet_l, with 34M parameters, reached mAP scores of 93.2% and 97.8%. These results substantiate VisioSignNet's efficacy in tackling the complexities of traffic sign detection and confirm its potential as a robust solution for autonomous driving technologies.
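To make the two modules described above more concrete, the sketch below shows one plausible way an LGIM-style block (local convolution fused with a lightweight global attention branch) and an ECSI-style block (channel attention followed by spatial attention) could be composed. This is a minimal illustration only: the paper does not publish its code, and the internal structure here is an assumption built from standard components (depthwise convolution, single-head self-attention, squeeze-and-excitation and CBAM-style attention), not the authors' actual implementation.

```python
# Illustrative sketch only: module names follow the abstract (LGIM, ECSI),
# but their internals below are assumptions, not the paper's published design.
import torch
import torch.nn as nn


class LGIM(nn.Module):
    """Hypothetical Local and Global Interactive Module:
    a depthwise-conv local branch fused with a single-head global attention branch."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),  # local context
            nn.Conv2d(channels, channels, 1),                               # channel mixing
        )
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)                          # local feature interactions
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C) tokens
        global_, _ = self.attn(tokens, tokens, tokens)     # long-range pixel dependencies
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        return x + local + global_                     # balance local and global features


class ECSI(nn.Module):
    """Hypothetical Enhancing Channel and Space Interaction module:
    squeeze-excite-style channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel(x)                        # channel-wise reweighting
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)   # spatial descriptor
        return x * self.spatial(pooled)                # spatial reweighting


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)      # e.g. a backbone feature map
    out = ECSI(64)(LGIM(64)(feat))
    print(out.shape)                        # torch.Size([1, 64, 80, 80])
```

In this sketch the LGIM's residual sum of local and global branches stands in for the "balanced interaction" the abstract describes, and stacking ECSI after it mimics the claimed channel-space information exchange; the real network may arrange, normalize, or repeat these blocks quite differently.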