Abstract
Traffic sign detection is essential to an intelligent driving assistance system. This paper proposes a deep learning‐based algorithm to address the low detection accuracy caused by the small size and high density of traffic signs in real‐world traffic scenarios. First, partial convolution (PConv) is introduced to improve the feature extraction module of the backbone network and to increase the model's ability to capture contextual information. Second, to prevent information loss during downsampling, a cross‐stage atrous spatial pyramid module (ASPPFCSPC) is constructed using atrous convolution; it fuses feature‐map information from multiple scales and expands the receptive field. Lastly, small‐target detection precision is improved by adding an extra small‐target detection head that operates on high‐resolution shallow feature maps. The detection head is decoupled so that the location and class information of the predicted target are extracted separately, which enhances the generalization ability of the proposed model. Experiments on the TT100K dataset demonstrate the superiority of the proposed algorithm, which achieves a mAP@0.5 of 91.2% and a mAP@0.5:0.95 of 71.8%.
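To make the PConv idea concrete, the following is a minimal PyTorch sketch of a partial convolution block in the style of FasterNet, where a regular 3x3 convolution is applied to only a fraction of the input channels and the remaining channels pass through unchanged. This is an illustrative assumption, not the authors' implementation; the channel count and split ratio are placeholders.

```python
# Hypothetical sketch of partial convolution (PConv); not the paper's code.
import torch
import torch.nn as nn

class PConv(nn.Module):
    def __init__(self, channels: int, ratio: float = 0.25):
        super().__init__()
        # Only a fraction of the channels are convolved; the rest are passed through.
        self.conv_channels = int(channels * ratio)
        self.pass_channels = channels - self.conv_channels
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel axis, convolve one part, concatenate with the untouched part.
        x1, x2 = torch.split(x, [self.conv_channels, self.pass_channels], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)

# Example: a 64-channel feature map where only 16 channels go through the conv.
if __name__ == "__main__":
    y = PConv(64)(torch.randn(1, 64, 80, 80))
    print(y.shape)  # torch.Size([1, 64, 80, 80])
```

Because most channels skip the convolution, the block reduces FLOPs and memory access relative to a full convolution while still mixing spatial context into part of the feature map.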