Abstract

Traffic sign recognition techniques are now being deployed in automotive driver‐assistance systems. However, recognizing small traffic signs in real scenes remains challenging because of class imbalance and the limited size of the signs. To address these issues, a feature‐enhanced hybrid attention network based on YOLOv5s is proposed as a small, fast, and accurate traffic sign detector. First, a series of online data augmentation strategies is designed in the preprocessing module for model training. Second, a hybrid channel‐and‐spatial attention module (CSAM) is integrated into the backbone to strengthen feature extraction. Third, a channel attention module (CAM) is applied in the detection head for more efficient feature fusion. To validate the approach, extensive experiments are conducted on the Tsinghua‐Tencent 100K dataset. The proposed method achieves state‐of‐the‐art performance with only negligible increases in model parameters and computational overhead. Specifically, the mean average precision (mAP), parameters, and FLOPs are 85.8%, 7.13 M, and 16.1 G, respectively.
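
The abstract describes a hybrid channel‐and‐spatial attention block (CSAM) inserted into the YOLOv5s backbone. The sketch below is a minimal PyTorch illustration of such a block, assuming a CBAM‐like design; the reduction ratio of 16, the 7×7 spatial kernel, and the class names are illustrative assumptions rather than the authors' actual CSAM implementation.

```python
# Minimal sketch of a hybrid channel-and-spatial attention block (CBAM-style).
# The paper's exact CSAM design is not given in the abstract, so the reduction
# ratio, kernel size, and module names below are assumptions for illustration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average-pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max-pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w                         # re-weight channels


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                         # re-weight spatial positions


class HybridAttention(nn.Module):
    """Channel attention followed by spatial attention on a backbone feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.sa(self.ca(x))


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)       # e.g. an intermediate YOLOv5s feature map
    print(HybridAttention(256)(feat).shape)  # torch.Size([1, 256, 40, 40])
```

In this kind of design, the channel branch decides which feature maps matter for small signs, while the spatial branch highlights where they appear; a channel‐only variant of the same idea could serve as the CAM used in the detection head.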
