Abstract

Traffic Sign Recognition (TSR) is an essential component of Intelligent Transportation Systems (ITS) and intelligent vehicles. TSR systems based on deep learning have grown in popularity in recent years. However, because these models follow a closed-world learning paradigm, they can accurately identify only those traffic sign categories for which samples are easy to collect, and they adapt poorly to the open real world. Furthermore, their sample utilization is insufficient, and the resource consumption of model training may become unbearable as the data scale grows. To address these problems, we propose a novel "knowledge + data" co-driven solution for TSR, the Joint Semantic Representation algorithm (JSR). JSR creates a hybrid feature representation by extracting general and principal visual features from traffic sign images, and it endows the model with reasoning ability for zero-shot TSR based on prior knowledge of traffic sign design standards. The effectiveness of JSR is demonstrated by experiments on four benchmark datasets and two self-built TSR datasets.
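To make the "knowledge + data" idea concrete, the sketch below shows one generic way a zero-shot recognizer can combine learned visual features (data) with class attribute vectors derived from design standards (knowledge): visual features are projected into the attribute space and matched to the nearest class prototype. All names, the attribute schema, and the random stand-in encoder are hypothetical illustrations, not the paper's actual JSR implementation.

```python
# Minimal sketch of zero-shot classification with a joint "knowledge + data"
# representation. Names (extract_visual_features, ATTRIBUTE_PROTOTYPES, W) are
# invented for illustration; the real JSR feature extractors and the attribute
# schema taken from traffic sign design standards will differ.

import numpy as np

# Knowledge side: each class is described by an attribute vector derived from
# design standards (e.g. shape, border colour, pictogram). Invented schema.
ATTRIBUTE_PROTOTYPES = {
    #                        circle  triangle  red_border  blue_fill  arrow
    "no_entry":   np.array([1.0,    0.0,      1.0,        0.0,       0.0]),
    "give_way":   np.array([0.0,    1.0,      1.0,        0.0,       0.0]),
    "turn_left":  np.array([1.0,    0.0,      0.0,        1.0,       1.0]),
}


def extract_visual_features(image: np.ndarray) -> np.ndarray:
    """Data side: stand-in for a learned visual encoder (e.g. a CNN backbone).

    Returns a fixed random vector so the sketch runs; a real system would
    output the general and principal visual features described in the paper.
    """
    rng = np.random.default_rng(0)
    return rng.standard_normal(128)


def zero_shot_classify(image: np.ndarray, W: np.ndarray) -> str:
    """Project visual features into the attribute space with W, then assign
    the class whose attribute prototype has the highest cosine similarity."""
    a = W @ extract_visual_features(image)

    def cos(u: np.ndarray, v: np.ndarray) -> float:
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

    return max(ATTRIBUTE_PROTOTYPES, key=lambda c: cos(a, ATTRIBUTE_PROTOTYPES[c]))


if __name__ == "__main__":
    # Untrained projection and a dummy image, purely to show the call pattern.
    W = np.random.default_rng(1).standard_normal((5, 128))
    dummy_image = np.zeros((64, 64, 3))
    print(zero_shot_classify(dummy_image, W))
```

Because unseen classes only need an attribute prototype, not training images, this matching step is what allows recognition of sign categories never observed during training.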
