We developed a two-stage traffic sign recognition system to enhance safety and help prevent traffic incidents involving self-driving cars. In the first stage, YOLOv7 was employed as the detection model for identifying 31 types of traffic signs. Input images were set to 640 × 640 pixels to balance speed and accuracy, with high-definition images split into overlapping sub-images of the same size for training. The YOLOv7 model achieved a training accuracy of 99.2 % and demonstrated robustness across varied scenes, reaching a testing accuracy of 99 % on both YouTube and self-recorded driving videos. In the second stage, the extracted road sign images underwent rectification before being processed with OCR tools such as EasyOCR and PaddleOCR. Post-processing steps resolved likely character confusions, particularly in city and town names. After extensive testing, the system achieved recognition rates of 97.5 % for alphabetic characters and 99.4 % for Chinese characters. This system strengthens the ability of self-driving cars to detect and interpret traffic signs, thereby contributing to safer road travel.
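The sketch below outlines the two-stage flow described above in simplified form. The stage-1 detector is abstracted behind a hypothetical detect_signs() wrapper (the official YOLOv7 repository does not expose a stable pip API), rectification is reduced to a plain resize, and the language codes passed to EasyOCR are assumptions for a Traditional Chinese plus English setting; only the EasyOCR calls reflect a real library interface.

```python
# Minimal sketch of the two-stage pipeline, under the assumptions stated above.
import cv2
import easyocr

# Stage-2 OCR reader: Traditional Chinese + English, matching the
# city/town-name use case described in the abstract (assumed codes).
reader = easyocr.Reader(['ch_tra', 'en'])

def detect_signs(frame):
    """Hypothetical stage-1 wrapper around a trained YOLOv7 model.

    Expected to return (x1, y1, x2, y2) boxes for the 31 supported
    sign classes on a 640x640 (or tiled) input frame.
    """
    raise NotImplementedError("plug in YOLOv7 inference here")

def read_sign_text(frame):
    """Crop each detected sign and run OCR on it."""
    results = []
    for (x1, y1, x2, y2) in detect_signs(frame):
        crop = frame[y1:y2, x1:x2]
        # The paper applies a rectification step before OCR; a resize
        # stands in for it in this sketch.
        crop = cv2.resize(crop, (256, 256))
        for _bbox, text, conf in reader.readtext(crop):
            results.append((text, conf))
    return results
```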