Traffic sign detection is a popular research direction in intelligent transportation and has attracted broad attention. However, several key issues still need to be resolved before the related technologies can be applied thoroughly in real-world scenarios, including the feature extraction scheme for traffic sign images, the optimal choice of detection method, and the objective constraints of the detection task. To overcome these difficulties, this paper combines deep learning methods and proposes a lightweight real-time traffic sign detection framework based on YOLO. The framework addresses the latency concern by reducing the computational overhead of the network and facilitates information transfer and sharing across different feature levels. While improving detection efficiency, it preserves a certain degree of generalization and robustness and enhances detection performance under real-world conditions such as scale and illumination changes. The proposed model is tested and evaluated on real road-scene datasets and compared with current mainstream advanced detection models to verify its effectiveness. In addition, by effectively reducing the computational cost, this paper strikes a reasonable balance between detection performance and deployment difficulty, which makes realistic deployment possible on edge devices with limited hardware resources, such as mobile and embedded devices. More broadly, the underlying approach has application potential in technology fields such as artificial intelligence and autonomous driving.
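To give a concrete sense of what deploying a lightweight YOLO-family detector on constrained hardware involves, the following is a minimal inference sketch. It does not reproduce the proposed framework; the ultralytics library, the "yolov8n.pt" nano-sized weights, the image path, and the thresholds are all illustrative assumptions.

```python
# Minimal sketch (assumptions only): running a small YOLO-family detector for
# traffic sign detection on an edge device. This is NOT the proposed framework;
# the weights, image path, and thresholds below are placeholders.
from ultralytics import YOLO

# Load a nano-sized pretrained model; in practice it would be fine-tuned on a
# traffic sign dataset before deployment.
model = YOLO("yolov8n.pt")

# Run inference on a single road-scene image at a modest input resolution to
# keep latency low on constrained hardware.
results = model.predict(source="road_scene.jpg", imgsz=640, conf=0.25)

# Report detected boxes with class indices and confidence scores.
for box in results[0].boxes:
    cls_id = int(box.cls[0])
    score = float(box.conf[0])
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"class={cls_id} conf={score:.2f} box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```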