This study develops a vision-based technique for recognizing the taillight signals of preceding vehicles, aimed at improving real-time decision making in autonomous driving by analyzing the behavior of vehicles ahead. The approach uses a 3D convolutional neural network (C3D) with feature simplification to classify taillight image sequences into eight distinct states. The central problem is the variability of environmental conditions, which degrades the performance of vision-based systems; our objective is therefore to improve the accuracy and generalizability of taillight signal recognition across these conditions. The C3D model analyzes video sequences, capturing both spatial and temporal features. Experimental results show a significant improvement in accuracy (85.19%) and generalizability, enabling precise interpretation of preceding-vehicle maneuvers. The proposed technique enhances autonomous vehicle navigation and safety by providing reliable taillight state recognition, with room for further improvement under nighttime and adverse weather conditions. In addition, the system reduces signal-processing latency, supporting faster and more reliable decision making on edge devices installed in the vehicles.
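The core idea behind a C3D model is that convolving over the time axis as well as the two spatial axes lets a single filter respond to temporal patterns (such as a blinking indicator) in addition to the spatial layout of the lamps. The following is a minimal illustrative sketch of that idea in NumPy, not the paper's implementation; the state labels are an assumption, based on the common convention that on/off combinations of the brake, left-turn, and right-turn lamps yield 2³ = 8 classes.

```python
import numpy as np

# Hypothetical state set: on/off combinations of brake (B), left (L)
# and right (R) lamps give 2**3 = 8 classes. Labels are illustrative,
# not taken from the paper.
STATES = [f"{b}{l}{r}" for b in "BO" for l in "LO" for r in "RO"]

def conv3d(clip, kernel):
    """Naive 'valid' 3D convolution over a (T, H, W) grayscale clip.

    Sliding the kernel along T as well as H and W is what lets the
    filter capture temporal changes between frames in addition to
    spatial structure -- the principle a C3D network builds on.
    """
    kt, kh, kw = kernel.shape
    T, H, W = clip.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[t, y, x] = np.sum(
                    clip[t:t + kt, y:y + kh, x:x + kw] * kernel)
    return out

rng = np.random.default_rng(0)
clip = rng.random((16, 32, 32))   # 16-frame clip of 32x32 crops
kernel = rng.random((3, 3, 3))    # one 3x3x3 spatio-temporal filter
feat = conv3d(clip, kernel)
print(feat.shape)                 # (14, 30, 30)
print(len(STATES))                # 8
```

In a real C3D network, many such filters are stacked with pooling layers and a final classification head over the eight states; deep-learning frameworks implement the convolution itself far more efficiently than this didactic loop.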