Abstract
Developing self-driving cars is an important foundation for intelligent transportation systems built on advanced telecommunications infrastructure such as 6G networks. This paper addresses two main problems, lane detection and obstacle detection (road signs, traffic lights, vehicles ahead, etc.), through image processing algorithms. To overcome the low detection accuracy of traditional image processing methods and the poor real-time performance of deep learning based methods, lane and obstacle detection algorithms for smart traffic are proposed. For lane detection, we first correct the distortion introduced by the camera and apply a thresholding algorithm. A top-down (bird's-eye) view is then obtained by extracting a region of interest and applying an inverse perspective transform. Finally, we apply the sliding window method to determine the pixels belonging to each lane and fit them with a quadratic equation. For obstacle detection, the YOLO algorithm is adopted because it can identify many types of obstacles. We evaluate the proposed algorithms on real-time videos and the TuSimple dataset. The simulation results show that the proposed lane detection achieves an accuracy of 97.91% with a processing time of 0.0021 seconds, and the proposed obstacle detection achieves an accuracy of 81.90% with a processing time of 0.022 seconds. Compared with the traditional image processing method, the proposed method achieves an average accuracy of 89.90% with an execution time of 0.024 seconds and exhibits strong noise immunity. The results demonstrate that the proposed algorithm can be deployed in self-driving car systems at the high processing speeds required by advanced networks.
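The lane detection pipeline summarized above (distortion correction, thresholding, bird's-eye-view warping, sliding-window pixel search, and quadratic fitting) can be illustrated with a short sketch. The snippet below is a minimal, simplified version using OpenCV and NumPy; the camera matrix, distortion coefficients, perspective source/destination points, threshold value, and window parameters are placeholder assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

def detect_lanes(frame, camera_matrix, dist_coeffs, src_pts, dst_pts):
    """Illustrative lane detection: undistort -> threshold -> bird's-eye view
    -> sliding-window pixel search -> quadratic fit per lane."""
    # 1. Correct camera lens distortion (camera_matrix/dist_coeffs assumed known).
    undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)

    # 2. Threshold to a binary image that highlights lane markings.
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

    # 3. Inverse perspective transform to a top-down (bird's-eye) view.
    h, w = binary.shape
    M = cv2.getPerspectiveTransform(src_pts, dst_pts)
    warped = cv2.warpPerspective(binary, M, (w, h))

    # 4. Sliding-window search: start from histogram peaks in the lower half
    #    of the warped image and collect lane pixels window by window.
    histogram = np.sum(warped[h // 2:, :], axis=0)
    left_base = np.argmax(histogram[:w // 2])
    right_base = np.argmax(histogram[w // 2:]) + w // 2
    nonzero_y, nonzero_x = warped.nonzero()

    n_windows, margin, min_pix = 9, 80, 50   # assumed tuning parameters
    win_h = h // n_windows
    fits = []
    for base_x in (left_base, right_base):
        current_x, lane_idx = base_x, []
        for win in range(n_windows):
            y_low, y_high = h - (win + 1) * win_h, h - win * win_h
            x_low, x_high = current_x - margin, current_x + margin
            good = ((nonzero_y >= y_low) & (nonzero_y < y_high) &
                    (nonzero_x >= x_low) & (nonzero_x < x_high)).nonzero()[0]
            lane_idx.append(good)
            if len(good) > min_pix:
                current_x = int(nonzero_x[good].mean())
        lane_idx = np.concatenate(lane_idx)
        # 5. Fit a quadratic x = a*y^2 + b*y + c to the collected lane pixels.
        fits.append(np.polyfit(nonzero_y[lane_idx], nonzero_x[lane_idx], 2))

    return fits  # [left_fit, right_fit] polynomial coefficients
```

In a full system, a pretrained YOLO detector would be run on the same (or the undistorted) frames in parallel with this lane pipeline to locate road signs, traffic lights, and vehicles ahead.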