Abstract

Recent advancements in artificial intelligence (AI) have greatly improved the object detection capabilities of autonomous vehicles, especially through convolutional neural networks (CNNs). However, achieving high accuracy and high speed simultaneously in vehicular environments remains a challenge. This paper therefore proposes a hybrid approach that combines the strengths of two state-of-the-art object detection models: You Only Look Once (YOLO) and Faster Region-based CNN (Faster R-CNN). The proposed hybrid approach couples the detection and bounding-box selection capabilities of YOLO with the region of interest (RoI) pooling from Faster R-CNN, resulting in improved segmentation and classification accuracy. Furthermore, we omit the Region Proposal Network (RPN) from the Faster R-CNN architecture to reduce processing time. The hybrid model is trained on a local dataset of 10,000 labeled traffic images collected during driving scenarios, further enhancing its accuracy. The results demonstrate that our proposed hybrid approach outperforms existing state-of-the-art models, providing both high accuracy and practical real-time object detection for autonomous vehicles. The proposed hybrid model achieves a significant increase in accuracy, with improvements ranging from 5 to 7 percent over standalone YOLO models. The findings of this research have practical implications for the integration of AI technologies in autonomous driving systems.
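The key architectural step described above is RoI pooling applied to YOLO's detections in place of RPN proposals. The following is a minimal NumPy sketch of that pooling step, not the paper's implementation: the function name, box format, and shapes are illustrative assumptions, and in the proposed pipeline the `boxes` argument would be supplied by the YOLO detector.

```python
import numpy as np

def roi_pool(feature_map, boxes, output_size=2):
    """Max-pool each region of interest to a fixed output_size x output_size grid.

    feature_map: (C, H, W) array of CNN features.
    boxes: iterable of (x1, y1, x2, y2) integer boxes in feature-map coordinates.
    In the hybrid approach sketched here, these boxes would come from YOLO
    detections rather than from a Region Proposal Network.
    """
    C, H, W = feature_map.shape
    pooled = []
    for x1, y1, x2, y2 in boxes:
        region = feature_map[:, y1:y2, x1:x2]
        _, h, w = region.shape
        out = np.zeros((C, output_size, output_size))
        # Split the region into an output_size x output_size grid of cells
        # and take the channel-wise maximum inside each cell.
        ys = np.linspace(0, h, output_size + 1).astype(int)
        xs = np.linspace(0, w, output_size + 1).astype(int)
        for i in range(output_size):
            for j in range(output_size):
                cell = region[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                if cell.size:
                    out[:, i, j] = cell.max(axis=(1, 2))
        pooled.append(out)
    # (num_boxes, C, output_size, output_size): fixed-size features per box,
    # ready for a downstream classification head.
    return np.stack(pooled)
```

Because every box is reduced to the same fixed grid, variable-sized YOLO detections can all be fed to one shared classification head, which is what lets the hybrid model drop the RPN without changing the Faster R-CNN head.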
