Abstract

This paper addresses object detection and scene perception for connected and autonomous vehicles. Detecting road objects with high accuracy and fast inference is challenging for safe autonomous driving, as false positives arising from incorrect localization can lead to fatal outcomes. The paper proposes a convolutional neural network (CNN) for image recognition that enhances intelligent adaptive behavior in autonomous vehicles by correctly classifying, detecting, and segmenting spatially distributed objects in the driving environment. By appending a probabilistic, transformer-aided attention mechanism to the CNN, the network learns to focus on the most significant regions of an image. The proposed approach is analyzed for detection efficiency and accuracy in distinguishing different objects so that appropriate driving decisions can be made. The method is validated on the publicly available Berkeley DeepDrive (BDD) dataset and achieves accuracy comparable to other state-of-the-art deep learning algorithms for making driving decisions based on real-time assessment of the temporal states encountered while navigating the driving environment. Model performance is evaluated using mean average precision (mAP) and the speed-accuracy trade-off.
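The abstract's core architectural idea is a CNN whose spatial features are refined by a transformer-based attention mechanism so that the most informative image regions dominate the prediction. The sketch below is one plausible, minimal reading of that idea; it is not the paper's actual model, and all module names, layer sizes, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: CNN backbone + transformer-style self-attention over
# spatial features, one possible interpretation of the abstract's
# "CNN with probabilistic attention mechanism aided with transformers".
# Names and dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class AttentionAugmentedCNN(nn.Module):
    def __init__(self, num_classes: int = 10, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Small convolutional backbone producing a spatial feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, embed_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder layer applied to the flattened spatial locations,
        # so self-attention can weight the most significant image regions.
        self.encoder = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        # Simple classification head over the attention-pooled features.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)              # (B, C, H, W)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, C) spatial tokens
        attended = self.encoder(tokens)            # self-attention across regions
        pooled = attended.mean(dim=1)              # aggregate attended regions
        return self.head(pooled)                   # per-image class scores


if __name__ == "__main__":
    model = AttentionAugmentedCNN()
    logits = model(torch.randn(2, 3, 256, 256))    # two dummy road-scene images
    print(logits.shape)                            # torch.Size([2, 10])
```

In practice, the pooled classification head here would be replaced by detection and segmentation heads, and the evaluation would report mAP over the BDD classes along with inference latency to characterize the speed-accuracy trade-off.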
