Abstract

A panoptic driving perception system generates autonomous driving directives solely from the images captured by a camera. For autonomous driving, the network must make judgments quickly, with high precision, and in real time. This paper proposes an autonomous driving system based on You Only Look Once for Panoptic Driving Perception (YOLOP). The network comprises a shared encoder and four task-specific decoder heads that handle traffic object detection, drivable area segmentation, lane line segmentation, and traffic sign detection and recognition. The proposed network produces state-of-the-art results on the Berkeley DeepDrive (BDD100K) dataset, as well as on the German Traffic Sign Recognition Benchmark (GTSRB) and German Traffic Sign Detection Benchmark (GTSDB) datasets for traffic signs. The model is then deployed on a Raspberry Pi 4 autonomous bot, which can make autonomous decisions such as turning left or right, speeding up or slowing down according to a speed-limit traffic sign, and halting when it encounters an obstacle. The network achieves a mean average precision (mAP) at 0.5 IoU of 76.5% for traffic object detection and recognition, an Intersection over Union (IoU) of 91.5% for drivable area segmentation, an accuracy of 70.5% for lane detection, a mAP at 0.5 IoU of 99.4% for traffic sign detection, and an accuracy of 94% for traffic sign recognition.
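
As a rough illustration of the shared-encoder, multi-head design described above, the following PyTorch sketch shows one way such a network could be wired. The layer sizes, head structures, and class counts here are illustrative assumptions, not the actual YOLOP configuration, whose encoder is a CSP-Darknet-style backbone with feature-pyramid necks and anchor-based detection heads.

import torch
import torch.nn as nn

class PanopticDrivingNet(nn.Module):
    """Shared encoder feeding four task-specific decoder heads.

    All layer widths and head designs are illustrative placeholders;
    class counts are assumptions (e.g. 43 classes as in GTSRB).
    """
    def __init__(self, num_obj_classes=13, num_sign_classes=43):
        super().__init__()
        # Shared convolutional encoder (stand-in for YOLOP's backbone),
        # downsampling the input by a factor of 8.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # One decoder head per task; each consumes the shared feature map.
        self.det_head = nn.Conv2d(128, 5 + num_obj_classes, 1)    # traffic object detection
        self.drivable_head = self._seg_head(2)                    # drivable area segmentation
        self.lane_head = self._seg_head(2)                        # lane line segmentation
        self.sign_head = nn.Conv2d(128, 5 + num_sign_classes, 1)  # traffic sign detection/recognition

    @staticmethod
    def _seg_head(num_classes):
        # Upsample back to input resolution, then predict per-pixel classes.
        return nn.Sequential(
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(128, num_classes, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)  # one encoder pass shared by all four tasks
        return {
            "objects": self.det_head(feats),
            "drivable": self.drivable_head(feats),
            "lanes": self.lane_head(feats),
            "signs": self.sign_head(feats),
        }

if __name__ == "__main__":
    model = PanopticDrivingNet()
    outputs = model(torch.randn(1, 3, 256, 256))
    for task, tensor in outputs.items():
        print(task, tuple(tensor.shape))

The key property this sketch captures is that a single encoder pass is amortized across all four tasks, which is what makes real-time inference on modest hardware such as a Raspberry Pi 4 plausible; the real detection heads predict anchor-based boxes at multiple scales rather than a single 1x1 convolution.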
