Abstract

As artificial intelligence continues to change everyday life in profound ways, the desire to endow vehicles with the ability to drive autonomously has grown, and autonomous driving has become a popular research field. The autonomous driving task can be divided into three general procedures: perception, planning, and locomotion, of which perception is the first and foremost. Among perception methods, the most prevalent is semantic segmentation, which annotates and predicts objects at the pixel level, meaning that nearly every pixel must be classified into a certain category. Although this method provides sufficient accuracy, it imposes a considerable computational burden, so implementing real-time road semantic segmentation on autonomous vehicles remains costly. In this paper, an adapted model built upon the Poly-YOLO baseline is proposed; Poly-YOLO is a well-developed object detection algorithm that produces bounding polygons enclosing target objects, forming polygon masks similar to those of semantic segmentation. This paper endeavors to greatly enhance the model's accuracy in detecting targets of various sizes and to fine-tune the model to generate tighter enclosing polygons. The adapted model achieves a substantial leap in performance over the baseline Poly-YOLO model.
