Abstract

Semantic segmentation is an important step in visual scene understanding for autonomous driving. Recently, Convolutional Neural Network (CNN) based methods have been successfully applied to semantic segmentation using narrow-angle or even wide-angle pinhole cameras. However, in urban traffic environments, autonomous vehicles need a wider field of view to perceive surrounding things and stuff, especially at intersections. This paper describes a CNN-based semantic segmentation solution using a fisheye camera, which covers a large field of view. To handle the complex scenes in fisheye images, an Overlapping Pyramid Pooling (OPP) module is proposed to exploit local, global and pyramid local region context information. Based on the OPP module, a network structure called OPP-net is proposed for semantic segmentation. The network is trained and evaluated on a fisheye image dataset for semantic segmentation, generated from an existing dataset of urban traffic scenes. In addition, zoom augmentation, a novel data augmentation policy specially designed for fisheye images, is proposed to improve the network's generalization performance. Experiments demonstrate the outstanding performance of OPP-net on urban traffic scenes and the effectiveness of zoom augmentation.
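The core idea behind pyramid pooling with overlapping windows — aggregating context from pooling regions at several scales whose windows extend past their stride — can be sketched as follows. This is only an illustrative sketch, not the paper's OPP module: the bin sizes, overlap fraction, averaging choice, and function name are all assumptions made for the example.

```python
import numpy as np

def overlapping_pyramid_pool(feat, bin_sizes=(1, 2, 4), overlap=0.5):
    """Illustrative sketch of pyramid pooling with overlapping windows.

    Average-pools a 2-D feature map into one grid per pyramid level.
    Each pooling window is (1 + overlap) times its stride, so adjacent
    windows at the same level overlap and share boundary context.
    All parameters here are hypothetical, not the paper's exact design.
    """
    h, w = feat.shape
    levels = []
    for bins in bin_sizes:
        stride_h, stride_w = h / bins, w / bins
        win_h, win_w = stride_h * (1 + overlap), stride_w * (1 + overlap)
        level = np.empty((bins, bins))
        for i in range(bins):
            for j in range(bins):
                # Center the enlarged window on the bin, clipped to the map.
                top = int(max(0.0, i * stride_h - (win_h - stride_h) / 2))
                left = int(max(0.0, j * stride_w - (win_w - stride_w) / 2))
                bottom = int(min(float(h), top + win_h))
                right = int(min(float(w), left + win_w))
                level[i, j] = feat[top:bottom, left:right].mean()
        levels.append(level)
    return levels

# Pool an 8x8 toy feature map into global (1x1), 2x2 and 4x4 context grids.
pooled = overlapping_pyramid_pool(np.arange(64.0).reshape(8, 8))
```

The 1x1 level captures global context, while the finer levels capture pyramid local regions; in a segmentation network such pooled maps would typically be upsampled and concatenated with the backbone features.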
