Abstract
Environment perception is one of the major issues in autonomous driving systems. In particular, effective and robust drivable road region detection remains a challenge for autonomous vehicles on multi-lane roads, at intersections and in unstructured road environments. In this paper, a computer vision and neural network-based drivable road region detection approach is proposed for fixed-route autonomous vehicles (e.g., shuttles, buses and other vehicles operating on fixed routes), using a vehicle-mounted camera, a route map and real-time vehicle location. The key idea of the proposed approach is to fuse an image with its corresponding local route map to obtain a map-fusion image (MFI), in which the image and the route map complement each other. The image provides rich features in well-marked road regions, while the local route map provides critical heuristics that enable robust drivable road region detection in areas without clear lane markings or borders. A Convolutional Neural Network (CNN)-based model, FCN-VGG16, is utilized to extract the drivable road region from the fused MFI. The proposed approach is validated using real-world driving videos captured by an industrial camera mounted on a testing vehicle. Experiments demonstrate that the proposed approach outperforms the conventional approach that uses non-fused images in terms of detection accuracy and robustness, and that it remains robust under adverse illumination conditions and variable pavement appearance, as well as against projection and map-fusion errors.
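To make the fusion step concrete, below is a minimal, hypothetical sketch of constructing an MFI: the local route map (here assumed to be a polyline/polygon of vehicle-centric ground-plane points) is projected into the image via a ground-plane homography obtained from camera calibration, rasterized, and blended with the camera frame. The function name, input formats, the homography H and the overlay-blending scheme are assumptions for illustration; the paper's exact fusion method may differ.

```python
import cv2
import numpy as np

def make_mfi(image, route_xy, H, alpha=0.5):
    """Fuse a camera frame with its local route map into a map-fusion image (MFI).

    image    : HxWx3 BGR camera frame
    route_xy : Nx2 array of local route-map points in vehicle-centric ground
               coordinates (metres) -- hypothetical input format
    H        : 3x3 ground-plane-to-image homography, assumed known from
               intrinsic/extrinsic calibration
    alpha    : blending weight of the route overlay
    """
    # Project route points from the ground plane into the image plane.
    pts = cv2.perspectiveTransform(
        route_xy.reshape(-1, 1, 2).astype(np.float32), H)
    pts = pts.reshape(-1, 2).astype(np.int32)

    # Rasterize the projected route as a filled polygon overlay.
    overlay = image.copy()
    cv2.fillPoly(overlay, [pts], color=(0, 255, 0))

    # Blend overlay and frame so the network sees both appearance and map cues.
    return cv2.addWeighted(overlay, alpha, image, 1.0 - alpha, 0.0)
```

Blending (rather than replacing) pixels preserves the image's appearance cues where markings are rich, while the overlaid route acts as a spatial prior in regions with weak or missing markings.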
Highlights
Environment perception is a critical technical issue for autonomous vehicles, and significant progress has been made during the past decade.
2) Map fusion: the corresponding local route map of the processed image is extracted from a global route map with the assistance of vehicle location data, and it is fused with the image to generate a map-fusion image (MFI).
3) Drivable road region detection: a Fully Convolutional Network (FCN)-based deep neural network (FCN-VGG16) is utilized to detect the drivable road region from the MFIs (a minimal model sketch follows this list).
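The detection network named above, FCN-VGG16, follows the fully convolutional adaptation of VGG16 introduced by Long et al. Below is a minimal FCN-32s-style sketch in PyTorch; the head configuration, the upsampling scheme (the paper may use a skip-connected FCN-8s variant) and the two-class setup are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class FCNVGG16(nn.Module):
    """FCN-32s-style segmentation head on a VGG16 backbone (illustrative sketch)."""

    def __init__(self, num_classes=2):  # assumed: drivable road vs. background
        super().__init__()
        # VGG16 convolutional layers; output is a H/32 x W/32 feature map.
        self.backbone = vgg16(weights="IMAGENET1K_V1").features
        # Fully convolutional classifier head, following the original FCN recipe.
        self.head = nn.Sequential(
            nn.Conv2d(512, 4096, kernel_size=7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(4096, 4096, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(4096, num_classes, kernel_size=1),
        )
        # Learned 32x upsampling back to the input resolution
        # (assumes input height/width are multiples of 32).
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=64, stride=32, padding=16)

    def forward(self, x):
        h = self.backbone(x)     # coarse features at 1/32 resolution
        h = self.head(h)         # per-location class scores
        return self.upsample(h)  # per-pixel class scores at input size
```

For an MFI tensor of shape (1, 3, 384, 768), the model returns scores of shape (1, 2, 384, 768), which can be argmax-ed over the class dimension to obtain a drivable/non-drivable mask.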
Summary
Environment perception is a critical technical issue for autonomous vehicles, and significant progress has been made during the past decade. Although LiDAR has been applied to perception tasks in several autonomous vehicle systems and has demonstrated its effectiveness in experimental tests [2,3], its application in drivable road detection is still limited by disadvantages such as high cost, low resolution and lack of texture information. Real-world autonomous driving scenarios pose significant challenges to camera-based perception systems, due to the following factors: (1) unstructured road environments: road markings and lane borders are not always available, and existing markings/borders may be too vague to identify; (2) variable illumination conditions: images may contain shadows and other undesirable illumination conditions; (3) road curvature: the camera's field-of-view may not capture the entire region of interest on curved road segments; (4) non-uniform pavement appearance and occlusions: the road pavement in the image may vary in texture and color, and objects in the camera's field-of-view may cause occlusions.