Abstract

Shadowed versus normally illuminated regions, and road versus non-road areas, form two pairs of contrasting, symmetrical elements in a traffic scene. To achieve accurate road detection, interference caused by uneven illumination, such as shadows, must be removed. This paper proposes a road detection algorithm based on learning and an illumination-independent image to solve the following problems: first, most road detection methods are sensitive to illumination variation; second, with traditional road detection methods based on illumination invariance, it is difficult to determine the calibration angle of the camera axis, and the sampling of road samples can be distorted. The proposed method contains three stages: establishing a classifier, capturing an illumination-independent image online, and detecting the road. In the first stage, a support vector machine (SVM) classifier for road blocks is trained with a multi-feature fusion method. In the second stage, the road region of interest is obtained using a cascaded Hough transform parameterized by a parallel coordinate system. Five road blocks are selected by the SVM classifier, and the RGB (red, green, blue) space of the combined road blocks is converted to a geometric-mean log-chromaticity space. The camera-axis calibration angle for each frame is then determined according to the Shannon entropy, yielding the illumination-independent image of that frame. In the third stage, road sample points are extracted by random sampling, and a confidence-interval classifier is established to separate the road from its background. Experiments use public datasets and video sequences that record roads of Chinese cities, suburbs, and schools in different traffic scenes.
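The online stage described above (geometric-mean log-chromaticity conversion, then choosing the camera-axis calibration angle by Shannon entropy) follows the well-known illumination-invariant image idea. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the function names, the 1-degree search grid, and the 64-bin histogram are assumptions for illustration:

```python
import numpy as np

def illumination_invariant(rgb, theta_deg):
    """Project geometric-mean log-chromaticity onto direction theta (degrees).
    rgb: HxWx3 float array with positive values."""
    eps = 1e-6
    rgb = np.clip(rgb.astype(np.float64), eps, None)
    gm = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])   # geometric mean per pixel
    chi = np.log(rgb / gm[..., None])                        # log-chromaticity, sums to 0
    # Project the 3-D log-chromaticity onto the 2-D plane orthogonal to (1, 1, 1).
    U = np.array([[1 / np.sqrt(2), -1 / np.sqrt(2), 0.0],
                  [1 / np.sqrt(6),  1 / np.sqrt(6), -2 / np.sqrt(6)]])
    chi2 = chi @ U.T
    t = np.deg2rad(theta_deg)
    # 1-D grayscale image: projection onto the candidate invariant direction.
    return chi2[..., 0] * np.cos(t) + chi2[..., 1] * np.sin(t)

def shannon_entropy(gray, bins=64):
    """Shannon entropy of a grayscale image's histogram (bin count assumed)."""
    hist, _ = np.histogram(gray, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def best_angle(road_blocks_rgb):
    """Search 0..179 degrees for the projection with minimum entropy;
    road_blocks_rgb would be the combined SVM-selected road blocks."""
    angles = np.arange(180)
    ents = [shannon_entropy(illumination_invariant(road_blocks_rgb, a))
            for a in angles]
    return int(angles[int(np.argmin(ents))])
```

In this formulation, shadow edges (an illumination change, not a material change) collapse in the projection direction that minimizes entropy, which is why the resulting grayscale image is largely shadow-free.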
The proposed method is compared with other established video-based road detection methods, and the results show that it achieves detection results of high quality and robustness. Meanwhile, the whole detection system meets the real-time processing requirement.
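The final detection stage in the abstract (random sampling of road points, then a confidence-interval classifier) can be sketched as follows. This is an assumed minimal realization: the interval half-width `k`, the sample count, and both function names are hypothetical, since the paper's exact parameters are not given in this excerpt:

```python
import numpy as np

def random_road_samples(region_mask, n=50, rng=None):
    """Randomly sample n (row, col) points from a candidate road region."""
    rng = np.random.default_rng(rng)
    rows, cols = np.nonzero(region_mask)
    idx = rng.choice(len(rows), size=min(n, len(rows)), replace=False)
    return list(zip(rows[idx], cols[idx]))

def road_confidence_classifier(invariant_img, sample_points, k=2.0):
    """Binary road mask: a pixel is road when its illumination-invariant
    value lies within mean +/- k*std of the sampled road points
    (k is an assumed interval width)."""
    vals = np.array([invariant_img[r, c] for r, c in sample_points])
    mu, sigma = vals.mean(), vals.std()
    return np.abs(invariant_img - mu) <= k * sigma
```

Because the interval is estimated per frame from fresh samples, the classifier adapts to the road surface actually visible in that frame rather than relying on a fixed global threshold.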

Highlights

  • Road detection based on vision is an important and challenging task in advanced driver assistance systems [1]

  • This paper has three main contributions: (1) it proposes a learning-based illumination-invariance method, which mitigates shadow interference, the major disturbance in road images, and yields more accurate and robust road detection results

  • We find that road detection results based on illumination-invariance theory (II-RD) and MII-RD have large errors, while the results of OLII-RD are more accurate and robust

Summary

Introduction

Road detection based on vision is an important and challenging task in advanced driver assistance systems [1]. Vision data provide abundant information about the driving scene [2], support explicit route planning, and enable precise obstacle detection and road-profile estimation. Vision-based systems have huge potential in complex road detection scenes [3]. Road detection has been extensively studied as a key part of automated driving systems, especially vision-based road detection [4,5,6,7]. In [8], a vanishing-point detection algorithm was applied to road detection: the Gabor filter was used to obtain per-pixel texture orientation, and a vanishing-point position estimation algorithm with an adaptive soft-voting scheme, together with a road-region segmentation algorithm under a vanishing-point constraint, were proposed. Munajat et al. [9]
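The Gabor-filter texture-orientation step cited above can be illustrated with a small NumPy sketch: build a bank of oriented Gabor kernels and take, per pixel, the orientation with the strongest response. This is a generic illustration of the cited technique, not the referenced paper's code; kernel size, wavelength, and all parameter values are assumptions:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    """Real Gabor kernel at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def texture_orientation(img, n_orient=8, ksize=9, sigma=2.0, lam=4.0):
    """Per-pixel dominant orientation = argmax of Gabor filter-bank responses."""
    thetas = np.pi * np.arange(n_orient) / n_orient
    pad = ksize // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    # All ksize x ksize windows of the padded image (shape H x W x ksize x ksize).
    windows = np.lib.stride_tricks.sliding_window_view(padded, (ksize, ksize))
    responses = [np.abs((windows * gabor_kernel(ksize, sigma, th, lam))
                        .sum(axis=(-1, -2)))                 # direct correlation
                 for th in thetas]
    return thetas[np.argmax(np.stack(responses), axis=0)]
```

In vanishing-point methods of this family, these per-pixel orientations are then used as votes: pixels cast rays along their texture direction, and the point receiving the most (soft-weighted) votes is taken as the vanishing point.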

