Abstract
Understanding road features such as the position and color of lane markings in live video captured from a moving vehicle is essential for building video-based car navigation systems. In this article, the authors present a framework to detect road features in two difficult situations: (a) ambiguous road surface conditions (i.e., damaged roads and lane markings occluded by other vehicles on the road) and (b) poor illumination conditions (e.g., backlight, during sunset). Furthermore, to determine which lane the driver is currently in, the authors present a Bayesian network (BN) model, which is necessary to support more sophisticated navigation services for drivers, such as recommending a lane change at an appropriate time before turning left or right at the next intersection. In the proposed BN approach, evidence from (1) a computer vision engine (e.g., lane-color detection) and (2) a navigation database (e.g., the total number of lanes) is fused to decide the lane number more accurately. Extensive simulation results indicated that the proposed methods are both robust and effective in detecting road features for a video-based car navigation system.
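To make the fusion idea concrete, the sketch below shows how evidence from a vision engine and a navigation database could be combined with Bayes' rule to produce a posterior over lane indices. This is a minimal illustration under assumed inputs: the function name, the uniform prior, and the likelihood values are hypothetical and are not the authors' actual BN structure or parameters.

```python
# Minimal sketch of Bayesian evidence fusion for lane-number estimation.
# All names, probability values, and the evidence model are illustrative
# assumptions, not the BN described in the article.

def fuse_lane_evidence(num_lanes, vision_likelihoods):
    """Return a posterior distribution over lane indices 1..num_lanes.

    num_lanes          -- total number of lanes, from the navigation database
    vision_likelihoods -- P(observed lane-marking evidence | lane == k),
                          one value per lane, from the vision engine
    """
    # Prior: with no other information, assume each lane is equally likely.
    prior = [1.0 / num_lanes] * num_lanes

    # Bayes' rule: posterior(k) is proportional to prior(k) * likelihood(k).
    unnormalized = [p * l for p, l in zip(prior, vision_likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]


if __name__ == "__main__":
    # Example: a 3-lane road where the detected marking colors are (under
    # this assumed model) most consistent with the leftmost lane.
    posterior = fuse_lane_evidence(3, [0.7, 0.2, 0.1])
    print(posterior)  # lane 1 receives the highest posterior probability
```

In practice, a BN of the kind described in the abstract would also model dependencies between evidence sources and could carry the posterior forward over time as new video frames arrive; the single-step fusion above is only meant to show the direction of inference.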