Abstract

Lane detection is essential for the autonomous navigation of mobile robots. This paper proposes a method that uses both color and texture features to extract lane regions from images. First, an improved region-growing algorithm based on color features segments the image into roughly approximated lane regions. However, because of variations in scene illumination and shadows, some lane regions may be missed. To improve detection accuracy, texture features are then computed and spatial adjacency is taken into account, allowing the lost lane regions to be recovered. To meet the needs of practical applications, a video processing platform for lane detection is developed; it detects lanes from dynamic video sequences accurately and displays the results in real time. Experimental results show that the proposed algorithm meets the real-time requirements of robot applications and is robust to varying environmental conditions.
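As a rough illustration of the first stage described above, the sketch below shows one way a color-based region-growing segmentation could be implemented. It is not the paper's exact algorithm: the bottom-center seed, the Euclidean color distance to the running region mean, and the threshold value are all illustrative assumptions.

```python
# A minimal sketch of color-based region growing for rough lane/road
# segmentation. Seed placement, color distance, and threshold are
# illustrative assumptions, not the paper's exact procedure.
import numpy as np

def region_grow(image, seed, threshold=20.0):
    """Grow a region from `seed` (row, col), accepting 4-connected
    neighbors whose color is close to the running region mean."""
    h, w, _ = image.shape
    img = image.astype(np.float64)
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    mask[seed] = True
    region_sum = img[seed].copy()
    count = 1
    while stack:
        r, c = stack.pop()
        mean = region_sum / count
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                # Accept the neighbor if its color is near the region mean.
                if np.linalg.norm(img[nr, nc] - mean) < threshold:
                    mask[nr, nc] = True
                    region_sum += img[nr, nc]
                    count += 1
                    stack.append((nr, nc))
    return mask

if __name__ == "__main__":
    # Synthetic test frame: darker "road" strip on a brighter background.
    frame = np.full((120, 160, 3), 200, dtype=np.uint8)
    frame[60:, 40:120] = 80                     # road-like region
    seed = (110, 80)                            # assumed bottom-center seed
    road_mask = region_grow(frame, seed, threshold=25.0)
    print("road pixels found:", int(road_mask.sum()))
```

In practice this rough mask would then be refined with the texture features and spatial-adjacency reasoning mentioned in the abstract to recover regions lost to shadows or illumination changes.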
