Abstract

The autonomous driving market has experienced rapid growth in recent years. Autonomous driving encompasses a range of systems, from lane-keeping assistance to sensor-based obstacle recognition and avoidance. The sensors used in autonomous driving systems include infrared detectors, lidar, ultrasonic sensors, and cameras; among these, cameras are widely used. This paper proposes a method for stable lane detection from images captured by camera sensors in diverse environments. First, the system applies a bilateral filter and multiscale retinex (MSR) with experimentally optimized parameters to suppress image noise while enhancing contrast. The Canny edge detector is then employed to extract the edges of lane candidates, and the Hough transform is used to fit straight lines to the lane-candidate images. Next, a proposed restriction method selects, from the candidate lines, only the two lane boundaries between which the vehicle is currently driving. Furthermore, the lane position information from the previous frame is combined with the lane information from the current frame to correct the current lane position, and a Kalman filter is used to predict the lane position in the next frame. The proposed lane-detection method was evaluated in various scenarios, including rainy conditions, low-light nighttime environments with minimal street lighting, scenes containing interfering guide markings within the lane area, and scenes with significant noise caused by water droplets on the camera. Both qualitative and quantitative experimental results demonstrate that the proposed method effectively suppresses noise and accurately detects the two active lane boundaries during driving.
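The following is a minimal sketch of the kind of pipeline the abstract describes, assuming OpenCV and NumPy. The MSR implementation, retinex scales, Canny thresholds, Hough parameters, and Kalman filter setup are illustrative placeholders, not the experimentally optimized values from the paper, and the restriction step and previous/current-frame fusion are omitted.

```python
import cv2
import numpy as np


def multiscale_retinex(gray, sigmas=(15, 80, 250)):
    """Contrast enhancement via multiscale retinex (illustrative scales)."""
    img = gray.astype(np.float32) + 1.0            # avoid log(0)
    log_img = np.log(img)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blur = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += log_img - np.log(blur)              # log ratio at each scale
    msr /= len(sigmas)
    return cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)


def detect_candidate_lines(frame_bgr):
    """Bilateral filter + MSR -> Canny edges -> probabilistic Hough lines."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.bilateralFilter(gray, 9, 75, 75)   # noise suppression, edge-preserving
    enhanced = multiscale_retinex(smoothed)           # contrast boost
    edges = cv2.Canny(enhanced, 50, 150)              # lane-candidate edge map
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]


def make_lane_tracker():
    """Constant-velocity Kalman filter tracking one lane boundary's
    x-position at the bottom of the image (state: [x, dx])."""
    kf = cv2.KalmanFilter(2, 1)
    kf.transitionMatrix = np.array([[1., 1.], [0., 1.]], np.float32)
    kf.measurementMatrix = np.array([[1., 0.]], np.float32)
    kf.processNoiseCov = np.eye(2, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.array([[1e-1]], np.float32)
    return kf


# Per-frame usage sketch: detect candidate lines, restrict them to the two
# ego-lane boundaries (restriction step not shown), then correct with the
# current measurement and predict the position for the next frame.
# tracker = make_lane_tracker()
# tracker.correct(np.array([[x_measured]], np.float32))
# x_next = tracker.predict()[0, 0]
```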
