Abstract
This paper proposes two algorithms for adaptive driving in urban environments: the first uses vision deep learning and is named the sparse spatial convolutional neural network (SSCNN); the second is a sensor integration algorithm named the sensor-weighted integration field (SWIF). These algorithms utilize three kinds of sensors, namely vision, Light Detection and Ranging (LiDAR), and GPS sensors, and decide critical motions for the autonomous vehicle, such as steering angle and vehicle speed. SSCNN, which is used for lane recognition, processes 2.7 times faster than the existing spatial CNN method. Additionally, the dataset for SSCNN was constructed by considering both normal and abnormal driving across seven classes. Thus, lanes can be recognized by extending them to handle the special characteristics of urban settings, in which lanes can be obscured or erased, or the vehicle can drive in any direction. SWIF generates a two-dimensional matrix whose elements are weighted by integrating object data from LiDAR and waypoints from GPS based on the detected lanes. These weights are integers indicating the degree of safety. Based on the field formed by SWIF, safe trajectories for the two vehicle motions, steering angle and vehicle speed, are generated by applying the cost field. Additionally, to flexibly follow the desired steering angle and vehicle speed, the Proportional-Integral-Derivative (PID) control is moderated by an integral anti-windup scheme. Consequently, because the dataset reflects the characteristics of the urban environment, SSCNN can be adopted for lane recognition on urban roads. The SWIF algorithm is also useful for flexible driving owing to its highly efficient sensor integration, with a resolution of 2 cm per pixel and a processing speed of 24 fps. Thus, a vehicle can be successfully maneuvered with minimized steering-angle changes, without lane or route departure, and without obstacle collisions, in the presence of diverse disturbances in urban road conditions.
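The abstract describes SWIF only at a high level. The following Python sketch illustrates the general idea of a weighted integration field and cost-based steering selection; only the 2 cm-per-pixel resolution is taken from the abstract, while the grid size, integer weight values, straight-line trajectory probes, and the function names build_field and select_steering are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of a sensor-weighted integration field: a 2D grid of integer
# safety weights built from LiDAR obstacle points and GPS waypoints, then
# scanned with candidate trajectories to pick a low-cost steering angle.
import numpy as np

RESOLUTION = 0.02          # metres per pixel (2 cm, as reported in the abstract)
GRID_SIZE = (500, 500)     # assumed 10 m x 10 m vehicle-centred window
OBSTACLE_WEIGHT = 100      # assumed integer weight marking unsafe cells
WAYPOINT_WEIGHT = -10      # assumed integer weight rewarding cells near the route

def build_field(lidar_points, gps_waypoints):
    """Accumulate integer safety weights into a 2D field (x forward, y lateral)."""
    field = np.zeros(GRID_SIZE, dtype=np.int32)
    for weight, points in ((OBSTACLE_WEIGHT, lidar_points),
                           (WAYPOINT_WEIGHT, gps_waypoints)):
        for x, y in points:                                   # coordinates in metres
            i = int(x / RESOLUTION)
            j = int(y / RESOLUTION) + GRID_SIZE[1] // 2       # centre the lateral axis
            if 0 <= i < GRID_SIZE[0] and 0 <= j < GRID_SIZE[1]:
                field[i, j] += weight
    return field

def select_steering(field, candidate_angles_deg, horizon_m=8.0):
    """Return the candidate angle whose straight-line probe accrues the lowest cost."""
    best_angle, best_cost = None, None
    for angle in candidate_angles_deg:
        theta = np.radians(angle)
        cost = 0
        for d in np.arange(0.0, horizon_m, RESOLUTION):       # sample along the probe
            i = int(d * np.cos(theta) / RESOLUTION)
            j = int(d * np.sin(theta) / RESOLUTION) + GRID_SIZE[1] // 2
            if 0 <= i < GRID_SIZE[0] and 0 <= j < GRID_SIZE[1]:
                cost += field[i, j]
        if best_cost is None or cost < best_cost:
            best_angle, best_cost = angle, cost
    return best_angle
```

In this sketch, lower accumulated cost corresponds to a safer trajectory, which matches the abstract's description of integer weights indicating the degree of safety; the actual SWIF field construction and trajectory generation in the paper may differ.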
Highlights
In the last three decades, studies on autonomous driving have made remarkable progress due to the efforts of many researchers [1,2]
To verify the performance of the proposed algorithm, the algorithm was tested on diverse roads at both K-City and the Daegu Gyeongbuk Institute of Science & Technology (DGIST) campus, including straight roads, curved roads, crosswalks, bumps, road markings, school zones, and bus lanes
This paper proposes two novel algorithms: the first is a vision deep learning method named the sparse spatial convolutional neural network (SSCNN); the second is a sensor integration method called the sensor-weighted integration field (SWIF)
Summary
In the last three decades, studies on autonomous driving have made remarkable progress due to the efforts of many researchers [1,2]. Driver convenience is enhanced by adaptive cruise control, which maintains a constant vehicle speed, and by highway driving assist systems, which prevent lane departure on highways [3]. Research on urban environments is still scarce compared to that on highways, which are relatively simple environments containing mostly vehicles and roads. Various dangerous situations arise in urban environments: obstacles such as construction sites often block roads or cover lanes (Figure 1a), and sometimes pedestrians appear suddenly on the road (Figure 1b).