Abstract

Studies on self-driving transport vehicles have focused on longitudinal and lateral driving strategies in automated structured road scenarios. In this study, a double parallel network (DP-Net) combining longitudinal and lateral strategy networks is constructed for self-driving transport vehicles in structured road scenarios, based on a convolutional neural network (CNN) and a long short-term memory network (LSTM). First, for feature extraction and perception, a preprocessing module is introduced to ensure the effective extraction of visual information under complex illumination. Then, a parallel CNN sub-network based on multifeature fusion is designed to enable better autonomous driving strategies. Meanwhile, a parallel LSTM sub-network is designed that uses vehicle kinematic features as physical constraints to improve the prediction accuracy for steering angle and speed. The Udacity Challenge II dataset, preprocessed to meet the proposed DP-Net's input requirements, is used as the training set. Finally, for the proposed DP-Net, the root mean square error (RMSE) is used as the loss function, the mean absolute error (MAE) is used as the metric, and Adam is used as the optimization method. Compared with competing models such as PilotNet, CgNet, and the E2E multimodal multitask network, the proposed DP-Net is more robust under complex illumination. The RMSE and MAE values for predicting the steering angle with the E2E multimodal multitask network are 0.0584 and 0.0163 rad, respectively; for the proposed DP-Net, those values are 0.0107 and 0.0054 rad, i.e., 81.7% and 66.9% lower, respectively. In addition, the proposed DP-Net also achieves higher accuracy in speed prediction. When tested on the collected SYSU Campus dataset, good predictions are also obtained. These results should provide useful guidance for deploying a DP-Net on multi-axle transport vehicles.
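To make the reported error metrics concrete, the following is a minimal sketch (pure Python, hypothetical predictions) of how RMSE, MAE, and the relative reductions quoted above are computed; the function names are illustrative, not from the paper.

```python
import math

def rmse(pred, true):
    # Root mean square error over paired predictions and ground truth
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

def mae(pred, true):
    # Mean absolute error over paired predictions and ground truth
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def reduction(baseline, ours):
    # Relative reduction quoted in the abstract: (baseline - ours) / baseline
    return (baseline - ours) / baseline

# Steering-angle errors (rad) reported for the E2E baseline vs. DP-Net
print(round(100 * reduction(0.0584, 0.0107), 1))  # 81.7 (% lower RMSE)
print(round(100 * reduction(0.0163, 0.0054), 1))  # 66.9 (% lower MAE)
```

The same reduction formula applies to any paired baseline/proposed metric values, which is how the percentage improvements in the abstract follow directly from the reported RMSE and MAE numbers.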

Highlights

  • The limited scenario of structured roads is an important market for the implementation of autonomous driving, and the driving strategy of transport vehicles is a key technology for that implementation

  • End-to-end networks directly infer decision commands from rich environmental information, such as illumination, captured by the front camera [5,6,7], so there is a need for studies on developing end-to-end autonomous driving strategies

  • We address the task of end-to-end longitudinal and lateral driving strategy by predicting vehicle speed and steering angle



Introduction

The limited scenario of structured roads is an important market for the implementation of autonomous driving, and the driving strategy of transport vehicles is a key technology for that implementation. End-to-end networks directly infer decision commands from rich environmental information, such as illumination, captured by the front camera [5,6,7], so there is a need for studies on developing end-to-end autonomous driving strategies. Deep learning networks are typically trained and validated on a public dataset [11,12,13,14]; although these models are often highly accurate there, their performance on unseen test data is unknown. The parallel LSTM network exploits previous vehicle states and the temporal consistency of steering actions in vehicle kinematics to produce better actionable longitudinal and lateral decisions (accurate wheel angles, braking, and acceleration). The remainder of this paper is organized as follows: Section 2 reviews previous studies relevant to our research; Section 3 formulates the problem; Section 4 presents the proposed DP-Net (double parallel network); Section 5 provides comprehensive empirical evaluations and comparisons; and Section 6 states our conclusions.
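The core idea of the double parallel design, fusing spatial features from a CNN branch with temporal features from an LSTM branch before regressing steering angle and speed, can be sketched as follows. This is a minimal NumPy illustration of the fusion step only; the feature dimensions, weight shapes, and function names are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (illustrative, not from the paper)
CNN_FEAT, LSTM_FEAT, HIDDEN = 64, 32, 16

def fuse_and_predict(cnn_feat, lstm_feat, W1, b1, W2, b2):
    """Concatenate spatial (CNN) and temporal (LSTM) features, then
    regress steering angle and speed with a small dense head."""
    x = np.concatenate([cnn_feat, lstm_feat])  # fused feature vector
    h = np.tanh(W1 @ x + b1)                   # shared hidden layer
    return W2 @ h + b2                         # -> [steering_angle, speed]

# Randomly initialized weights stand in for trained parameters
W1 = rng.standard_normal((HIDDEN, CNN_FEAT + LSTM_FEAT)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((2, HIDDEN)) * 0.1
b2 = np.zeros(2)

out = fuse_and_predict(rng.standard_normal(CNN_FEAT),
                       rng.standard_normal(LSTM_FEAT), W1, b1, W2, b2)
print(out.shape)  # (2,)
```

The two-element output mirrors the paper's joint longitudinal (speed) and lateral (steering angle) prediction; in the actual DP-Net the branch features would come from trained CNN and LSTM sub-networks rather than random vectors.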

Related Work
Problem Formulation
Proposed DP-Net
Spatial-Feature-Extracting Sub-Network
Temporal-Feature-Extracting Sub-Network
Longitudinal and Lateral Prediction Sub-Network
Experiment Setup
Comparison with Competing Algorithms
Validation on the SYSU Campus Dataset
Ablation Study
Findings
Conclusions

