Abstract

With the development of deep learning, end-to-end autonomous driving has attracted increasing attention. However, owing to the nature of deep learning, end-to-end autonomous driving currently faces several problems. First, because road-scene samples are imbalanced between “junction” and “non-junction” cases, the model overfits to the majority class during training and fails to adequately learn how to turn at intersections. Second, the confidence of a deep learning model is difficult to evaluate, so it is impossible to determine whether the model output is reliable and to make further decisions accordingly, which is an important reason why end-to-end autonomous driving solutions are not widely accepted. Third, deep learning models are highly sensitive to disturbances, so the predictions of consecutive frames are prone to jumps. To address these issues, this paper proposes a more robust and reliable end-to-end visual navigation scheme (RREV navigation) that predicts a vehicle’s future waypoints from front-view RGB images. First, the scheme adopts a dual-model learning strategy in which two models independently learn “junction” and “non-junction” scenes, eliminating the influence of sample imbalance. Second, exploiting the smoothness and continuity of waypoints, a model confidence quantification method called “Independent Prediction-Fitting Error” (IPFE) is proposed. Finally, IPFE is applied to weight the multi-frame outputs, which suppresses the prediction jumps of the deep learning model and keeps the output coherent and smooth. The experimental results show that the proposed RREV navigation scheme is more reliable and robust; in particular, the steering performance of the model at intersections is greatly improved.
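As a rough illustration of the IPFE idea summarized above, the following minimal Python sketch fits a low-order polynomial to one frame’s independently predicted waypoints, converts the residual fitting error into a confidence weight, and then fuses the waypoint predictions of several consecutive frames by those weights. The polynomial degree, the exponential error-to-weight mapping, and the function names are illustrative assumptions, not the paper’s exact formulation.

```python
import numpy as np


def ipfe_confidence(waypoints: np.ndarray, degree: int = 3) -> float:
    """Quantify the confidence of one frame's predicted waypoints.

    Assumption: smooth, continuous waypoints are well approximated by a
    low-order polynomial y = f(x), so a large fitting residual signals an
    unreliable (jumpy) prediction. `waypoints` is an (N, 2) array of
    (x, y) points in the vehicle frame.
    """
    x, y = waypoints[:, 0], waypoints[:, 1]
    coeffs = np.polyfit(x, y, degree)           # fit a curve to the independent prediction
    residual = y - np.polyval(coeffs, x)        # fitting error per waypoint
    fit_error = float(np.sqrt(np.mean(residual ** 2)))
    return float(np.exp(-fit_error))            # assumed mapping: small error -> weight near 1


def fuse_frames(frame_waypoints: list[np.ndarray]) -> np.ndarray:
    """Weight the predictions of consecutive frames by their IPFE confidence
    to suppress frame-to-frame jumps (assumes all predictions are already
    expressed in a common coordinate frame)."""
    weights = np.array([ipfe_confidence(w) for w in frame_waypoints])
    weights /= weights.sum()
    stacked = np.stack(frame_waypoints)         # (num_frames, N, 2)
    return np.einsum("f,fnd->nd", weights, stacked)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = np.linspace(0.0, 10.0, 8)
    smooth = np.stack([xs, 0.1 * xs ** 2], axis=1)          # well-behaved prediction
    noisy = smooth + rng.normal(0.0, 0.5, smooth.shape)     # simulated jumpy prediction
    print(ipfe_confidence(smooth), ipfe_confidence(noisy))  # smooth frame gets the larger weight
    print(fuse_frames([smooth, noisy]).shape)               # (8, 2)
```

The design intent is that frames whose waypoints deviate strongly from a smooth curve contribute less to the fused output, so a single disturbed prediction cannot cause an abrupt change in the planned trajectory.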
