Abstract

Accurate and stable recognition of a driver's lateral intention is a crucial prerequisite for the proper functioning of advanced driver-assistance systems (ADAS). Existing studies usually rely on auxiliary sensor signals, such as cameras and eye trackers; however, this reliance makes these methods difficult to apply to vehicles that lack such auxiliary sensors. Furthermore, existing studies have not fully leveraged the inherent temporal dependence of lateral intentions, making erroneous recognition interruptions difficult to avoid. Thus, this study proposes a deep-learning-based method that achieves accurate and stable recognition of lateral intention using only onboard sensor signals. First, a real vehicle is used to collect a large amount of driving data, thereby guaranteeing the robustness and practicality of the recognition model. Subsequently, vehicle trajectories are extracted, and a trajectory clustering method is used to label the lateral intentions in the driving data; these intention labels and a feature selection algorithm are then used to select the most representative recognition features. Next, a lateral driving intention recognition model is constructed using double convolutional neural networks with a long short-term memory layer (CNN-LSTM). This network architecture can fully exploit the temporal dependence of lateral intentions. Finally, the recognition performance of the designed double CNN-LSTM networks is validated using the collected driving data and real-world vehicle tests. The results indicate that the double CNN-LSTM networks achieve stable recognition of lateral intention in real time, with an accuracy of 98.64% in the experiments.
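The abstract does not specify the exact layer configuration, so the following is a minimal NumPy sketch of the general idea only: two parallel 1-D convolutional branches (the "double CNN") extract local features from a window of onboard sensor signals, their outputs are concatenated and fed through an LSTM layer, and the final hidden state is classified into lateral-intention classes. All shapes, filter sizes, signal names, and the three-class label set (left lane change / lane keeping / right lane change) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Valid 1-D convolution over time with ReLU.
    x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)."""
    K, _, C_out = w.shape
    T_out = x.shape[0] - K + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_hidden(x, Wx, Wh, b):
    """Run an LSTM over x (T, D) and return the last hidden state.
    Gate order in the stacked weights: input, forget, cell, output."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for xt in x:
        z = xt @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

# A window of T = 50 steps of C = 6 assumed onboard signals
# (e.g. steering angle, steering rate, yaw rate, speed, ...).
T, C = 50, 6
x = rng.standard_normal((T, C))

# Two parallel conv branches with different (illustrative) kernel sizes.
w1, b1 = 0.1 * rng.standard_normal((3, C, 8)), np.zeros(8)
w2, b2 = 0.1 * rng.standard_normal((5, C, 8)), np.zeros(8)
f1 = conv1d_relu(x, w1, b1)
f2 = conv1d_relu(x, w2, b2)
L = min(len(f1), len(f2))           # align branch lengths
feats = np.concatenate([f1[:L], f2[:L]], axis=1)   # (L, 16)

# LSTM over the fused feature sequence, then a softmax classifier
# over 3 assumed classes: left change, lane keeping, right change.
H = 12
Wx = 0.1 * rng.standard_normal((16, 4 * H))
Wh = 0.1 * rng.standard_normal((H, 4 * H))
bl = np.zeros(4 * H)
h = lstm_last_hidden(feats, Wx, Wh, bl)

Wo, bo = 0.1 * rng.standard_normal((H, 3)), np.zeros(3)
logits = h @ Wo + bo
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape, float(probs.sum()))
```

With random (untrained) weights the output is just a valid probability vector over the three classes; in practice the weights would be learned end-to-end on the labeled driving data described above.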
