Abstract
Objective evaluation of DCT vehicle drivability requires accurate identification of the driver’s intention and the vehicle state, as well as selection of targeted evaluation indicators. Existing identification methods usually cannot divide the driver’s intentions in detail or make full use of the characteristics of time-series signals. Moreover, external kinematic sensors are used more commonly than powertrain sensors, which limits recognition performance. This paper proposes a new method for identifying a DCT vehicle driver’s starting intention based on an LSTM neural network and multi-sensor data fusion. The driver’s starting intentions are subdivided and defined through human–vehicle interaction analysis and K-means clustering. The model’s input consists of 11 variables: vehicle motion parameters collected by external sensors and powertrain parameters collected by onboard sensors. The proposed method first establishes a recognition window, which is used to extract starting-process samples from the DCT vehicle driving data. Second, the 11 variables of each sample are treated as one multi-dimensional time-series signal and preprocessed with wavelet denoising. Finally, the LSTM network classifies the samples. The results show that the proposed algorithm achieves a peak recognition accuracy of 94.27%, approximately 5 percentage points higher than conventional methods such as fully connected neural networks and support vector machines. Furthermore, the model with all 11 input variables outperforms models with fewer inputs. These results demonstrate the effectiveness and superiority of the identification model.
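The preprocessing and recognition steps described in the abstract (wavelet denoising of each channel of the 11-dimensional window, followed by an LSTM pass) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the Haar wavelet, the soft-threshold value, and the single-cell LSTM in NumPy are all assumptions standing in for the unspecified wavelet family, thresholding rule, and network architecture.

```python
import numpy as np

def haar_denoise(signal, threshold=0.1):
    """One-level Haar wavelet soft-threshold denoising of one channel.
    (Wavelet family and threshold are illustrative assumptions.)"""
    x = np.asarray(signal, dtype=float)
    n = len(x) - len(x) % 2                    # trim to even length
    a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)      # approximation coefficients
    d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)      # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)             # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step on a single 11-dimensional input frame.
    W: (4H, 11), U: (4H, H), b: (4H,) stack the i, f, o, g gate weights."""
    z = W @ x + U @ h + b                      # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                          # update cell state
    h = o * np.tanh(c)                         # new hidden state
    return h, c
```

In use, each of the 11 channels in a recognition-window sample would be passed through `haar_denoise`, and the denoised frames fed sequentially through `lstm_step`; the final hidden state would then feed a classification layer over the defined starting intentions.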
Published in: Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering