Abstract

Lithium battery-based electric vehicles (EVs) are gaining global popularity as an alternative means of combating the adverse environmental impacts of fossil fuels. State of charge (SOC) and state of health (SOH) are vital parameters that assess a battery’s remaining charge and overall health, and their precise monitoring is critical to the effective operation of the battery management system (BMS) in a lithium battery. This article presents an experimental study of artificial intelligence (AI)-based, data-driven prediction of the lithium battery parameters SOC and SOH using deep learning algorithms, namely Long Short-Term Memory (LSTM) and bidirectional LSTM (BiLSTM). We evaluated various gradient descent optimization algorithms with adaptive and constant learning rates, keeping the other parameters at their defaults; the optimal algorithm is selected by comparing accuracy in terms of mean absolute error (MAE) and root mean squared error (RMSE). We developed LSTM and BiLSTM models with four hidden layers of 128 LSTM or BiLSTM units each, trained on the Panasonic 18650PF Li-ion dataset released by NASA, to predict SOC and SOH. Our experimental results indicate that the choice of gradient descent algorithm affects the model’s accuracy. The article also addresses the problem of overfitting in the LSTM/BiLSTM models. BiLSTM is the best choice for improving model performance, but it increases the computational cost. We trained the model with various combinations of parameters and tabulated the accuracies in terms of MAE and RMSE. The optimal LSTM model predicts the SOC of the lithium battery with an MAE smaller than 0.0179% and an RMSE of 0.0227% in the training phase, and an MAE smaller than 0.695% and an RMSE of 0.947% in the testing phase, on the 25°C dataset. The BiLSTM model predicts the SOC of the 18650PF lithium battery cell with an MAE smaller than 0.012% for training and 0.016% for testing; similarly, using the Adam optimization algorithm, the RMSE is 0.326% for training and 0.454% for testing on the 25°C dataset. BiLSTM with an adaptive learning rate can further improve performance. As an alternative to high-power-consuming processors such as the central processing unit (CPU) and graphics processing unit (GPU), we also implemented the model on a field programmable gate array (FPGA), the PYNQ Z2 hardware device, on which the LSTM model performs better.
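The study above ranks optimizers and models by MAE and RMSE. As a minimal sketch of how those two metrics are computed (the SOC values below are hypothetical placeholders for illustration, not figures from the paper's dataset):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error: average of |true - predicted|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: sqrt of the average squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical SOC values (fraction of full charge), for illustration only.
soc_true = [0.90, 0.85, 0.80, 0.75]
soc_pred = [0.89, 0.86, 0.78, 0.76]

print(mae(soc_true, soc_pred))   # ≈ 0.0125
print(rmse(soc_true, soc_pred))  # ≈ 0.0132
```

Because RMSE squares each error before averaging, it penalizes occasional large SOC mispredictions more heavily than MAE, which is why the paper reports both.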
