Abstract

Accurate prediction of the remaining useful life (RUL) of lithium-ion batteries (LIBs) is pivotal for enhancing their operational efficiency and safety in diverse applications. Beyond operational advantages, precise RUL predictions can also expedite advancements in cell design and fast-charging methodologies, thereby reducing cycle testing durations. Although artificial neural networks (ANNs) have shown promise in this domain, determining the best-fit architecture across varied datasets and optimization approaches remains challenging. This study introduces a machine learning framework for systematically evaluating multiple ANN architectures. Using only 30% of a training dataset derived from 124 LIBs subjected to various charging regimes, an extensive evaluation is conducted across seven ANN architectures. Within this framework, the hyperparameters of each architecture are optimized, a process spanning 145 days on an NVIDIA GeForce RTX 4090 GPU. By optimizing each model to its best configuration, a fair and standardized basis for comparing their RUL predictions is established. The research also examines the impact of different cycling windows on predictive accuracy. The use of a stratified partitioning technique underscores the significance of maintaining consistent dataset representation across subsets. Notably, using only features derived from individual charge–discharge cycles, our top-performing model, based on data from just 40 cycles, achieves a mean absolute percentage error of 10.7%.
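
The following minimal Python sketch is not the authors' implementation; the cell data, feature shapes, split proportion, and baseline predictor are hypothetical stand-ins. It only illustrates two ingredients named in the abstract: a stratified partition that keeps the RUL distribution consistent across data subsets, and the mean absolute percentage error (MAPE) metric used to score predictions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one RUL label (in cycles) per cell.
# The study uses 124 LIB cells cycled under varied charging regimes.
n_cells = 124
rul = rng.integers(150, 2300, size=n_cells)       # remaining useful life per cell
features = rng.normal(size=(n_cells, 8))          # per-cell features from charge-discharge cycles

# Stratified partitioning: bin RUL into quantile-based strata so that the
# resulting subsets preserve a consistent mix of short- and long-lived cells
# (the "consistent dataset representation" highlighted in the abstract).
strata = np.digitize(rul, np.quantile(rul, [0.25, 0.5, 0.75]))

# Mirror the abstract's "only 30% of the training dataset" as train_size=0.3
# (an illustrative choice, not the paper's exact split procedure).
X_train, X_test, y_train, y_test = train_test_split(
    features, rul, train_size=0.3, stratify=strata, random_state=0
)

def mape(y_true, y_pred):
    """Mean absolute percentage error, the metric reported in the abstract."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Placeholder "model": predict the training-set mean RUL for every test cell.
baseline_pred = np.full_like(y_test, y_train.mean(), dtype=float)
print(f"Baseline MAPE: {mape(y_test, baseline_pred):.1f}%")
```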
