Multiview state-of-health estimation for lithium-ion batteries using time–frequency image fusion and attention-based deep learning

Abstract

Lithium-ion batteries are high-performance energy storage devices that have been widely used in a variety of applications. Accurate early-stage prediction of their remaining useful life is essential for preventing failures and mitigating safety risks. This study proposes a novel multiview approach for estimating the State-of-Health (SOH) of lithium-ion batteries by integrating time-domain and time–frequency features. Firstly, time-domain signals are transformed into time–frequency images using a wavelet transform. Three representative features are then selected and converted into grayscale images, which are combined into three-channel color images as inputs for a convolutional neural network (CNN) to extract spatial features. These features are subsequently passed into a long short-term memory (LSTM) network to capture spatial dependencies. In parallel, raw temporal features are processed through a two-stage attention mechanism to explore both temporal and spatial correlations, followed by another LSTM to model temporal dependencies. The outputs from the two branches are fused using weighted integration and passed through a fully connected layer to generate the final SOH estimate. Comparative experiments with four baseline models demonstrate that the proposed time–frequency fusion architecture significantly enhances prediction accuracy, and that each component makes a meaningful contribution to the overall performance.
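As a rough, non-authoritative sketch of the image-construction and fusion steps the abstract describes (the image height, the fusion weight `w`, and all function names are assumptions, not details from the paper):

```python
import numpy as np

def to_grayscale_image(feature, height=32):
    """Min-max normalize a 1-D feature sequence to [0, 255] and tile it
    vertically so it forms a 2-D grayscale image of shape (height, len(feature))."""
    f = np.asarray(feature, dtype=float)
    span = f.max() - f.min()
    norm = (f - f.min()) / span if span > 0 else np.zeros_like(f)
    gray = np.round(norm * 255).astype(np.uint8)
    return np.tile(gray, (height, 1))

def fuse_three_features(f1, f2, f3, height=32):
    """Stack three grayscale feature images into one (H, W, 3) color image,
    the three-channel input format the abstract describes for the CNN branch."""
    channels = [to_grayscale_image(f, height) for f in (f1, f2, f3)]
    return np.stack(channels, axis=-1)

def weighted_fusion(branch_a, branch_b, w=0.5):
    """Weighted integration of the two branch outputs before the final
    fully connected layer (w is a hypothetical fusion weight)."""
    return w * np.asarray(branch_a) + (1.0 - w) * np.asarray(branch_b)
```

For example, three per-cycle feature sequences of length 100 would yield a 32x100x3 color image per battery cycle, which a standard CNN can consume directly.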

Similar Papers
  • Research Article
  • Cited by 2
  • 10.3390/rs16234550
An Automatic Modulation Recognition Algorithm Based on Time–Frequency Features and Deep Learning with Fading Channels
  • Dec 4, 2024
  • Remote Sensing
  • Xiaoya Zuo + 4 more

Automatic modulation recognition (AMR) stands as a crucial core technology within the realm of signal processing and perception, playing a significant part in harsh electromagnetic environments. The time–frequency image (TFI) of communication signals can manifest modulation characteristics and serve as a foundation for signal modulation recognition and classification. However, under the influence of the electromagnetic environment, communication signals are exposed to varying degrees of interference, which poses a challenge to the recognition of modulation types. Taking into account the effects of interference and channel fading, this paper introduces a communication signal modulation recognition algorithm based on deep learning (DL) and time–frequency analysis. This approach employs short-time Fourier transform (STFT) to generate time–frequency diagrams from time-domain signals. Subsequently, it binarizes the image and feeds it as input data to the neural network. Our research presents a composite deep convolutional neural network (CNN) architecture known as the composite dense-residual neural network (CDRNN). This architecture focuses on enhancing the feature extraction and identification, aiming to achieve accurate recognition of modulation types in harsh electromagnetic environments. Finally, simulation results validate that the proposed deep learning algorithm holds remarkable advantages in boosting the accuracy of modulation type recognition with better adaptability. The algorithm shows better performance even in harsh electromagnetic environments. When the signal-to-noise ratio (SNR) is 18 dB, the recognition accuracy can reach 92.1%.
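The STFT-plus-binarization preprocessing described above can be sketched as follows; the window length and the mean-magnitude threshold are assumptions, since the abstract does not specify the binarization rule:

```python
import numpy as np
from scipy.signal import stft

def binarized_tf_image(x, fs=1024, nperseg=64, threshold=None):
    """Compute an STFT magnitude image of a 1-D signal and binarize it.
    Thresholding at the mean magnitude is an illustrative choice only."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    if threshold is None:
        threshold = mag.mean()
    return (mag > threshold).astype(np.uint8)
```

The resulting 0/1 image can then be fed to a CNN classifier in place of the raw time-domain samples.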

  • Book Chapter
  • Cited by 1
  • 10.1007/978-981-16-0708-0_3
Gujarati Task Oriented Dialogue Slot Tagging Using Deep Neural Network Models
  • Jan 1, 2021
  • Rachana Parikh + 1 more

In this paper, the primary focus is slot tagging of Gujarati dialogue, which enables Gujarati-language communication between human and machine, allowing machines to perform a given task and provide the desired output. The accuracy of tagging depends entirely on the bifurcation of slots and word embedding. Proper slot tagging is also very challenging for researchers, as dialogue and speech differ from person to person, which makes the slot-tagging methodology more complex. Various deep learning models are available for slot tagging; this paper mainly focuses on Long Short-Term Memory (LSTM), Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM), Long Short-Term Memory - Conditional Random Field (LSTM-CRF), Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network - Bidirectional Long Short-Term Memory (CNN-BiLSTM), and Bidirectional Long Short-Term Memory - Conditional Random Field (BiLSTM-CRF). Comparing these models, BiLSTM models perform better than LSTM models by roughly 2% in F1-measure, as they contain an additional layer that traverses the word string from backward to forward. Among the BiLSTM models, BiLSTM-CRF outperformed the other two: its F1-measure is better than CNN-BiLSTM by 1.2% and BiLSTM by 2.4%. Keywords: Spoken Language Understanding (SLU), Long Short-Term Memory (LSTM), slot tagging, Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network - Bidirectional Long Short-Term Memory (CNN-BiLSTM), Bidirectional Long Short-Term Memory - Conditional Random Field (BiLSTM-CRF)

  • Research Article
  • Cited by 21
  • 10.1016/j.jhydrol.2023.129732
Daily suspended sediment concentration forecast in the upper reach of Yellow River using a comprehensive integrated deep learning model
  • Jun 8, 2023
  • Journal of Hydrology
  • Jinsheng Fan + 2 more

  • Research Article
  • Cited by 14
  • 10.1109/jsen.2022.3159475
An Interpretable Convolutional Neural Network for P300 Detection: Analysis of Time Frequency Features for Limited Data
  • May 1, 2022
  • IEEE Sensors Journal
  • Mahnoosh Tajmirriahi + 3 more

In this study, a new deep learning-based methodology is developed for P300 detection in brain-computer interface (BCI) systems based on time-frequency (TF) features of EEG signals. TF distributions can transform EEG signals into TF images by representing the time and frequency properties of the signal simultaneously. However, they do not display the energy distribution of signals at different scales identically, and their advantages may be better exploited by using them together. Here, four TF images of each single-trial EEG signal are computed, and the concatenation of the TF (cTF) images of each signal is used as training data for a simple and lightweight deep learning-based classifier. The applied TF distributions are the spectrogram, Wigner-Ville distribution, Morlet scalogram, and Bertrand distribution. The performance of the method is evaluated on limited data acquired from the normal and amyotrophic lateral sclerosis (ALS) datasets, and accuracies of 96.56% and 96.84% are achieved, respectively, which is superior to the other compared algorithms. Moreover, results of cross-subject classification indicate the promising ability of the method to eliminate calibration in BCI systems. Furthermore, heat maps of the P300 and non-P300 classes are produced to explain the important regions of the cTF image for the classifier decision and to investigate which TF representation helps classification most. Results revealed the efficiency of cTF images for accurate P300 detection with simple-structure classifiers, with the advantage of requiring fewer data and less memory. This method can be employed in P300 speller BCI systems to improve character recognition performance.

  • Research Article
  • 10.57197/jdr-2025-0590
Enhancing Seizure Detection Accuracy in Wearable EEG Devices Using Deep Learning Algorithms
  • Jan 1, 2025
  • Journal of Disability Research
  • Mohammed Alarfaj + 4 more

Deep learning (DL) applications are improving the seizure detection accuracy and reliability of wearable electroencephalography (EEG) devices in the field of epilepsy diagnosis. In this study, we sought to increase the accuracy of seizure detection using advanced DL algorithms on the Children’s Hospital Boston - Massachusetts Institute of Technology (CHB-MIT) EEG database. First, a fully convolutional network (FCN) was trained with early stopping to avoid overfitting and evaluated in terms of various metrics, including accuracy, precision, recall, F1-score, and receiver operating characteristic (ROC)-area under the curve (AUC). In addition, two-dimensional (2D) convolutional neural networks (CNNs) and long short-term memory (LSTM) models were applied to the database, and their performance was thoroughly measured using the same metrics, graphs, and confusion matrices. Using LSTM variants, such as standard LSTM, bidirectional LSTM, stacked LSTM, and LSTM attention mechanisms, hybrid convolutional LSTM (ConvLSTM) models were trained and compared based on training and validation accuracy and loss, as well as precision-recall curves. Apart from DL approaches, EEG signal analysis using time-frequency techniques, such as the wavelet transform and short-time Fourier transform, was also investigated; these methods supported the analysis of the time-frequency features of EEG signals in combination with the DL models. This study demonstrates that the performance of wearable EEG devices can be augmented using a combination of DL and seizure signal processing techniques. The FCN achieved an accuracy of 92%, with a seizure recall of 33%, an F1-score of 0.03, and strong ROC-AUC results. The 2D CNN achieved 96% accuracy, a seizure recall of 70%, an F1-score of 0.12, and an ROC-AUC score of 78%.
The baseline LSTM struggled, reaching 53% accuracy with a seizure recall of 18%. In contrast, the LSTM model incorporating synthetic minority oversampling technique (SMOTE) balancing was the best performer, reaching 89% accuracy, 91% precision, 86% recall, an F1-score of 0.89, and strong ROC curve performance. These results provide evidence that applying data-balancing techniques in combination with certain DL network architectures significantly improves seizure detection with body-worn wearable EEG devices. We believe that real-time monitoring and high-performance systems are feasible using optimized DL frameworks, and the comparison of models clarifies how DL architectures can be optimized for modern real-time epilepsy diagnosis. The source code used to carry out the experiments is publicly available at CHB-MIT EEG Dataset Python Scripts (https://www.kaggle.com/code/adnankust/adnaneeg1).
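The SMOTE balancing step credited with the LSTM's improvement can be illustrated with a minimal, self-contained sketch (this is the core interpolation idea only, not imblearn's implementation, and all parameter choices here are illustrative):

```python
import numpy as np

def simple_smote(X, y, minority=1, k=3, seed=0):
    """Minimal SMOTE sketch: synthesize minority samples by interpolating
    between each sampled minority point and one of its k nearest minority
    neighbors until the two classes are balanced."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, float), np.asarray(y)
    minority_X = X[y == minority]
    n_needed = (y != minority).sum() - len(minority_X)
    synth = []
    for _ in range(n_needed):
        i = rng.integers(len(minority_X))
        d = np.linalg.norm(minority_X - minority_X[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the nearest (the point itself)
        j = rng.choice(neighbors)
        lam = rng.random()                   # interpolation factor in [0, 1)
        synth.append(minority_X[i] + lam * (minority_X[j] - minority_X[i]))
    X_new = np.vstack([X, synth]) if synth else X
    y_new = np.concatenate([y, np.full(len(synth), minority)])
    return X_new, y_new
```

Balancing the seizure class this way before training gives the network more minority examples to learn from without duplicating points exactly.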

  • Research Article
  • 10.1149/1945-7111/adbc24
Single Frequency Feature Point Derived from DRT for SOH Estimation of Lithium Ion Battery
  • Mar 1, 2025
  • Journal of The Electrochemical Society
  • Daiyan Jiang + 7 more

Highly efficient data feature extraction is crucial for accurate, low-cost lithium-ion battery state-of-health (SOH) evaluation. In this work, an evaluation model constructed with a long short-term memory (LSTM) neural network processes single-frequency impedance data as the feature to predict the current health state of the battery. The feature is extracted from the electrochemical impedance spectroscopy at the frequency (4.36 Hz) corresponding to the highest peak change in the distribution-of-relaxation-times diagram during cycling. The real and imaginary parts of this single-frequency feature point are taken as the input set, and the corresponding SOH as the output set. A battery SOH model based on the LSTM is constructed, and the experimental results show that it can accurately estimate the SOH of the lithium-ion battery with a low root mean square error of 3.36% and a mean absolute percentage error of 2.68%, indicating that the model achieves reduced computational load, high accuracy, and good practicability.
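The two error metrics the abstract reports (RMSE of 3.36% and MAPE of 2.68%) can be computed as follows; the toy SOH values in the usage note are hypothetical:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between measured and estimated SOH."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error between measured and estimated SOH."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)
```

For a cycle's input, the model would take the pair [Re(Z), Im(Z)] at 4.36 Hz and output one SOH value, which these metrics then compare against the measured SOH.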

  • Research Article
  • Cited by 14
  • 10.3390/en15186745
A Hybrid Method for State-of-Charge Estimation for Lithium-Ion Batteries Using a Long Short-Term Memory Network Combined with Attention and a Kalman Filter
  • Sep 15, 2022
  • Energies
  • Xinghao Zhang + 5 more

A battery management system (BMS) is an important link between the on-board power battery and the electric vehicle: it collects, processes, and stores important information in real time during the operation of a battery pack. Because lithium-ion batteries (LIBs) are widely used in electric vehicles, correct estimation of their state of charge (SOC) is of great importance in the BMS. The SOC reflects the remaining capacity of the battery, which is directly related to power-output efficiency and energy management. In this paper, a new long short-term memory network with an attention mechanism, combined with a Kalman filter, is proposed to estimate the SOC of the Li-ion battery in the BMS. Several different dynamic driving plans are used for training and testing under different temperatures and initial errors, and the results show that the method is highly reliable for estimating the SOC of the Li-ion battery. The average root mean square error (RMSE) reaches 0.01492 for the US06 condition, 0.01205 for the federal urban driving schedule (FUDS) condition, and 0.00806 for the dynamic stress test (DST) condition. The proposed method is demonstrated to be more reliable and robust, in terms of SOC estimation accuracy, than the traditional long short-term memory (LSTM) neural network, LSTM combined with an attention mechanism, or LSTM combined with Kalman filtering.
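The Kalman filtering stage that refines a network's SOC output can be illustrated with a minimal scalar filter; the random-walk state model and the noise variances `q` and `r` are assumptions for illustration, not values from the paper:

```python
import numpy as np

def kalman_smooth_soc(measurements, q=1e-5, r=1e-2, x0=None, p0=1.0):
    """Minimal scalar Kalman filter smoothing a noisy SOC sequence.
    State model: SOC is a slow random walk (process variance q);
    each input value is treated as a noisy measurement (variance r)."""
    z = np.asarray(measurements, float)
    x = z[0] if x0 is None else x0     # initial state estimate
    p = p0                             # initial estimate variance
    out = []
    for zi in z:
        p = p + q                      # predict step
        k = p / (p + r)                # Kalman gain
        x = x + k * (zi - x)           # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)
```

Feeding the LSTM-attention network's raw per-step SOC estimates through such a filter suppresses high-frequency estimation noise while tracking slow SOC drift.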

  • Research Article
  • Cited by 9
  • 10.1016/j.epsr.2022.109065
Online leakage current classification using convolutional neural network long short-term memory for high voltage insulators on web-based service
  • Mar 1, 2023
  • Electric Power Systems Research
  • Phuong Nguyen Thanh + 1 more

  • Book Chapter
  • Cited by 5
  • 10.1007/978-3-030-51965-0_15
An LSTM-Based Encoder-Decoder Model for State-of-Charge Estimation of Lithium-Ion Batteries
  • Jan 1, 2020
  • Shengmin Cui + 4 more

A lithium-ion battery is rechargeable and is widely used in portable devices and electric vehicles (EVs). State-of-Charge (SOC) estimation is a vital function in a battery management system (BMS), since high-accuracy SOC estimation ensures the reliability and safety of electronic products using lithium-ion batteries. Unlike traditional SOC estimation methods, deep learning-based methods are data-driven and do not rely heavily on battery quality. In this paper, an Encoder-Decoder model, which compresses sequential inputs into a vector used for decoding sequential outputs, is proposed to estimate the SOC based on measured voltage and current. Compared with conventional recurrent networks such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), the proposed model yields better estimation accuracy. The models are validated on a lithium-ion battery data set with dynamic stress testing (DST), Federal Urban Driving Schedule (FUDS), and US06 highway schedule profiles.

  • Research Article
  • 10.55041/ijsrem43585
Battery Lifespan Prediction Using Machine Learning and NASA Aging Dataset
  • Apr 3, 2025
  • International Journal of Scientific Research in Engineering and Management
  • Dr Jogi John + 5 more

NASA Battery RLU 16.5 plays a crucial role in powering space missions, ensuring reliability and longevity under extreme conditions. Accurate estimation and control of its State of Health (SOH) are essential for maintaining its performance, particularly in the harsh and unpredictable environment of space. This review explores the latest advancements in SOH estimation for lithium-ion batteries, focusing on methods applicable to NASA Battery RLU 16.5. Key methods discussed include machine learning models such as Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNNs), and hybrid deep learning models, which have shown promising results in accurately predicting SOH and Remaining Useful Life (RUL). Additionally, optimization techniques such as ant lion optimization combined with support vector regression, as well as incremental capacity analysis, offer high precision in SOH predictions. Temperature-based SOH estimation and the integration of electrochemical models also emerge as essential methods for improving accuracy. Despite significant progress in SOH estimation, challenges such as the unpredictability of space conditions remain, necessitating further research into hybrid modeling approaches. This paper provides a comprehensive overview of state-of-the-art SOH estimation techniques and highlights the challenges and future directions in managing NASA's lithium-ion batteries for long-term missions. Keywords: Lithium-ion battery (LIB), Remaining Useful Life (RUL), machine learning algorithms, neural networks, LSTM (Long Short-Term Memory), CNN (Convolutional Neural Network), battery degradation modeling, hybrid neural networks, optimization techniques, space mission battery management

  • Research Article
  • Cited by 270
  • 10.1016/j.apenergy.2020.114789
A combined method for state-of-charge estimation for lithium-ion batteries using a long short-term memory network and an adaptive cubature Kalman filter
  • Mar 9, 2020
  • Applied Energy
  • Yong Tian + 4 more

  • Conference Article
  • Cited by 8
  • 10.1109/ccdc49329.2020.9164547
Online State-of-Health Estimation for the Lithium-Ion Battery Based on An LSTM Neural Network with Attention Mechanism
  • Aug 1, 2020
  • Jiachang Zhang + 2 more

Online state-of-health (SOH) estimation of lithium-ion batteries (LIBs) is a critical problem in battery management systems. Because capacity measurement is limited by the environment, it is difficult to estimate SOH quickly in practical applications. This paper proposes an online SOH estimation method for LIBs that combines a long short-term memory (LSTM) neural network with an attention mechanism (AM). The LSTM neural network is used to learn the mapping relationship between voltage, current, temperature, and capacity, capturing the long-term dependence of the capacity-fade process. Then, the AM is used to select relevant hidden states of the LSTM across all time steps. The effectiveness of the proposed method is evaluated using the charge-discharge cycle data set of LIBs from the NASA Ames Prognostics Center of Excellence.
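The attention step, selecting relevant LSTM hidden states across all time steps, can be sketched as dot-product attention pooling; the query vector `w` stands in for a learned parameter and is purely illustrative:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Score each time step's hidden state against a query vector w,
    normalize the scores with softmax, and return the attention weights
    together with the weighted-sum context vector."""
    H = np.asarray(hidden_states, float)     # shape (T, d): T steps, d units
    scores = H @ np.asarray(w, float)        # shape (T,)
    alpha = softmax(scores)                  # weights summing to 1
    context = alpha @ H                      # shape (d,)
    return alpha, context
```

The context vector, a weighted mixture of all hidden states, then feeds the SOH regression head instead of only the last LSTM output.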

  • Conference Article
  • Cited by 1
  • 10.1109/phm-nanjing52125.2021.9612807
A Fusion Method to Estimate the State-of-Health of Lithium-ion Batteries
  • Oct 15, 2021
  • Yajun Zhang + 4 more

Accurate state-of-health (SOH) estimation for lithium-ion batteries (LIBs) is vital for battery management systems (BMS). This paper puts forward a fusion method to estimate battery SOH, which incorporates incremental capacity analysis (ICA) with a long short-term memory (LSTM) network. First, a revised Lorentzian function-based voltage-capacity (VC) model is adopted to capture the IC curve. By leveraging only data logged during the constant current (CC) charging stage, the battery degradation information contained in the IC curve is concretized as the parameters of the VC model by simple curve fitting. These parameters, which have specific physical meanings, are treated as features that characterize battery health status. Correlation analysis is then performed on these features, and features of interest (FOIs) are selected as inputs to the LSTM. The LSTM model can learn the long-term dependencies of battery degradation and thus improve the robustness of the prediction model against noise. Finally, four battery aging datasets with different chemistries are employed for model validation, and the results reveal that the proposed method achieves accurate SOH estimation, with the maximum mean absolute error limited to within 2%.
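The curve-fitting step, extracting Lorentzian peak parameters from an IC curve, can be sketched with a single (unrevised) Lorentzian on synthetic data; the paper's revised voltage-capacity model and its actual parameter values are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, a, v0, gamma):
    """Single Lorentzian IC peak: height a, center voltage v0, half-width gamma."""
    return a * gamma**2 / ((v - v0)**2 + gamma**2)

# Hypothetical synthetic IC curve: one peak at 3.7 V plus small noise.
v = np.linspace(3.2, 4.2, 200)
true_params = (5.0, 3.7, 0.05)
ic = lorentzian(v, *true_params) \
    + 0.01 * np.random.default_rng(1).standard_normal(v.size)

# Fit recovers the physically meaningful peak parameters (the "features").
params, _ = curve_fit(lorentzian, v, ic, p0=(4.0, 3.6, 0.1))
```

The fitted height, center, and width would then serve as candidate health features for correlation analysis before entering the LSTM.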

  • Research Article
  • Cited by 1
  • 10.1149/ma2022-02281080mtgabs
State of Health Estimation and Remaining Useful Life Prediction Using Hybrid Kmeans CNN-Lstm Network
  • Oct 9, 2022
  • ECS Meeting Abstracts
  • Yassine Toughzaoui + 4 more

Lithium-ion batteries have become among the most widely used storage systems in fields such as electric vehicles, lighting, and robotics. These storage systems are known for their fast charging and high energy density (they can store 3 to 4 times more energy per unit mass than other battery technologies). Lithium-ion batteries are being used in more and more areas; however, the major drawbacks of this technology are its high price and the degradation of its performance, which make good maintenance necessary to optimize battery operation. This maintenance consists of real-time monitoring of the state of health (SOH) and prediction of the remaining useful life (RUL). For this purpose, there are two main categories of methods: model-based methods (such as sliding mode observers and the Kalman filter) and data-based methods (such as fuzzy logic, genetic programming, and artificial intelligence algorithms). Many authors have based their studies on artificial intelligence models, and more specifically artificial neural networks, which are known for their high accuracy and their ability to solve complex problems that model-based methods struggle to handle. Among the most used neural networks are recurrent neural networks (RNNs), and more specifically Long Short-Term Memory (LSTM) networks. LSTM networks have an internal memory allowing them to process time series flexibly and with high accuracy, and they have shown good performance in SOH estimation and RUL prediction in several studies. Convolutional neural networks (CNNs) were originally dedicated to image processing, but several recent studies have used them for processing time series and have demonstrated good performance in this field.
Some studies propose combining different models for SOH estimation and RUL prediction; among these, the CNN-LSTM combination improves model accuracy and decreases computation time. That is the case in our study, where the CNN was used for data filtering and the LSTM for processing the filtered data. In addition, we added a K-means clustering step to classify the data, making its processing by the CNN-LSTM hybrid model easier and faster. The major drawback of neural network models is that they need a lot of data for training. In our study, we used the NASA open-source dataset, extracted from an experiment consisting of charging and discharging LCO 18650 lithium-ion batteries with currents randomly chosen between -4.5 A and 4.5 A to simulate the operation of a lithium battery under electric-vehicle driving conditions. The dataset contains data from 4 batteries; we used the data of 3 of them for training the model and validated the model on the data of the 4th. A figure in the original abstract shows the SOH estimation results for the 4th battery obtained by our model. To evaluate the performance of the model, we used three metrics: Root Mean Square Error (RMSE), Mean Square Error (MSE), and Mean Absolute Error (MAE). The evaluation demonstrated improvements in both accuracy and computation time compared with each of the networks used separately. The following table summarizes the obtained results:

  Model         MSE      RMSE   MAE
  Hybrid model  0.0002   0.01   0.008

For RUL prediction, we developed a model that predicts the evolution of the capacity value, trained on the same dataset. The model trains on 90% of the data from each battery, and its role is to predict the remaining capacity values until the end of life of the battery is reached.
(Figure 1 in the original abstract shows the model's prediction results for the 4th battery.)
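The K-means clustering stage placed before the CNN-LSTM can be illustrated with a minimal implementation (Lloyd's algorithm on toy data; nothing here reflects the paper's actual configuration):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: initialize centroids from random data points, then
    alternate between assigning each point to its nearest centroid and
    recomputing centroids as cluster means. Returns labels and centroids."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distances from every point to every centroid, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

Partitioning the battery data this way first lets each cluster be handled more uniformly by the downstream CNN-LSTM model.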

  • Research Article
  • Cited by 13
  • 10.2147/ndt.s404528
CNN for a Regression Machine Learning Algorithm for Predicting Cognitive Impairment Using qEEG
  • Apr 12, 2023
  • Neuropsychiatric Disease and Treatment
  • Chanda Simfukwe + 4 more

Purpose: Electroencephalogram (EEG) signals give detailed information on the electrical brain activities occurring in the cerebral cortex. They are used to study brain-related disorders such as mild cognitive impairment (MCI) and Alzheimer’s disease (AD). Brain signals obtained using an EEG machine can be a neurophysiological biomarker for early diagnosis of dementia through quantitative EEG (qEEG) analysis. This paper proposes a machine learning methodology to detect MCI and AD from qEEG time-frequency (TF) images of subjects in an eyes-closed resting state (ECR). Participants and Methods: The dataset consisted of 16,910 TF images from 890 subjects: 269 healthy controls (HC), 356 MCI, and 265 AD. First, EEG signals were transformed into TF images using a Fast Fourier Transform (FFT) capturing event-related changes of frequency sub-bands, preprocessed with the EEGlab toolbox in the MATLAB R2021a environment. The preprocessed TF images were fed to a convolutional neural network (CNN) with adjusted parameters. For classification, the computed image features were concatenated with age data and passed through a feed-forward neural network (FNN). Results: The performance metrics of the trained models (HC vs MCI, HC vs AD, and HC vs CASE (MCI + AD)) were evaluated on the test dataset. The accuracy, sensitivity, and specificity were 83%, 93%, and 73% for HC vs MCI; 81%, 80%, and 83% for HC vs AD; and 88%, 80%, and 90% for HC vs CASE (MCI + AD), respectively. Conclusion: The proposed models, trained with TF images and age, can assist clinicians as a biomarker for detecting cognitively impaired subjects at an early stage in clinical settings.
