Machine Learning Predictive Algorithm for Temperature-Sensing Electric Vehicle Battery Enclosure
- Conference Article
- 10.1115/smasis2024-140078
- Sep 9, 2024
Electric Vehicles (EVs) are a favorable and rapidly growing strategy for reducing carbon emissions. However, the most commonly used power source in EVs, Lithium-Ion Batteries (LIBs), can pose a significant safety risk in the form of thermal runaway, a fast-acting and dangerous failure mode that may lead to fires and explosions. To address this issue, the authors' previous work developed a self-sensing composite battery enclosure with embedded micro-temperature sensors to provide LIB condition monitoring. The prior work produced extensive experimental and simulation results characterizing an enclosure-embedded battery management system; it found that the top composite layer causes a time lag in temperature detection, impeding an early warning signal. The current study aims to create a regression model leveraging machine learning (ML) strategies to predict interior battery enclosure temperatures when trained on the prior study's thermal experiments and simulations. The temperature inference model predicts the enclosure's surface temperatures from the embedded temperature measurements in real time, compensating for the time lag. Random Forest (RF) and Recurrent Neural Network (RNN) ML models are compared in terms of performance and computational cost. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to quantify the prediction accuracies of both approaches. The temperature inference model enhances the practicality of a self-sensing composite battery enclosure as a battery management system, mitigating the risks associated with LIB thermal runaway events. By monitoring embedded temperature changes and predicting temperatures on the interior surface of the enclosure, the system provides insights into potential hazards, enabling timely interventions and ensuring EV safety.
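The two error metrics named in the abstract can be stated compactly. A minimal pure-Python sketch, using made-up temperature values rather than the study's measurements:

```python
# Minimal sketch of mean absolute error (MAE) and mean absolute percentage
# error (MAPE). The temperature values below are illustrative assumptions,
# not the paper's data.

def mae(actual, predicted):
    """Mean absolute error: average magnitude of the prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (assumes nonzero actuals)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical interior-surface temperatures (deg C) vs. model predictions.
actual = [25.0, 30.0, 40.0, 55.0]
predicted = [24.0, 31.0, 38.0, 56.0]

print(round(mae(actual, predicted), 3))   # -> 1.25 (average error in deg C)
print(round(mape(actual, predicted), 3))  # -> 3.538 (average error in percent)
```

MAE keeps the physical units (here degrees), while MAPE normalizes by the actual value, which is why both are often reported together.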
- Research Article
- 10.1016/j.eja.2024.127297
- Aug 10, 2024
- European Journal of Agronomy
Improving carbon flux estimation in tea plantation ecosystems: A machine learning ensemble approach
- Research Article
- 10.1155/2022/8089428
- Jan 1, 2022
- Complexity
The permeability coefficient of soils is an essential measure for designing geotechnical constructions. The aim of this paper was to select the highest-performing and most reliable machine learning (ML) model to predict the permeability coefficient of soil, and to quantify each feature's importance to the predicted value using SHapley Additive exPlanations (SHAP) and one-dimensional Partial Dependence Plots (PDP 1D). To this end, five single ML algorithms, K-nearest neighbors (KNN), support vector machine (SVM), light gradient boosting machine (LightGBM), random forest (RF), and gradient boosting (GB), were used to build models for predicting the permeability coefficient of soils. Performance criteria for the ML models include the coefficient of determination (R²), root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE). The best-performing and most reliable single ML model on the testing dataset is the gradient boosting (GB) model, with R² = 0.971, RMSE = 0.199 × 10⁻¹¹ m/s, MAE = 0.161 × 10⁻¹¹ m/s, and MAPE = 0.185%. To identify and quantify feature importance, sensitivity studies using permutation importance, SHAP, and PDP 1D were performed with the GB model. The order of feature effects on the predicted permeability coefficient is: plasticity index and density > water content, liquid limit, and plastic limit > clay content > void ratio. The plasticity index and density of soil are therefore the first soil properties to measure when assessing the permeability coefficient.
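Permutation importance, one of the sensitivity analyses the paper applies, scores a feature by how much the model's error grows when that feature's column is shuffled. A self-contained sketch, where a known toy function stands in for the trained GB model and the data are illustrative assumptions:

```python
# Permutation importance sketch: shuffle one feature column and measure the
# increase in MSE. A fixed toy function replaces a trained model here.
import random

def model(x):  # stand-in "trained model": depends strongly on x[0], weakly on x[1]
    return 3.0 * x[0] + 0.1 * x[1]

def mse(ys, preds):
    return sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys)

def permutation_importance(X, y, feature, seed=0):
    """Increase in MSE when one feature column is shuffled."""
    base = mse(y, [model(row) for row in X])
    rng = random.Random(seed)
    column = [row[feature] for row in X]
    rng.shuffle(column)
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, column)]
    return mse(y, [model(row) for row in X_perm]) - base

rng = random.Random(42)
X = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(200)]
y = [model(row) for row in X]  # noiseless targets, so the baseline MSE is 0

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
print(imp0 > imp1)  # the strongly weighted feature dominates
```

The same ranking idea, applied to the real GB model, yields orderings like the plasticity-index-first result reported in the abstract.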
- Research Article
- 10.3390/jcm14186373
- Sep 10, 2025
- Journal of Clinical Medicine
Background: Suicide remains a leading cause of death among youth, yet effective tools to predict suicide attempts (SA) in individuals under 18 are scarce. This study aims to develop machine learning (ML) models to predict SA in paediatric populations using Google Trends data. Methods: Relative Search Volumes (RSVs) from Google Trends were analysed for terms linked to suicide risk factors. Pearson Correlation Coefficients (PCC) identified terms strongly associated with SA rates. Based on these, several ML models were developed and evaluated, including Random Forest Regression, Support Vector Regression (SVR), XGBoost, and Linear Regression. Model performance was assessed using metrics such as PCC, mean absolute error (MAE), mean squared error (MSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). Results: Terms related to suicide prevention and symptoms, including psychiatrist and anxiety disorder, showed the strongest correlations with SA rates (PCC ≥ 0.90). Random Forest Regression emerged as the top-performing ML model (PCC = 0.953, MAPE = 20.12%, RMSE = 17.21), highlighting burnout, anxiety disorder, antidepressants, and psychiatrist as key predictors of SA. Other models’ scores were XGBoost (PCC = 0.446, MAPE = 22.57%, RMSE = 18.03), SVR (PCC = 0.833, MAPE = 42.23%, RMSE = 47.32) and Linear Regression (PCC = 0.947, MAPE = 23.64%, RMSE = 17.66). Conclusions: Google Trends–based ML models suggest potential utility for short-term prediction of youth SA. These preliminary findings support the utility of search data in identifying real-time suicide risk in paediatric populations.
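The screening step pairs each search term's relative search volume (RSV) series with the attempt-rate series via the Pearson correlation coefficient (PCC). A minimal pure-Python implementation on made-up numbers, not the study's RSV data:

```python
# Pearson correlation coefficient between two equal-length series.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rsv = [10, 20, 30, 40, 50]           # hypothetical monthly RSV for one term
rate = [1.1, 2.0, 2.9, 4.2, 5.0]     # hypothetical attempt-rate series
print(round(pearson(rsv, rate), 3))  # close to 1: would pass a PCC >= 0.90 screen
```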
- Research Article
- 10.1136/heartjnl-2020-318726
- Jun 11, 2021
- Heart
Objectives: To evaluate a predictive model for robust estimation of daily out-of-hospital cardiac arrest (OHCA) incidence using a suite of machine learning (ML) approaches and high-resolution meteorological and chronological data. Methods: In this population-based study, we combined a nationwide OHCA registry with high-resolution meteorological and chronological datasets from Japan. We developed a model to predict daily OHCA incidence with a training dataset for 2005–2013 using the eXtreme Gradient Boosting algorithm. A dataset for 2014–2015 was used to test the predictive model. The main outcome was the accuracy of the predictive model for the number of daily OHCA events, based on mean absolute error (MAE) and mean absolute percentage error (MAPE). In general, a model with a MAPE of less than 10% is considered highly accurate. Results: Among the 1 299 784 OHCA cases, 661 052 cases of cardiac origin (525 374 in the training dataset, on which fourfold cross-validation was performed, and 135 678 in the testing dataset) were included in the analysis. Compared with the ML models using meteorological or chronological variables alone, the ML model combining meteorological and chronological variables had the highest predictive accuracy in the training (MAE 1.314, MAPE 7.007%) and testing datasets (MAE 1.547, MAPE 7.788%). Sunday, Monday, holidays, winter, low ambient temperature, and large interday or intraday temperature differences were more strongly associated with OHCA incidence than the other meteorological and chronological variables. Conclusions: An ML predictive model using comprehensive daily meteorological and chronological data allows for highly precise estimates of OHCA incidence.
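Validating a daily-incidence model of this kind typically involves k-fold cross-validation over the training data. A minimal sketch of the fold bookkeeping (illustrative only, not the study's XGBoost pipeline):

```python
# Partition sample indices into k contiguous, near-equal validation folds;
# each fold is held out once while the rest train the model.

def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    folds = []
    start = 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(10, 4)
val = folds[0]                              # hold out one fold
train = [i for f in folds[1:] for i in f]   # train on the rest
print(folds)  # every index lands in exactly one validation fold
print(val, train)
```

In practice time-series data would be split chronologically (as the paper's 2005–2013 vs. 2014–2015 split is), with the fold logic applied only within the training years.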
- Research Article
- 10.1136/bjo.52.1.40
- Jan 1, 1968
- British Journal of Ophthalmology
- Research Article
- 10.1093/jcde/qwaa010
- Feb 1, 2020
- Journal of Computational Design and Engineering
Prediction of the deflections of reinforced concrete (RC) flexural structures is vital to evaluating the workability and safety of structures over their life cycle. Empirical methods are limited in predicting the long-term deflection of RC structures because they cannot easily account for all influencing factors. This study presents data-driven machine learning (ML) models for the early prediction of long-term deflections in RC structures. An experimental dataset was used to build and evaluate single and ensemble ML models, which were trained and tested using stratified 10-fold cross-validation. Analytical results revealed that the ML model is effective in predicting the deflection of RC structures, with good accuracy: a correlation coefficient (R) of 0.972, a root mean square error (RMSE) of 8.190 mm, a mean absolute error (MAE) of 4.597 mm, and a mean absolute percentage error (MAPE) of 16.749%. In a performance comparison against empirical methods, the prediction accuracy of the ML model improved by up to 66.41% in RMSE and up to 82.04% in MAE. As a contribution, this study proposes an effective ML model to help designers forecast long-term deflections in RC structures early and evaluate their long-term serviceability and safety.
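The reported improvements ("up to 66.41% in the RMSE") are relative error reductions against empirical baselines. A one-liner makes the arithmetic explicit; only the ML RMSE of 8.190 mm comes from the abstract, while the empirical-method RMSE below is a hypothetical value:

```python
# Relative error reduction of an ML model against a baseline method.

def relative_improvement(baseline, improved):
    """Percentage reduction of `improved` relative to `baseline`."""
    return 100.0 * (baseline - improved) / baseline

rmse_ml = 8.190          # mm, from the abstract
rmse_empirical = 20.0    # mm, hypothetical empirical-method error
print(round(relative_improvement(rmse_empirical, rmse_ml), 2))  # -> 59.05
```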
- Research Article
- 10.1016/j.psj.2024.104458
- Oct 29, 2024
- Poultry Science
Predicting egg production rate and egg weight of broiler breeders based on machine learning and Shapley additive explanations
- Research Article
- 10.1002/est2.542
- Dec 14, 2023
- Energy Storage
Battery State of Health (SoH) estimation is a critical task in battery management, as it provides information about the remaining capacity and health of a battery. Various machine learning algorithms, including neural networks, decision trees, support vector machines, and random forests, have been utilized for battery SoH estimation. These models can be trained using different features, such as voltage, current, temperature, impedance, and their combinations. However, the diversity of the data is a decisive factor that affects the precision of battery SoH estimation using machine learning. In this research, feedforward neural networks (FNNs) and recurrent neural networks (RNNs) are explored for accurately estimating battery SoH. These approaches are chosen for the inherent strengths of FNNs and RNNs in capturing the long-term dependencies present in sequential data. The SoH estimations are evaluated using single and multichannel inputs: voltage, current, voltage-current, voltage-temperature, and voltage-current-temperature. The experimental findings reveal that the proposed RNN model, specifically the 20-neuron variant (RNN20), exhibits enhanced accuracy in predicting battery SoH. For instance, when utilizing voltage and current as inputs, the RNN20 model demonstrated superior performance, achieving a mean absolute error (MAE) of 0.010157 for voltage and 0.010367 for current, outperforming the FNN with 10 neurons (FNN10), which yielded MAEs of 0.031635 and 0.065 for voltage and current, respectively. Furthermore, when employing input combinations such as voltage-current, voltage-temperature, and voltage-current-temperature, the RNN20 model consistently outperformed its counterparts, exhibiting the lowest mean squared error and mean absolute percentage error across all metrics. These results underscore the RNN20 model's robustness in accurately predicting battery SoH, affirming the merits of employing RNNs in battery management systems.
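Multichannel inputs such as voltage-current are typically fed to a sequence model as sliding windows of channel-stacked time steps. A minimal windowing sketch, using made-up per-cycle values rather than the paper's dataset:

```python
# Build overlapping fixed-length windows from several measurement channels,
# stacking the channels at each time step (the usual RNN input layout).

def make_windows(channels, length):
    """Slide a window of `length` steps over channel-stacked samples."""
    samples = list(zip(*channels))           # one (v, i, ...) tuple per step
    return [samples[t:t + length] for t in range(len(samples) - length + 1)]

voltage = [4.2, 4.1, 4.0, 3.9, 3.8]   # hypothetical per-cycle voltage
current = [1.0, 1.1, 1.0, 0.9, 0.9]   # hypothetical per-cycle current
windows = make_windows([voltage, current], length=3)
print(len(windows))   # -> 3 overlapping windows
print(windows[0])     # -> [(4.2, 1.0), (4.1, 1.1), (4.0, 1.0)]
```

Each window would be paired with the SoH label for its final cycle when training the estimator.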
- Research Article
- 10.1088/1402-4896/ad6cad
- Aug 22, 2024
- Physica Scripta
Incorporating zero-carbon sources of energy into the electric grid is essential to meet growing energy needs in the public and industrial sectors. Smart grids, with their cutting-edge sensing and communication technologies, provide an effective approach to integrating renewable energy resources and managing power systems efficiently. Improving solar energy efficiency remains a challenge within smart grid infrastructures; nonetheless, recent progress in artificial intelligence (AI) presents promising opportunities to improve energy production control and management. In this study, two machine learning (ML) models, a Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM), were first employed to forecast solar power plant parameters. The analysis revealed that the LSTM model performed better than the RNN in terms of Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Mean Squared Error (MSE). After a review of the LSTM model's graphical results, it was further enhanced by combining an autoencoder with the LSTM, creating the Autoencoder LSTM (AELSTM) model. Next, a new hybrid model was introduced: the Convolutional Neural Network-Autoencoder Long Short-Term Memory (HCAELSTM), designed to boost prediction accuracy. These models were trained and assessed on one year of real-time solar power plant data. Ultimately, the hybrid HCAELSTM model surpassed the AELSTM model in terms of MAPE, MAE, and MSE, excelling in MAPE for Daily Power Production, Peak Grid Power Production, and Solar Radiance with low scores of 1.175, 2.116, and 1.592, respectively, demonstrating superior accuracy. The study underscores the importance of AI and ML, in particular the hybrid HCAELSTM model, in enhancing the smart grid's ability to integrate renewable energy sources. The hybrid model excels at accurately forecasting key measurements, improving solar power generation efficiency within the smart grid system, which also plays a key role in the broader shift toward the fourth energy revolution.
- Research Article
- 10.1038/s41598-021-04238-z
- Jan 11, 2022
- Scientific Reports
The sea surface temperature (SST) is an environmental indicator closely related to climate, weather, and atmospheric events worldwide. Its forecasting is essential for supporting the decisions of governments and environmental organizations. The literature has shown that single machine learning (ML) models are generally more accurate than traditional statistical models for SST time series modeling. However, tuning the parameters of these ML models is a challenging task, mainly when complex phenomena such as SST forecasting are addressed. Issues related to misspecification, overfitting, or underfitting of the ML models can lead to underperforming forecasts. This work proposes using hybrid systems (HS) that combine ML models through residual forecasting as an alternative to enhance the performance of SST forecasting. In this context, two types of combinations are evaluated using two ML models: support vector regression (SVR) and long short-term memory (LSTM). The experimental evaluation was performed on three datasets from different regions of the Atlantic Ocean using three well-known measures: mean square error (MSE), mean absolute percentage error (MAPE), and mean absolute error (MAE). The best HS based on SVR improved the MSE value for each analyzed series by 82.26%, 98.93%, and 65.03% compared to its respective single model. The HS employing the LSTM improved by 92.15%, 98.69%, and 32.41% relative to the single LSTM model. Compared to literature approaches, at least one version of HS attained higher accuracy than statistical and ML models in all study cases. In particular, the nonlinear combination of the ML models obtained the best performance among the proposed HS versions.
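The residual-combination idea behind such hybrid systems can be sketched with naive stand-in models: a first model forecasts the series, a second forecasts the first model's residuals, and the hybrid adds the two. Persistence rules and made-up data replace the paper's SVR/LSTM components here:

```python
# Hybrid-system sketch: forecast the series, then forecast the residual
# (error) series, and sum the two forecasts.

def persistence_forecast(series):
    """Predict each value as the previous observation (naive stand-in model)."""
    return series[:1] + series[:-1]

def mse(actual, pred):
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

series = [20.0, 21.0, 22.0, 23.0, 24.0]   # hypothetical trending SST readings
first = persistence_forecast(series)       # stage-1 forecast
residuals = [a - p for a, p in zip(series, first)]
second = persistence_forecast(residuals)   # stage-2: model the error series
hybrid = [p + r for p, r in zip(first, second)]

print(mse(series, first), mse(series, hybrid))  # -> 0.8 0.2
```

On this trending toy series the residuals are autocorrelated, so modeling them cuts the error; that is the same mechanism the paper's HS exploits with stronger component models.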
- Research Article
- 10.1016/j.resourpol.2024.105040
- Apr 30, 2024
- Resources Policy
How good are different machine and deep learning models in forecasting the future price of metals? Full sample versus sub-sample
- Research Article
- 10.1016/j.trgeo.2024.101254
- Apr 18, 2024
- Transportation Geotechnics
Optimized machine learning models for predicting crown convergence of plateau mountain tunnels
- Research Article
- 10.46481/jnsps.2024.2079
- Sep 8, 2024
- Journal of the Nigerian Society of Physical Sciences
Wind energy, if properly harnessed, could serve as a source of energy generation across Africa. This study compared the performance of two Machine Learning (ML) algorithms, Linear Regression and Random Forest, in predicting wind speed in five major African cities (Yaoundé, Pretoria, Nairobi, Cairo and Abuja). Wind data were collected between January 1, 2000, and December 31, 2022, from the Solar Radiation Data Archive. After preprocessing, 80% of the data were used for training and 20% for validation. The performance of the ML algorithms was evaluated using Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and the coefficient of determination (R²). The results show that Nairobi (3.814795 m/s), closely followed by Cairo (3.606453 m/s), has the highest mean wind speed, while Yaoundé (1.090512 m/s) has the lowest. Based on the performance metrics used, the two ML algorithms were competitive; still, the Linear Regression (LR) algorithm outperformed the Random Forest algorithm in predicting wind speed in all the selected cities: Yaoundé (RMSE = 0.3892, MAE = 0.3001, MAPE = 0.5030), Pretoria (RMSE = 1.2339, MAE = 0.9480, MAPE = 0.7450), Nairobi (RMSE = 0.6499, MAE = 0.5171, MAPE = 0.1872), Cairo (RMSE = 1.0909, MAE = 0.8544, MAPE = 0.3541) and Abuja (RMSE = 0.70245, MAE = 0.5441, MAPE = 0.4515). Therefore, the Linear Regression algorithm is more reliable than Random Forest regression for predicting wind speed in these cities.
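A least-squares linear regression with an 80/20 split, scored by RMSE, can be sketched in a few lines; the single synthetic feature and noiseless target below stand in for the study's wind data:

```python
# Ordinary least squares fit of y = a*x + b on the first 80% of the data,
# evaluated by RMSE on the held-out 20%. All values are illustrative.
import math

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def rmse(ys, preds):
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys))

xs = list(range(10))                  # e.g. day index
ys = [2.0 * x + 1.0 for x in xs]      # noiseless synthetic "wind speed"
split = int(0.8 * len(xs))            # 80% train, 20% validation
a, b = fit_line(xs[:split], ys[:split])
preds = [a * x + b for x in xs[split:]]
print(round(a, 3), round(b, 3))       # recovers slope 2.0 and intercept 1.0
print(rmse(ys[split:], preds))        # ~0 on noiseless data
```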
- Research Article
- 10.4314/tjs.v50i2.15
- Jun 30, 2024
- Tanzania Journal of Science
Human immunodeficiency virus infection/acquired immune deficiency syndrome (HIV/AIDS) is a global pandemic that has claimed more than 40 million lives since it was discovered in the late 1970s, including across sub-Saharan Africa and Tanzania. Measures to combat the disease, such as the UNAIDS 90-90-90 target, which aimed to curb HIV by 2020 and has since been moved to 2030, have not been attained. Proper tools to control and monitor the disease and to ensure early intervention are therefore very important. Predicting disease trends with Machine Learning (ML) models can accelerate progress toward the UNAIDS targets by providing accurate insights into those trends. The performance of ML models depends on many factors, including the datasets that influence model generalization. This study aims to identify the best deep-learning model for predicting HIV incidence in Tanzania. Four deep-learning models were studied: a recurrent neural network (RNN), a Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and a 2D convolution layer (Conv2D). HIV data were collected from District Health Information System 2 (DHIS2), the national Health Management Information System (HMIS), covering 26 regions of Mainland Tanzania from January 2015 to October 2022. Model accuracy was evaluated using three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results show that Conv2D achieved the lowest average training time for short-term predictions, while the RNN recorded the highest accuracy with the lowest MAE in all considered cases. The GRU was the fastest for long-term predictions, and the LSTM reported the best accuracy.