State of Health Prediction of Lithium-Ion Batteries Based on IDO-GRU
Highlights
- A combined model that integrates an improved Dandelion Optimization (IDO) algorithm with a Gated Recurrent Unit (GRU) neural network.
- The Dandelion Optimization algorithm is improved with a Euclidean distance strategy, a golden sine search mechanism, and adaptive inertia weights for combined-model optimization.
- Health indicators related to time, temperature, and voltage are extracted for indirect SOH prediction.
- Gaussian filtering is applied to preprocess the extracted features, mitigating the adverse effects of noise introduced during data acquisition.
- The RMSE and MAPE of the SOH prediction results are below 0.0048 and 0.5%, respectively.
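The Gaussian-filtering preprocessing step from the highlights can be sketched directly. The paper does not state its kernel width or boundary handling, so the sigma, truncation radius, and edge replication below are illustrative assumptions (pure Python, no SciPy):

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete Gaussian kernel, normalized to sum to 1."""
    vals = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def gaussian_smooth(signal, sigma=1.0):
    """Smooth a 1-D health-indicator series; edges are handled by replication."""
    radius = max(1, int(3 * sigma))
    kernel = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # replicate edge samples
            acc += w * signal[idx]
        out.append(acc)
    return out

# Noisy health-indicator series with a measurement spike at index 3.
noisy = [1.0, 1.2, 0.8, 5.0, 0.9, 1.1, 1.0]
smooth = gaussian_smooth(noisy, sigma=1.0)
```

The spike is strongly attenuated after smoothing, which is the effect the highlight attributes to the filter: noise from data acquisition is suppressed before the features reach the GRU.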
- Conference Article
5
- 10.1109/cac51589.2020.9327749
- Nov 6, 2020
Short-term traffic flow forecasting is at the heart of intelligent transportation systems (ITS). Accurate traffic flow prediction can help people choose trip mode and trip time. Although the gated recurrent unit (GRU) performs well in traffic flow forecasting, determining its hyperparameters by experience reduces the model's predictive power. This study uses an improved particle swarm optimization (IPSO) algorithm with an adaptive learning strategy to optimize the hyperparameters of the GRU model. The algorithm matches the characteristics of traffic data with network topology, so the accuracy of traffic flow prediction can be improved. To verify its reliability, this study constructs an IPSO-GRU model on traffic flow data from the California Department of Transportation and compares it with other traffic flow forecasting models. The experimental results show that the IPSO-GRU model achieves the lowest mean square error (MSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) compared to the conventional GRU model.
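The hyperparameter-optimization loop this abstract describes (PSO searching the GRU's hyperparameter space by validation loss) can be sketched generically. The toy objective below stands in for an actual GRU training-and-validation run, and the bounds, swarm size, and coefficients are assumptions, not the paper's values:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic PSO; f stands in for GRU validation loss over hyperparameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the new position to the search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in: pretend validation loss is minimized at
# hidden_size = 64 and learning_rate = 0.01.
loss = lambda p: (p[0] - 64) ** 2 + (100 * (p[1] - 0.01)) ** 2
best, best_val = pso_minimize(loss, [(8, 256), (0.0001, 0.1)])
```

In the paper's setting, evaluating `f` would mean training a GRU with those hyperparameters and returning its validation error, which is why the search budget matters far more than in this toy case.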
- Research Article
- 10.1063/5.0270185
- Jun 1, 2025
- AIP Advances
Text classification is a key task in natural language processing that entails sorting textual information into specified categories. Over time, techniques for text classification have progressed from rule-based methods to more advanced deep learning and machine learning approaches. Conventional approaches often struggled with the language intricacies, including issues like contextual links, polysemy, and the ambiguity among words. Nevertheless, neural networks have greatly enhanced text classification by identifying intricate relationships and patterns within text data. Although there have been significant advancements, text classification continues to face challenges, especially when dealing with high-dimensional and large-scale datasets, grasping the contextual meanings of words, and capturing sequential dependencies. In the present research, a Gated Recurrent Unit (GRU) optimized by the Improved Seagull Optimization (ISO) algorithm was utilized to address these issues, resulting in notable improvements in classification performance. The methodology utilized in the current research comprised several phases to guarantee optimum results. Preprocessing was an essential phase, which included addressing missing data, special character and punctuation removal, handling contractions, text normalization, noise removal, and stopword removal. Dimensionality reduction was performed with SVD (Singular Value Decomposition) to reduce the feature set by keeping only the most pertinent data. Contextual embeddings were created utilizing BERT, which offered rich semantic representations of the text and further improved the quality of the input features. Ultimately, the GRU was employed for classification, utilizing the ISO for optimization. This integration of preprocessing, feature extraction, and dimensionality reduction was highly efficacious in overcoming the challenges of text classification. 
The suggested model illustrated improved efficacy on the Yelp-5 and Yelp-2 datasets, gaining mean accuracy, precision, recall, and F1-score values of 98.47%, 98.71%, 98.92%, and 98.81%, respectively. The outcomes highlight the model’s reliability and strength, exceeding all baseline models, namely GRU, BiGRU, BiLSTM, KNN, LSTM, and CNN. The considerable enhancement over these networks represents the suggested method’s capability in addressing intricate text classification tasks. In conclusion, this work presented a strong architecture for text classification that integrates preprocessing techniques, feature extraction using BERT, dimensionality reduction via SVD, and GRU optimized by ISO for classification.
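The SVD dimensionality-reduction phase can be illustrated with a plain NumPy sketch: keep only the top-k singular directions of the feature matrix (BERT embeddings in the paper). Matrix sizes and k below are arbitrary:

```python
import numpy as np

def svd_reduce(X, k):
    """Project feature matrix X (n_samples x n_features) onto its
    top-k singular directions, keeping the most informative structure."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]  # reduced representation, shape (n_samples, k)

# Synthetic rank-2 "embedding" matrix: 6 documents, 50 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 50))
Z = svd_reduce(X, 2)
```

Because the synthetic matrix has rank 2, the 2-dimensional projection preserves all of its structure; on real high-dimensional text features, k trades compression against information loss.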
- Research Article
- 10.1063/5.0251643
- Mar 1, 2025
- Physics of Fluids
The pressure fluctuation data in the pump-turbine runner region exhibit significant nonlinearity. Neural networks are employed to analyze pressure fluctuations in order to determine the occurrence of cavitation phenomena. This paper presents a model that utilizes a VMD (variational mode decomposition)-optimized algorithm combined with GRU (gated recurrent unit)–attention for the prediction of pressure fluctuations, aiming to facilitate the forecasting of cavitation-induced failures. Using data collected from a real machine over the course of one day, predictions were made using three different models: a standalone GRU, a combination of GRU and attention mechanisms, and a combination of VMD and three different optimization algorithms. The evaluation of prediction performance indicates that the VMD–dung beetle optimization–GRU–attention model not only captures the nonlinear characteristics of the actual values but also aligns more closely with the trend of the real data. The error assessment results demonstrate that this model exhibits superior predictive performance. Pressure pulsation predictions were analyzed at three locations: between the runner and the guide vanes, at the top cover, and at the bottom cover. This method enables effective predictions of cavitation conditions up to 50 minutes in advance, showcasing its potential for practical engineering applications.
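The GRU–attention idea (weighting hidden states by learned relevance before prediction) can be shown in miniature. The scoring vector below is random rather than learned, so this only demonstrates the pooling mechanics, not the trained model:

```python
import numpy as np

def attention_pool(H, v):
    """Score each timestep's hidden state with vector v, softmax the scores,
    and return (attention weights, weighted context vector)."""
    scores = H @ v
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return w, w @ H

# 4 timesteps of 3-unit GRU hidden states; v is a stand-in scoring vector
# (learned during training in the actual model).
H = np.arange(12, dtype=float).reshape(4, 3)
v = np.random.default_rng(1).normal(size=3)
w, ctx = attention_pool(H, v)
```

The context vector `ctx` replaces the usual "last hidden state" as input to the output layer, letting the model emphasize whichever timesteps best predict the pressure fluctuation.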
- Research Article
19
- 10.1016/j.oceaneng.2023.115977
- Oct 17, 2023
- Ocean Engineering
The prediction of ship motion attitude in seaway based on BSO-VMD-GRU combination model
- Book Chapter
6
- 10.1007/978-3-319-91253-0_50
- Jan 1, 2018
Due to the growing need for metaheuristics with features that allow their implementation for real-time problems, this paper proposes an adaptive individual inertia weight in each iteration considering global and individual analysis, i.e., the best, worst and individual particles’ performance. As a result, the proposed adaptive individual inertia weight presents faster convergence for the Particle Swarm Optimization (PSO) algorithm when compared to other inertia mechanisms. The proposed algorithm is also suitable for real-time problems when the actual optimum is difficult to be attained, since a feasible and optimized solution is found in comparison to an initial solution. In this sense, the PSO with the proposed adaptive individual inertia weight was tested using eight benchmark functions in the continuous domain. The proposed PSO was compared to other three algorithms, reaching better optimized results in six benchmark functions at the end of 2000 iterations. Moreover, it is noteworthy to mention that the proposed adaptive individual inertia weight features rapid convergence for the PSO algorithm in the first 1000 iterations.
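One plausible form of the per-particle adaptive inertia weight this chapter describes, driven by the best, worst, and individual fitness in the swarm (the paper's exact formula may differ):

```python
def adaptive_inertia(f_i, f_best, f_worst, w_min=0.4, w_max=0.9):
    """Per-particle inertia for a minimization problem: particles near the
    swarm's best fitness get low inertia (exploit their neighborhood),
    particles near the worst get high inertia (keep exploring)."""
    if f_worst == f_best:
        return w_min  # degenerate swarm: everyone is equally good
    ratio = (f_i - f_best) / (f_worst - f_best)  # 0 for best, 1 for worst
    return w_min + (w_max - w_min) * ratio

# Demo: best, worst, and mid-swarm particles in one iteration.
w_best = adaptive_inertia(1.0, 1.0, 5.0)
w_worst = adaptive_inertia(5.0, 1.0, 5.0)
w_mid = adaptive_inertia(3.0, 1.0, 5.0)
```

Assigning inertia individually rather than globally is what gives the scheme its fast convergence: good particles immediately switch to fine-grained local search while poor ones retain momentum.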
- Book Chapter
1
- 10.1007/978-981-15-3425-6_15
- Jan 1, 2020
In recent years, many population-based swarm intelligence (SI) algorithms have been developed for solving optimization problems. The pigeon-inspired optimization (PIO) algorithm is a newer method regarded as a balanced combination of global and local search through its map-and-compass operator and landmark operator. In this paper, we propose a novel adaptive nonlinear inertia weight applied to the velocity update in order to improve convergence speed. Additionally, a one-dimension modification mechanism is introduced during the iterative process, which aims to avoid the loss of good partial solutions caused by interference among the dimensions. For separable functions, the one-dimension modification mechanism is especially effective for search performance. Our approach combines the adaptive inertia weight with the one-dimension modification strategy, enhancing the ability to explore the search space. Comprehensive experimental results indicate that the proposed PIO outperforms the basic PIO, other improved PIO variants, and other improved SI methods in terms of solution quality.
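The adaptive nonlinear inertia weight can be sketched as an iteration-dependent decay schedule; the exponent and endpoint values below are illustrative choices, not the paper's:

```python
def nonlinear_inertia(t, T, w_start=0.9, w_end=0.4, power=2.0):
    """Nonlinear decay of inertia over iterations t = 0..T: the weight stays
    high early (exploration) and drops quickly late (fast convergence)."""
    return w_end + (w_start - w_end) * (1 - t / T) ** power
```

Compared with the common linear schedule, the nonlinear curve spends proportionally more of the run at high inertia, which matches the convergence-speed motivation given in the abstract.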
- Book Chapter
1
- 10.1007/978-981-16-5078-9_25
- Jan 1, 2021
A multi-objective discrete particle swarm optimization (MODPSO) algorithm is useful in accurately identifying communities in a network by avoiding the pitfalls of modularity optimized discrete PSO algorithms. Inertia weights in a PSO can be used to guide the flight of particles in PSO by modifying step size of the particles. In this paper, we present a new adaptive inertia weight based MODPSO and compare it with other good inertia weight approaches by applying them to three real-world datasets. Our algorithm demonstrates consistently the best results among various inertia weight strategies in three real-world datasets with maximum Q (modularity score) values of 0.457, 0.527728 and 0.60457 for Zachary’s Karate Club, Bottlenose Dolphins and American College Football datasets, respectively. Adaptive inertia weight strategy is able to perform consistently by adaptively determining the step size of the velocity update equation. To the best of our knowledge, this is the first such attempt to explore the adaptive inertia weight technique with MODPSO in the field of community detection in complex networks.
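The modularity score Q reported in this abstract follows Newman's definition, Q = sum over communities c of [ l_c/m - (d_c/2m)^2 ], where l_c is the number of intra-community edges, d_c the total degree of community c, and m the edge count. A minimal pure-Python version on a toy two-community graph:

```python
def modularity(edges, communities):
    """Newman modularity for an undirected graph given as an edge list and a
    list of communities (sets of node ids)."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    comm_of = {n: c for c, nodes in enumerate(communities) for n in nodes}
    q = 0.0
    for c, nodes in enumerate(communities):
        l_c = sum(1 for u, v in edges if comm_of[u] == c and comm_of[v] == c)
        d_c = sum(deg[n] for n in nodes)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge: a natural two-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])
```

This Q is the objective (or one of the objectives) the MODPSO particles maximize; the Q values quoted for the Karate Club, Dolphins, and Football datasets are computed the same way on the discovered partitions.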
- Research Article
- 10.1108/ijicc-11-2024-0589
- Apr 11, 2025
- International Journal of Intelligent Computing and Cybernetics
Purpose: This study aims to enhance stock price prediction accuracy by integrating a gated recurrent unit (GRU) with an artificial rabbit optimization (ARO) algorithm. The objective is to address the issues in hyperparameter optimization and deliver a high-performance predictive model for stock market trends tested on the Dow Jones Industrial Average (DJIA) dataset.
Design/methodology/approach: The proposed ARO-GRU hybrid model uses a GRU for time-series stock price prediction and an ARO to dynamically optimize the model's parameters. ARO-GRU was benchmarked against various models, including single-layer and multi-layer GRU, BiLSTM and long short-term memory (LSTM) models optimized by genetic algorithms (GA) or ARO. Performance was assessed using metrics such as the mean square error (MSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and R-squared (R2).
Findings: The experimental results showed that the ARO-GRU model significantly outperformed its counterparts. Compared to the best alternative model (LSTM-ARO), ARO-GRU reduced the MSE by 81.8% (from 22.731 to 1.864 for the AAPL stock) and the MAPE by 64% (from 0.025 to 0.009). It achieved an average R2 score improvement of 5.3% across all tested stocks, demonstrating a better model fit. In addition, the ARO-GRU model required 83% less computational time than the LSTM-ARO model, further validating its efficiency.
Originality/value: This study introduces the integration of the ARO algorithm with the GRU for stock market prediction, marking a novel combination of efficiency and optimization. By demonstrating significant improvements in prediction accuracy and computation time, this study provides a robust and scalable solution for dynamic stock-trading systems.
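The four evaluation metrics used in this benchmark (MSE, MAE, MAPE, R2) are standard and easy to reproduce; a compact pure-Python version with made-up sample values:

```python
def regression_metrics(y_true, y_pred):
    """MSE, MAE, MAPE, and R^2 as used to compare the forecasting models.
    MAPE assumes no zero values in y_true (reasonable for stock prices)."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mape = sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return mse, mae, mape, r2

# Illustrative values only, not the paper's data.
mse, mae, mape, r2 = regression_metrics([100.0, 110.0, 120.0], [101.0, 108.0, 121.0])
```

Note the percentage improvements in the Findings section compare these metrics across models, so the same formulas must be applied consistently to every candidate.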
- Research Article
24
- 10.1016/j.jhydrol.2020.125726
- Nov 3, 2020
- Journal of Hydrology
Development of a surrogate method of groundwater modeling using gated recurrent unit to improve the efficiency of parameter auto-calibration and global sensitivity analysis
- Research Article
9
- 10.1155/2021/4060740
- Aug 17, 2021
- Journal of Advanced Transportation
In order to accurately analyse the impact of a rainy environment on the characteristics of highway traffic flow, a short-term traffic flow speed prediction model based on the gated recurrent unit (GRU) and adaptive nonlinear inertia weight particle swarm optimization (APSO) was proposed. First, the rainfall and highway traffic flow data were cleaned and then matched according to their spatiotemporal relationship. Second, multivariate analysis of variance was used to assess the significance of the impact of potential factors on traffic flow speed. A GRU-based traffic flow speed prediction model for rainy environments was then proposed and verified on actual road sections under different rainfall scenarios. Because the GRU model's prediction accuracy was low in the continuous-rainfall scenario, the APSO algorithm was used to optimize the parameters of the GRU network; the resulting APSO-GRU prediction model was verified on the same road section and rain scenarios. The results show that the APSO-GRU model offers significantly better prediction stability than the GRU model and better extracts rainfall features during continuous rainfall, with an average prediction accuracy of 96.74%.
- Research Article
- 10.1002/ett.4932
- Jan 7, 2024
- Transactions on Emerging Telecommunications Technologies
Innovative research works in the healthcare sector keep on advancing every day. As the “Internet of Things (IoT)” keeps on evolving, the application of IoT in the medical field is prominent these days. Utilizing IoT devices, alert messages can be sent directly to medical professionals in case of an emergency. So, monitoring the health condition of an individual using IoT technology has become a popular and beneficial method in today's contemporary medical field. With the help of mobile IoT medical equipment, the technology of smart Healthcare Monitoring System (HMS) is proliferating. By utilizing deep learning and IoT technology, the medical diagnosis system has evolved from direct face‐to‐face visits to the hospital to remote telemedicine method. Most of the data generated by the IoT wearable sensors are highly correlated and may consist of outliers. The extraction of the essential attributes from these data is a complicated task. So, “deep learning and machine learning” techniques are adapted to determine the most relevant and appropriate feature required for efficient diagnosis from the unstructured data produced by the IoT devices and thus help in minimizing the redundancy of unnecessary data. Fusing deep learning methods with healthcare IoT made only the essential details to be available for diagnosis. Therefore, a deep learning‐oriented IoT‐based HMS is executed in this work. With the support of several wearable healthcare devices, the required data are acquired. The encryption of the data acquired from standard sources using Optimal Key‐based Advanced Encryption Standard (OK‐AES) is carried out next to assure the security of the sensitive medical data. The keys for AES encryption are optimally chosen with the aid of the Enhanced Heap‐Based Optimizer Algorithm (EHBOA). The encrypted data is transferred to the “cloud platform” for data storage. Once there is a need for the data, then the encrypted data is initially downloaded from the cloud platform. 
Then, using the same AES scheme, the data are decrypted to recover the original data. From the retrieved data, the crucial attributes are extracted. The extracted features are selected in an optimized manner and concatenated with tuned weights to form the weighted feature matrix. This weighted feature matrix is provided as input to the "Adaptive Dilated Transformer Bidirectional Long Short-Term Memory (Bi-LSTM) with Gated Recurrent Unit (GRU) (ADTBi-LSTM-GRU) model." The variables in the ADTBi-LSTM-GRU model are optimized using the EHBOA to provide an accurate classification outcome. The classified disease outcome is obtained from the deployed ADTBi-LSTM-GRU model. Simulations are done to verify the efficiency and reliability of the implemented deep learning and IoT-based HMS.
- Research Article
8
- 10.3390/ani14060863
- Mar 11, 2024
- Animals : an Open Access Journal from MDPI
Simple Summary: This study addresses the critical need for accurate prediction of key environmental factors (temperature, humidity, ammonia, and hydrogen sulfide) in pig houses, essential for pigs' growth and health. Traditional methods face challenges in predictive accuracy and stability. We introduce an innovative OTDBO–TCN–GRU model, a hybrid framework combining the dung beetle algorithm, temporal convolutional network, and gated recurrent unit, enhanced by the Osprey optimization algorithm (OOA). This model synergistically merges DBO's optimization power, TCN's long-term dependency handling, and GRU's proficiency in nonlinear sequence management, offering improved global detection capabilities. The OTDBO–TCN–GRU model showcases superior accuracy in environmental prediction, evident from its mean absolute error (MAE) of 0.0474, mean squared error (MSE) of 0.0039, and correlation coefficient of 0.9871. It significantly surpasses the traditional DBO–TCN–GRU and OOA models, reducing MAE and MSE by 37.2% and 66.7%, and 48.7% and 74.2%, respectively. Moreover, it outperforms mainstream models like GRU, LSTM, and XGBoost in terms of accuracy. This model significantly improves the forecasting of environmental conditions within pig houses, which is vital for maintaining optimal living conditions and ensuring the well-being of pigs.

Temperature and humidity, along with concentrations of ammonia and hydrogen sulfide, are critical environmental factors that significantly influence the growth and health of pigs within porcine habitats. The ability to accurately predict these environmental variables in pig houses is pivotal, as it provides crucial decision-making support for the precise and targeted regulation of the internal environmental conditions. This approach ensures an optimal living environment, essential for the well-being and healthy development of the pigs.
The existing methodologies for forecasting environmental factors in pig houses are currently hampered by low predictive accuracy and significant fluctuations in environmental conditions. To address these challenges, this study proposes a hybrid model incorporating the improved dung beetle algorithm (DBO), temporal convolutional networks (TCNs), and gated recurrent units (GRUs) for the prediction and optimization of environmental factors in pig barns. The model enhances the global search capability of DBO by introducing the Osprey optimization algorithm (OOA). The hybrid model uses the optimization capability of DBO to initially fit the time-series data of environmental factors, and subsequently combines the long-term dependence capture capability of TCNs and the nonlinear sequence processing capability of GRUs to accurately predict the residuals of the DBO fit. In the prediction of ammonia concentration, the OTDBO–TCN–GRU model shows excellent performance with mean absolute error (MAE), mean square error (MSE), and coefficient of determination (R2) of 0.0474, 0.0039, and 0.9871, respectively. Compared with the DBO–TCN–GRU model, OTDBO–TCN–GRU achieves significant reductions of 37.2% and 66.7% in MAE and MSE, respectively, while the R2 value is improved by 2.5%. Compared with the OOA model, the OTDBO–TCN–GRU achieved 48.7% and 74.2% reductions in the MAE and MSE metrics, respectively, while the R2 value improved by 3.6%. In addition, the improved OTDBO–TCN–GRU model has a prediction error of less than 0.3 mg/m3 for environmental gases compared with other algorithms and is less affected by sudden environmental changes, which shows the robustness and adaptability of the model for environmental prediction. Therefore, the OTDBO–TCN–GRU model, as proposed in this study, optimizes the predictive performance of environmental-factor time series and offers substantial decision support for environmental control in pig houses.
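The residual-hybrid structure described here (a base fit of the series plus a second model trained on the base fit's residuals) reduces to a simple sum at prediction time. The stand-in functions below replace the DBO-optimized fit and the TCN–GRU residual predictor:

```python
def hybrid_predict(x, base_fit, residual_model):
    """Hybrid forecast = coarse base fit + learned correction of its residuals."""
    return base_fit(x) + residual_model(x)

# Toy stand-ins: the true signal is a linear trend plus an alternating wiggle.
true_signal = lambda x: 2 * x + (1 if x % 2 else -1)
base = lambda x: 2 * x                  # crude trend fit (DBO-fitted in the paper)
resid = lambda x: 1 if x % 2 else -1    # residual corrector (TCN-GRU in the paper)
preds = [hybrid_predict(x, base, resid) for x in range(5)]
```

The division of labor is the point: the base fit captures the slow trend, while the sequence model only has to learn the smaller, structured residual, which is typically an easier target.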
- Conference Article
1
- 10.1109/cac51589.2020.9326875
- Nov 6, 2020
Line loss rate is an important economic indicator for evaluating the operation of power grid enterprises. To improve the prediction accuracy of line loss in the station area, a variable weighted hybrid model (VWHM) is proposed that combines complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), wavelet packet threshold denoising (WPT), a genetic algorithm (GA), a generalized regression neural network (GRNN), and a gated recurrent unit (GRU). First, CEEMDAN decomposes the time-series electrical characteristic parameters into multiple subsequences; the amplitude variance of each IMF component is calculated through Fourier analysis, and WPT denoises the noisy high-frequency signals to extract representative variables as predictive inputs. Finally, the global optimization ability of GA is used to solve sample-moving adaptive variable weights for the combined prediction model composed of GRNN and GRU, which are weighted to obtain the final prediction result. The simulation results show that the proposed variable-weight combination model has higher prediction accuracy than the single models and the traditional VWHM, and is superior in line loss prediction.
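The final variable-weighted combination of the GRNN and GRU outputs is a convex blend; in the paper the weight is GA-optimized per sample window, while here it is fixed for illustration:

```python
def weighted_combine(preds_a, preds_b, w):
    """Blend two model outputs: w * model_a + (1 - w) * model_b per sample.
    In the paper, w is a sample-moving adaptive weight found by GA."""
    return [w * a + (1 - w) * b for a, b in zip(preds_a, preds_b)]

# Hypothetical GRNN and GRU predictions for two samples, blended 25/75.
combined = weighted_combine([2.0, 4.0], [4.0, 8.0], w=0.25)
```

A GA would search over w (or a sequence of per-window weights) to minimize the combined prediction error on historical data, which is the "variable weight" part of the VWHM.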
- Research Article
2
- 10.20517/ir.2024.20
- Oct 18, 2024
- Intelligence & Robotics
Analyzing the evolution trend of rail corrugation using signal processing and deep learning is critical for railway safety, as current traditional methods struggle to capture the complex evolution of corrugation. This study addresses the challenge of accurately capturing this trend, which otherwise relies heavily on expert judgment, by proposing an intelligent prediction method based on self-attention (SA), a bidirectional temporal convolutional network (TCN), and a bidirectional gated recurrent unit (GRU). First, multidomain feature extraction and adaptive feature screening were used to obtain the optimal feature set. These features were then combined with principal component analysis (PCA) and the Mahalanobis distance (MD) method to construct a comprehensive health indicator (CHI) that reflects the evolution of rail corrugation. A bidirectional fusion model architecture was employed to capture the temporal correlations between forward and backward information during corrugation evolution, with SA embedded in the model to enhance the focus on key information. The outcome was a rail corrugation trend prediction network that combined a bidirectional TCN, bidirectional GRU, and SA. Subsequently, a multi-strategy improved crested porcupine optimizer (CPO) algorithm was constructed to automatically obtain the optimal network hyperparameters. The proposed method was validated with on-site rail corrugation data, demonstrating superior predictive performance compared to other advanced methods. In summary, the proposed method can accurately predict the evolution trend of rail corrugation, offering a valuable tool for on-site railway maintenance.
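The Mahalanobis-distance component of the comprehensive health indicator measures how far a feature vector sits from a healthy baseline distribution, accounting for correlations between features. A minimal NumPy sketch (the baseline data here are synthetic, not rail measurements):

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance of sample x from the distribution of `data`
    (rows = baseline samples, columns = features)."""
    mu = data.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ inv_cov @ d))

# Synthetic 3-feature baseline cloud standing in for a healthy rail segment.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(200, 3))
d_near = mahalanobis(baseline.mean(axis=0), baseline)   # at the center
d_far = mahalanobis(baseline.mean(axis=0) + 5, baseline)  # far outlier
```

As corrugation develops, the extracted features drift away from the baseline and the distance grows, which is what makes it usable as a degradation indicator inside the CHI.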
- Research Article
- 10.11591/ijict.v14i1.pp50-58
- Apr 1, 2025
- International Journal of Informatics and Communication Technology (IJ-ICT)
Ponzi schemes deceive investors with promises of high returns, relying on funds from new investors to pay earlier ones, creating a misleading appearance of profitability. These schemes are inherently unsustainable, collapsing when new investments wane, leading to significant financial losses. Many researchers have focused on detecting such schemes, but challenges remain due to their evolving nature. This study proposes a novel hybrid machine-learning approach to enhance Ponzi scheme detection. Initially, we train an XGBoost classifier and extract its features. Meanwhile, we tokenize opcode sequences, train a gated recurrent unit (GRU) model on these sequences, and extract features from the GRU. By concatenating the features from the XGBoost classifier and the GRU, we train a final XGBoost model on this combined feature set. Our methodology, leveraging advanced feature engineering and hybrid modeling, achieves a detection accuracy of 96.57%. This approach demonstrates the efficacy of combining XGBoost and GRU models, along with sophisticated feature engineering, in identifying fraudulent activities in Ethereum smart contracts. The results highlight the potential of this hybrid model to offer more robust and accurate Ponzi scheme detection, addressing the limitations of previous methods.