Articles published on short-term memory
52,177 search results
- Research Article
- 10.55670/fpll.fuen.4.4.3
- Nov 15, 2025
- Future Energy
- Mahmood Abdoos + 4 more
The forecasting of oil production, demand, and prices holds critical significance for global economic stability and growth. Oil plays a crucial role in determining economic performance, making reliable price estimations essential for shaping public policy and guiding investment decisions. In this study, advanced neural network models were employed to enhance the accuracy of oil market forecasts, with a particular focus on their economic implications. Using Python-based implementations of Long Short-Term Memory (LSTM), Radial Basis Function (RBF), and multilayer perceptron (MLP) networks, the research compares the effectiveness of these approaches in crude oil price forecasting. The evaluation of model outputs using technical indicators revealed that the multilayer perceptron network yielded the best results. During training, it reached a mean squared error of 55.28, a root mean squared error of 7.43, and a mean absolute error of 5.55; while in testing, the values were 116.01, 12.96, and 10.73, respectively. Overall, the comparative analysis indicates that the multilayer perceptron consistently surpassed both the LSTM and RBF models in minimizing prediction errors. The economic relevance of these findings is underscored by the model's potential to enhance decision-making processes for investors, policymakers, and oil producers by offering more reliable forecasts. By improving accuracy by 20 to 30 percent compared to previous studies, this research provides valuable insights into optimizing resource allocation and mitigating the economic risks associated with oil price volatility.
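The error metrics quoted above (mean squared error, its root, and mean absolute error) have standard definitions and can be computed directly. A minimal Python sketch; the price series below is invented for illustration, not data from the study:

```python
import math

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: the square root of the MSE."""
    return math.sqrt(mse(y_true, y_pred))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical daily crude prices vs. model forecasts (illustrative only)
actual   = [70.0, 72.5, 71.0, 74.0]
forecast = [69.0, 73.0, 72.5, 73.0]
print(mse(actual, forecast))   # 1.125
print(rmse(actual, forecast))  # ~1.0607
print(mae(actual, forecast))   # 1.0
```

Note that for any series RMSE is simply the square root of MSE, which is consistent with the training figures quoted above (7.43 ≈ √55.28).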
- Research Article
- 10.52088/ijesty.v5i4.1679
- Nov 14, 2025
- International Journal of Engineering, Science and Information Technology
- Qi Jiang
Predictive maintenance enhances the reliability and efficiency of wind turbines, which are among the most widely deployed renewable energy systems worldwide. This research develops a combined Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) framework to refine fault detection and maintenance tactics using Supervisory Control and Data Acquisition (SCADA) measurements. Through its spatial pattern extraction ability, the CNN operates on multivariate sensor data, while the LSTM maintains temporal dependencies to recognise complex time-dependent degradation patterns. The proposed hybrid CNN–LSTM model achieved outstanding predictive maintenance performance for wind turbines, with an accuracy of 96.5%, precision of 96%, and recall of 95.5%. It outperformed CNN (accuracy: 91%), LSTM (89.5%), and Random Forest (83.5%) in all key metrics. The model also achieved the highest F1-score (96%) and AUC (0.96), demonstrating its reliability in real-time fault detection. The methodology was verified on real SCADA data from two wind farm sites spanning two years, where it detected abnormal operations at an early stage. By reducing unexpected equipment failures and minimising downtime, the solution enables secure and cost-efficient wind energy operations.
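As background for how the LSTM half of such a hybrid "maintains temporal dependencies", here is a single scalar LSTM cell step in plain Python. The gating structure is the standard one; the weights and the input sequence are illustrative placeholders, not anything from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step. `w` maps gate name -> (w_x, w_h, bias);
    the weights here are illustrative, not trained values."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    c = f * c_prev + i * g   # cell state carries long-term memory forward
    h = o * math.tanh(c)     # hidden state exposes short-term memory
    return h, c

w = {k: (0.5, 0.1, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [0.2, 0.4, 0.6]:    # e.g. a short normalised sensor sequence
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

The additive update of `c` (rather than repeated multiplication) is what lets gradients survive over long sequences, which is the property these SCADA degradation models rely on.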
- Research Article
- 10.1080/10589759.2025.2586799
- Nov 13, 2025
- Nondestructive Testing and Evaluation
- Trushna Jena + 2 more
This study presents a novel, cost-effective approach for real-time prediction of the mechanical performance of concrete using Electro-Mechanical Impedance (EMI) profiles obtained from embedded piezo sensors (EPS). The experimental process involved embedding a smart clinker, comprising a lead zirconate titanate (PZT) sensor encapsulated in epoxy and mortar, into concrete specimens to record EMI signatures during hydration (30–500 kHz) under controlled curing conditions. This enabled continuous monitoring of stiffness evolution and strength gain. COMSOL Multiphysics simulations were employed to generate an extensive EMI dataset for training various machine learning (ML) and deep learning (DL) models. Equivalent Structural Parameter (ESP) analysis revealed a strong correlation between non-destructively derived stiffness and the percentage increase in compressive strength. Among ML models, cubic support vector machine (SVM) and medium Gaussian SVM yielded reliable strength predictions. Additionally, DL architectures, including one-dimensional convolutional neural networks (1D CNN), 1D CNN combined with Long Short-Term Memory (LSTM), and 1D CNN-Bidirectional LSTM, were further optimized using Orthogonal Matching Pursuit (OMP) for feature selection, significantly enhancing performance. The OMP-based Bi-LSTM achieved an R² of 0.9973, outperforming the other models. These results confirm the efficacy of EMI-based DL frameworks for highly accurate, non-destructive estimation of concrete strength, advancing practical structural health monitoring.
- Research Article
- 10.2174/0126662558433038251030110915
- Nov 13, 2025
- Recent Advances in Computer Science and Communications
- Vivekananda Mukherjee + 2 more
Introduction: Millimetre-wave (mm-Wave) communication, spanning 30-300 GHz, suffers from strong atmospheric attenuation due to water vapour and oxygen absorption. These effects are highly variable, influenced by temperature and humidity, which determine the location of atmospheric "window frequencies" (f_windows), regions of reduced attenuation essential for high-capacity wireless links. Methodology: This study examines the seasonal and geographical variability of these windows across 14 Pacific tropical countries during July–August and January–February, using a refined Millimetre Wave Propagation Model (MPM) up to 200 GHz. Three deep learning models, namely Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN), were tested to predict window frequencies at 30 GHz, 94 GHz, and 140 GHz from temperature and water vapour density data. Results: The RNN consistently achieved the highest accuracy (R² > 0.98, RMSE < 0.015 dB/km), outperforming CNN and LSTM. While CNN proved competitive in cooler conditions, LSTM exhibited greater sensitivity to seasonal shifts. The findings highlight that atmospheric windows are dynamic rather than static, with significant seasonal and spatial variations. Discussion: The study confirms that window frequencies are not fixed but dynamically modulated by climatic parameters, especially in humid tropical zones. The RNN model's superior performance is attributed to its ability to capture temporal dependencies in meteorological inputs. Larger training sets (80–20 split) enhanced generalization and reduced prediction errors across all models. Conclusion: The proposed RNN-based framework offers a robust, climate-adaptive solution for real-time spectrum allocation, supporting energy efficiency, resilience, and sustainability in future mm-Wave communication networks.
- Research Article
- 10.1088/2631-8695/ae1f65
- Nov 13, 2025
- Engineering Research Express
- Ahmed Shany Khusheef
Fused deposition modeling (FDM) is a widely used additive manufacturing (AM) process valued for its versatility in rapid prototyping and its 2.5D part-generation capabilities. However, the quality and mechanical properties of FDM-printed parts are highly sensitive to variations in process parameters, such as material properties, temperature, and printing speed, leading to challenges in maintaining consistent part performance. This study addresses real-time sensor-based anomaly detection in FDM by using time-series data collected from a printing device. We compared two approaches: traditional handcrafted feature extraction methods and deep learning (DL) models, which automatically extract features by transforming signals into images suitable for machine vision algorithms. Specifically, we designed and evaluated bespoke hybrid models that combine convolutional neural networks (CNNs) and long short-term memory (LSTM) units (CNN-LSTM) to monitor the FDM process by utilizing acoustic and vibration signals for anomaly detection. Experimental results show that while traditional machine learning methods, particularly support vector machines (SVMs), achieved slightly higher raw classification metrics, statistical analysis confirmed that these differences were not significant. Moreover, CNN-LSTM models demonstrate notable advantages in terms of computational efficiency, robustness to noise, and future scalability, making them strong candidates for real-time and industrial monitoring applications.
- Research Article
- 10.1080/03081079.2025.2587707
- Nov 12, 2025
- International Journal of General Systems
- Zixin Zhao + 3 more
Short-term power load data exhibits significant periodicity, often reflecting daily or seasonal patterns. However, existing research has not fully exploited this periodicity. Instead, researchers tend to use complex models to train large volumes of short-interval data, leading to redundancy and inefficiency. To address these issues, we propose a concise yet efficient dual-encoder long short-term memory model (DELSTM) that incorporates both historical load data and environmental information. Specifically, we integrate sequential hourly features into daily features to reduce the training sample size and the number of input series, with the daily environmental feature shared among all hourly samples. We employ two LSTM encoders to extract both load and environmental features, and then fuse these features using an attention mechanism to forecast the day-ahead load. Experimental results on a real dataset demonstrate that our method achieves higher precision and efficiency, significantly reducing training time and mitigating error accumulation.
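The fusion step described above, combining the outputs of two LSTM encoders with an attention mechanism, amounts to a softmax-weighted sum of the encoder features. A minimal sketch; the feature vectors and scores are toy values, and in the actual model the attention scores would themselves be learned:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(features, scores):
    """Weight each encoder's output by softmax(scores) and sum element-wise."""
    weights = softmax(scores)
    dim = len(features[0])
    return [sum(w * f[j] for w, f in zip(weights, features)) for j in range(dim)]

load_feat = [0.8, 0.1, 0.3]   # toy output of the load-history encoder
env_feat  = [0.2, 0.9, 0.4]   # toy output of the environment encoder
fused = attention_fuse([load_feat, env_feat], scores=[1.2, 0.3])
print(fused)
```

Because the weights sum to one, each fused component is a convex combination of the corresponding encoder components, so the fusion never leaves the range spanned by the inputs.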
- Research Article
- 10.12732/ijam.v38i10s.974
- Nov 11, 2025
- International Journal of Applied Mathematics
- Sandeep Gupta
The accelerating impacts of climate change on Indian agriculture demand adaptive, data-driven methods for sustainable crop management. This study develops an AI-based spatio-temporal predictive framework to estimate climate-resilient crop yields by integrating multi-source datasets: meteorological parameters, soil moisture, satellite-derived vegetation indices (NDVI, EVI, LST), and socio-agronomic inputs. Using machine learning and deep learning models such as Random Forest, Gradient Boosting, and Long Short-Term Memory (LSTM) networks, the system analyzes historical data across key agro-climatic zones of India to forecast yield fluctuations under varying climatic conditions. The results indicate that LSTM models outperform traditional regression-based methods, achieving an accuracy improvement of over 18% in yield prediction and effectively capturing non-linear temporal dependencies. Spatial pattern analysis reveals high climate vulnerability in rain-fed regions of Maharashtra and central India, while irrigated northern plains exhibit relative yield stability. The integration of AI and remote sensing provides a scalable and near-real-time decision support tool for policy formulation, crop insurance planning, and climate adaptation strategies. This approach underscores the transformative potential of artificial intelligence in fostering resilient agricultural systems and achieving food security amid climatic uncertainties.
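For reference, the vegetation indices named above have standard per-pixel definitions: NDVI from near-infrared and red reflectance, and EVI with the usual MODIS coefficients. A small sketch (the reflectance values are invented):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel: (NIR-Red)/(NIR+Red)."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Hypothetical surface reflectances for a vegetated pixel
print(ndvi(0.45, 0.10))        # ~0.636, i.e. dense vegetation
print(evi(0.45, 0.10, 0.05))
```

NDVI is bounded in [-1, 1] by construction; values near 1 indicate dense canopy, near 0 bare soil, and negative values water or cloud.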
- Research Article
- 10.3390/pr13113646
- Nov 11, 2025
- Processes
- Xin Yang + 4 more
Reliable load forecasting is crucial for ensuring optimal dispatch, grid security, and cost efficiency. To address limitations in prediction accuracy and generalization, this paper proposes a hybrid model, GRU-MHSAM-ResNet, which integrates a gated recurrent unit (GRU), multi-head self-attention (MHSAM), and a residual network (ResNet) block. Firstly, the GRU is employed as a deep temporal encoder to extract features from historical load data, offering a simpler structure than long short-term memory (LSTM). Then, the MHSAM is used to generate adaptive representations by weighting input features, thereby strengthening the key features. Finally, the features are processed by fully connected layers, while a ResNet block is added to mitigate gradient vanishing and explosion, thus improving prediction accuracy. Experimental results on actual load datasets from systems in China, Australia, and Malaysia demonstrate that the proposed GRU-MHSAM-ResNet model exhibits superior predictive accuracy to the compared models, including the CBR model and the LSTM-ResNet model. On the three datasets, the proposed model achieved a mean absolute percentage error (MAPE) of 1.65% (China), 5.52% (Australia), and 1.57% (Malaysia), representing a significant improvement over the other models. Furthermore, in five repeated experiments on the Malaysian dataset, it exhibited lower error fluctuation and greater result stability compared to the benchmark LSTM-ResNet model. Therefore, the proposed model provides a new forecasting method for power system dispatch, exhibiting high accuracy and generalization ability.
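The claim that a GRU has "a simpler structure than LSTM" comes down to gate count: two gates (update and reset) and no separate cell state. A scalar GRU step in plain Python; the weights and the input sequence are illustrative, not trained values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, w):
    """One scalar GRU step: two gates instead of the LSTM's three,
    and a single hidden state instead of hidden + cell states."""
    z = sigmoid(w["z"][0] * x + w["z"][1] * h_prev + w["z"][2])  # update gate
    r = sigmoid(w["r"][0] * x + w["r"][1] * h_prev + w["r"][2])  # reset gate
    h_tilde = math.tanh(w["h"][0] * x + w["h"][1] * (r * h_prev) + w["h"][2])
    return (1.0 - z) * h_prev + z * h_tilde  # interpolate old vs. new state

w = {k: (0.5, 0.1, 0.0) for k in ("z", "r", "h")}
h = 0.0
for x in [1.2, 0.9, 1.1]:    # toy normalised hourly load values
    h = gru_step(x, h, w)
print(h)
```

With fewer parameters per unit, a GRU encoder trains faster than an LSTM of the same width, which is the efficiency argument the abstract makes.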
- Research Article
- 10.55220/2576-6821.v9.716
- Nov 11, 2025
- Journal of Banking and Financial Dynamics
- Ying Wang + 2 more
Temporal pattern recognition has become increasingly critical for predictive analytics in various domains, particularly in demand forecasting where accurate predictions directly impact business operations and profitability. Neural network (NN) architectures have demonstrated remarkable capabilities in capturing complex temporal dependencies within sequential data, outperforming traditional statistical methods in numerous applications. This review examines the evolution and application of neural network approaches specifically designed for temporal pattern recognition, with emphasis on their utilization in demand forecasting and predictive analytics. The paper provides a comprehensive analysis of recurrent neural networks (RNNs), long short-term memory (LSTM) networks, gated recurrent units (GRUs), convolutional neural networks (CNNs), and transformer-based architectures in the context of time series forecasting. Furthermore, this review explores the integration of attention mechanisms, the emergence of spatiotemporal graph neural networks (STGNNs), and hybrid model architectures that combine multiple approaches to enhance forecasting accuracy. The evaluation metrics commonly employed to assess model performance, including mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE), are discussed alongside benchmark datasets utilized in the field. Through systematic examination of recent literature spanning from 2019 to 2025, this review identifies key architectural innovations, practical applications in retail and supply chain management, and emerging trends that define the current state of temporal pattern recognition. The findings reveal that while transformer-based models have gained significant attention for long-sequence forecasting, simpler linear architectures and hybrid approaches often demonstrate competitive or superior performance depending on dataset characteristics and application requirements. 
This comprehensive review serves as a foundation for researchers and practitioners seeking to understand the landscape of neural network methodologies for temporal pattern recognition and their practical deployment in demand forecasting systems.
- Research Article
- 10.1115/1.4070176
- Nov 11, 2025
- Journal of Manufacturing Science and Engineering
- Xuetao Wang + 3 more
Discontinuous profile grinding of gears is a critical manufacturing process for high-precision gears. However, a significant phenomenon observed in its applications is the propagation of thermal error from the dressing spindle to the gear tooth surface via wheel dressing and subsequent gear grinding, severely affecting gear surface accuracy. The mechanism of this phenomenon remains unclear, and effectively mitigating the detrimental impact of dressing spindle thermal error remains an unsolved issue. To reveal this mechanism, this study analyzes the dressing spindle thermal error to quantify its impact on gear profile deviation, helix deviation, and pitch deviation. To address this issue, a novel thermal error prediction model based on POA-LSTM (pelican optimization algorithm and long short-term memory) is proposed and implemented for active compensation. Comparative thermal error compensation experiments were conducted, accurately measuring the dressing spindle thermal error, the accuracy of grinding wheel profile dressing, and the final gear surface accuracy before and after compensation. The experimental results validated the proposed mechanism analysis of the dressing spindle thermal error's effect on gear surface error and confirmed the effectiveness of the compensation, which reduced the dressing spindle thermal error to within 0.5 μm and the cumulative pitch error (Fp) of the workpiece gear by at least 45.4%.
- Research Article
- 10.3389/frai.2025.1702924
- Nov 11, 2025
- Frontiers in Artificial Intelligence
- Federico Guede-Fernández + 6 more
Introduction: The Counter-Strike 2 skin market has developed into a multi-billion-dollar digital asset ecosystem, characterized by high volatility, low liquidity, and pricing inefficiencies that differ substantially from traditional financial markets. Despite the growing economic relevance of virtual items, no previous study has systematically examined the use of artificial intelligence for skin trading. Methods: This work designs and evaluates an automated trading system that applies deep learning models, specifically Long Short-Term Memory networks and Neural Hierarchical Interpolation for Time Series, to forecast skin prices and guide trading decisions. A dataset of 12,000 unique skins from the Steam Market, covering the period from May 2024 to April 2025, was collected using the CSGOskins.gg application programming interface. To reflect real market conditions, the trading strategy incorporated the Steam Market restrictions of a seven-day minimum holding period and a ten percent transaction cost, and was benchmarked against a traditional buy-and-hold strategy. Backtesting was performed over multiple time horizons of two, three, and six months. Portfolio selection was based on risk and return criteria, including a Sharpe ratio greater than one, a Sortino ratio greater than two, and a return on investment above five percent. Results: Artificial intelligence consistently outperforms buy-and-hold, particularly in smaller, more concentrated portfolios and over longer time horizons. For example, in six-month simulations, artificial intelligence portfolios achieved returns approaching 20%, compared to 5% to 10% for buy-and-hold, with excess returns as high as 75% in small portfolios. Larger portfolios reduced absolute returns but improved risk-adjusted performance, confirming that diversification enhances stability while diluting raw profitability.
Analysis of portfolio composition by rarity further revealed that artificial intelligence favors moderately rare and liquid skins such as Mil-Spec, resembling mid-cap equity investment strategies, while buy-and-hold accumulates rarer skins, analogous to small-cap holdings that rely on scarcity premiums. Discussion: These findings highlight that even in virtual goods markets, the trade-offs between return, risk, and diversification reflect established principles of modern portfolio theory. The study demonstrates both the feasibility and the potential of artificial intelligence-based trading systems in the Counter-Strike 2 skin economy, contributing methodological advances and practical insights for participants in this emerging digital asset market.
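The portfolio screen described in the abstract (Sharpe ratio > 1, Sortino ratio > 2, ROI > 5%) can be sketched as follows. The return series is invented, annualisation is omitted, and Sortino is computed with one common downside-deviation formulation, not necessarily the paper's exact definition:

```python
import math

def sharpe(returns, rf=0.0):
    """Mean excess return over its standard deviation (annualisation omitted)."""
    ex = [r - rf for r in returns]
    mu = sum(ex) / len(ex)
    var = sum((e - mu) ** 2 for e in ex) / len(ex)
    return mu / math.sqrt(var) if var > 0 else float("inf")

def sortino(returns, rf=0.0):
    """Like Sharpe, but penalises only downside deviation below the target rf."""
    ex = [r - rf for r in returns]
    mu = sum(ex) / len(ex)
    dd = math.sqrt(sum(min(e, 0.0) ** 2 for e in ex) / len(ex))
    return mu / dd if dd > 0 else float("inf")

def passes_screen(returns, roi):
    """The screen from the abstract: Sharpe > 1, Sortino > 2, ROI > 5%."""
    return sharpe(returns) > 1.0 and sortino(returns) > 2.0 and roi > 0.05

# Toy weekly returns for one hypothetical skin portfolio
rets = [0.04, 0.02, -0.01, 0.05, 0.03]
print(sharpe(rets), sortino(rets), passes_screen(rets, roi=0.13))
```

Because Sortino divides by downside deviation only, a portfolio with mostly positive returns (as above) scores far higher on Sortino than on Sharpe, which is why the abstract's thresholds differ (1 vs. 2).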
- Research Article
- 10.1007/s11060-025-05311-7
- Nov 10, 2025
- Journal of neuro-oncology
- Vikrant Chole + 4 more
Brain tumors are life-threatening neurological conditions that develop due to uncontrolled cell growth in the brain. The survival rate for this illness is gradually decreasing due to the lack of early and accurate diagnosis of brain tumors. Multiple methods have been developed for brain tumor classification, but they suffer from several limitations, such as complications in segmentation, inconsistency in tumor features, data imbalance, low performance, and high error. To overcome these limitations, this research proposed a Producer Scrounger Foraging Optimized Gated Recurrent Unit-Deep Bidirectional Long Short-Term Memory (PSF-GRBM) model for effective brain tumor classification. Additionally, the integration of PSF optimization in the proposed model reduces the high-complexity problems in various dimensions and achieves effective results. The experimental analysis reveals that the proposed method attained an accuracy of 95.74%, a sensitivity of 95.67%, and a specificity of 95.81% on the MSD dataset, highlighting the improved performance of the PSF-GRBM model in brain tumor classification and grading. The PSF-GRBM model achieves effective classification through an incentive learning mechanism, categorizing brain tumors into four grades: grade 0 for a normal brain, grade 1 for non-enhancing and necrotic tumor core, grade 2 for peritumoral edema, and grade 3 for Gadolinium (GD) enhancing tumors.
- Research Article
- 10.3390/w17223214
- Nov 10, 2025
- Water
- Jiyeon Park + 3 more
Reliable inflow forecasting represents a challenging and representative problem in long-horizon time series forecasting. Although long-term time series forecasting (LTSF) algorithms have shown strong performance in other domains, their applicability to hydrological inflow prediction has not yet been systematically assessed. Therefore, this study examined two LTSF linear models for inflow forecasting: NLinear and DLinear. The LTSF models were trained with a 24 h input window and evaluated for 24 h lead times at eight major dams in South Korea. A Long Short-Term Memory (LSTM) network and eXtreme Gradient Boosting (XGBoost) were employed as conventional AI baselines. LSTM consistently achieved the highest coefficient of determination (R²) and the lowest normalized root mean square error, DLinear minimized normalized mean square error, and NLinear delivered superior hydrological consistency as measured by Kling–Gupta efficiency. XGBoost showed comparatively larger variability across sites. Spatial heterogeneity was evident: sites fell into high-performing, transition, and vulnerable groups. Peak-flow analysis revealed amplitude attenuation and phase lag at longer horizons.
- Research Article
- 10.2174/0126662558393789251013123155
- Nov 10, 2025
- Recent Advances in Computer Science and Communications
- Ritu Raj Kumar + 5 more
Diseases and insect pests significantly threaten global crop yield, causing an estimated 20% to 40% annual loss, with climate change exacerbating pest damage and increasing agricultural losses by 10% to 25% per degree of global temperature rise. Traditional disease detection methods, reliant on direct visual diagnosis and chemical treatments, are labour-intensive and inefficient for large farms, necessitating the adoption of automated crop monitoring and forecasting systems. Environmental stressors, including biotic and abiotic factors, contribute to plant diseases, ranging from physiological defects to plant mortality, while pesticide use leads to genetic resistance and auxiliary insect mortality. Recent advancements in data processing, sensor technologies, and autonomous monitoring systems have paved the way for more efficient disease detection, with deep learning and computer vision proving particularly effective in predicting plant illnesses. Convolutional Neural Networks (CNNs) facilitate precise disease classification, while Long Short-Term Memory (LSTM) networks enhance pest prediction, enabling commercial smart farming technologies to provide rapid and accurate crop disease diagnostics. Research leveraging deep learning models such as ResNet, DenseNet, Inception, GoogleNet, MobileNet, and LSTM has improved disease recognition, with models like DenseNet combined with Xception enhancing detection accuracy through global mean pooling. AI-driven approaches utilising air temperature, humidity, dew point, and CO2 concentration data have demonstrated promise in early disease prediction and prevention, thereby improving agricultural sustainability. Despite technological advancements, challenges such as data quality, computational complexity, and interdisciplinary collaboration persist.
This study explores the implementation of machine learning in agriculture, assessing its benefits, limitations, and future potential, with a focus on deep learning as a transformative tool in crop disease identification, ultimately advancing food security and sustainable farming practices.
- Research Article
- 10.5713/ab.250595
- Nov 10, 2025
- Animal bioscience
- Eunjeong Jeon + 1 more
Accurate early prediction of final body weight (BW) is essential for optimized feeding strategies and slaughter planning in beef cattle production. This study evaluated the performance of three machine learning models (k-nearest neighbors, Random Forest, eXtreme Gradient Boosting) and one deep learning model [long short-term memory (LSTM)] to forecast the final BW of Hanwoo steers at various time points prior to slaughter. A total of 196 Hanwoo steers (7 to 31 months of age) from a commercial farm were utilized. Input data included monthly BW and feed nutrient intake (crude protein, ether extract, neutral detergent fiber, total digestible nutrients) across three growth stages. Six input configurations (I1-I6) were designed to predict the final BW at 17, 13, 9, 6, 3, and 1 month(s) before slaughter, with a target age of 31 months. The machine and deep learning models were assessed by five-fold cross-validation (training set) and a test set and evaluated via the coefficient of determination (R²) and root mean squared error (RMSE). Among the tested models, the LSTM achieved the highest prediction accuracy across all the configurations. The performance of the LSTM improved as the prediction point approached the target slaughter age: I1 (R² = 0.60, RMSE = 52.80), I2 (0.72, 45.40), I3 (0.76, 40.92), I4 (0.83, 35.84), I5 (0.90, 33.12), and I6 (0.97, 22.62). These results demonstrated that the LSTM effectively captured temporal dependencies in sequential data, enabling more accurate BW forecasting under commercial conditions. While I6 achieved the highest prediction accuracy, the 3-6 month predictions (I4 and I5) demonstrated reasonably high accuracy, which could provide a practical timeframe for farm-level management and planning. This approach could be utilized in evidence-based decision-making in Hanwoo production by providing reliable predictions well ahead of slaughter.
- Research Article
- 10.1016/j.watres.2025.124931
- Nov 9, 2025
- Water research
- Jeimy L Martinez De La Hoz + 4 more
Interpretable forecasting of dissolved oxygen leveraging foundation model for proactive aeration in rural wastewater treatment systems.
- Research Article
- 10.65521/ijacect.v14i1.818
- Nov 9, 2025
- International Journal on Advanced Computer Engineering and Communication Technology
- Prof G G Sayyad + 4 more
The proliferation of automated software agents, or "bots," presents a significant and evolving threat to web security, data integrity, and user trust. Traditional defense mechanisms, most notably CAPTCHAs, have been systematically defeated by advancements in artificial intelligence, rendering them increasingly ineffective and detrimental to user experience. In response, the field of cybersecurity has shifted its focus towards behavioral biometrics, a paradigm that seeks to distinguish humans from bots based on their intrinsic interaction patterns. This survey provides a comprehensive review of the literature on bot detection with a specific focus on mouse dynamics: the analysis of a user's cursor movement patterns. We trace the evolution of this field from foundational concepts and statistical feature engineering to the adoption of sophisticated deep learning models like Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). Furthermore, we examine the critical role of public datasets in advancing research, explore the challenges posed by advanced threats such as session-replay bots and adversarial attacks, and identify key research gaps. This review synthesizes the current state of the art and establishes a clear justification for the development of next-generation, robust, and frictionless bot detection systems.
- Research Article
- 10.51584/ijrias.2025.1010000080
- Nov 8, 2025
- International Journal of Research and Innovation in Applied Science
- Adrales, Lorelyn F + 4 more
Language is a fundamental aspect of human identity, deeply connected to geographical origins, cultural heritage, and social belonging. However, many indigenous languages across the world are gradually declining due to modernization, migration, and the growing influence of technology and global languages. The loss of these languages often leads to the disappearance of cultural values, oral traditions, and historical knowledge. This study explores the integration of machine learning techniques such as Long Short-Term Memory (LSTM), Yoon Kim’s Convolutional Neural Network model, and TextConvoNet in developing a mobile text-to-text identification and translation application for Blaan dialects spoken in General Santos City, Polomolok, and Sarangani. The goal of the application is to aid in the preservation and revitalization of the Blaan language while providing an accessible platform for both native speakers and learners to understand, translate, and communicate in their local dialects. To evaluate the usability and effectiveness of the application, User Acceptance Testing (UAT) was conducted among selected users. Data were collected through structured interviews, document analysis, and standardized evaluation tools to ensure comprehensive assessment and validation. Experimental results showed that the TextConvoNet model achieved the highest accuracy rate of 74.00 percent, surpassing the performance of both LSTM and CNN-based models. This demonstrates the model’s efficiency in identifying and classifying Blaan dialects, highlighting its potential in the field of Natural Language Processing (NLP). Future research should focus on expanding the dataset by collecting transcriptions from diverse age groups, locations, and communication contexts to improve model generalization and accuracy. Further refinement of the model’s architecture and parameter tuning is also recommended to enhance dialect classification and translation capabilities. 
Moreover, integrating speech-to-text and text-to-speech functionalities could facilitate real-time translation, pronunciation learning, and accessibility for non-literate speakers, ensuring the continued preservation and appreciation of indigenous languages.
- Research Article
- 10.1088/1361-6560/ae1803
- Nov 7, 2025
- Physics in Medicine & Biology
- Lina Mekki + 2 more
Objective: To develop and evaluate a deep reinforcement learning (RL) framework for rapid and automatic machine parameter optimization of volumetric modulated arc therapy (VMAT) treatment plans for localized prostate cancer. Approach: A multi-task policy network combining convolution and long short-term memory was trained to sequentially predict the set of actions on the dose rate and multi-leaf collimator positions over the range of two arcs. The network uses as input the cumulative dose grid at the current gantry angle, contours of the planning target volume (PTV) and organs at risk, and the set of machine parameters at all preceding gantry angles. The method was evaluated on a set of 15 localized prostate cancer patients for a prescription dose of 60 Gy in 20 fractions. For each case, the final state dose distribution was compared against clinical plans. For seamless integration with the clinical workflow, the proposed model was integrated into a clinical treatment planning system (TPS), enabling dosimetric review and final plan adjustments. Main results: The RL framework produced deliverable dual-arc VMAT plans in an average of 20.7 ± 5.0 s over the test set. Dosimetric comparison to clinical plans showed no statistically significant differences for the mean rectum dose or for the bladder V6160 Gy, indicating that the RL model was as efficient in sparing these structures as human planners. While the approach showed limitations in terms of PTV coverage and maximum body dose, our proposed integration with the TPS showed that the RL plans could be automatically refined to clinical quality in an additional 83.8 ± 7.2 s. Significance: The accuracy and fast run time of the approach show the potential of the framework to significantly streamline VMAT treatment planning and enable adaptive radiation therapy.
- Research Article
- 10.1088/2631-8695/ae1d16
- Nov 7, 2025
- Engineering Research Express
- Fanjun Su + 2 more
The complex corrosion evolution of pipelines presents a significant challenge for integrity management, as traditional physical models often fail in long-term prediction and purely data-driven methods struggle with limited, noisy data. To address this, this study proposes a novel hybrid physics-informed long short-term memory (PI-LSTM) network. The framework utilizes a physical model to capture the primary corrosion trend, while an LSTM is trained to learn the remaining nonlinear residuals. To ensure physical plausibility, the model embeds constraints derived from the ordinary differential equation (ODE) governing corrosion kinetics into the composite loss function.
The proposed PI-LSTM model was validated on field monitoring data and compared against multiple benchmarks. The experimental results demonstrate its superior performance, achieving a mean Root Mean Square Error (RMSE) of 2.46 ± 0.28. This result is not only more accurate than the traditional Velázquez model (RMSE: 2.74) but also significantly surpasses other data-driven and time-series models. An ablation study further confirmed that the physical constraint was crucial, improving both accuracy and stability over the standard LSTM model (RMSE: 2.63 ± 0.51), with a 47% reduction in standard deviation highlighting its powerful regularization effect.
The superior performance across all evaluation metrics indicates that the proposed method has high prediction accuracy for the dataset under investigation. While further validation on diverse datasets is required to fully establish its generalizability, this study demonstrates that the hybrid, physics-informed framework offers a promising and robust new approach for pipeline corrosion research.
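The composite-loss idea, a data term plus a penalty on the residual of the governing corrosion ODE, can be sketched in plain Python. The power-law kinetics d(t) = k·t^n (so d'(t) = n·k·t^(n-1)), the finite-difference derivative, and the weight `lam` are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
def composite_loss(t, d_pred, d_obs, k, n, lam=0.1):
    """Data loss plus a penalty on the residual of an assumed power-law
    corrosion ODE d'(t) = n*k*t**(n-1); derivatives via finite differences.
    The ODE form and lam are illustrative, not the paper's exact choices."""
    # Data term: mean squared error against observed corrosion depths
    data = sum((p - o) ** 2 for p, o in zip(d_pred, d_obs)) / len(d_obs)
    # Physics term: how far the prediction's slope deviates from the kinetics
    phys = 0.0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        d_dot = (d_pred[i] - d_pred[i - 1]) / dt   # finite-difference slope
        rhs = n * k * t[i] ** (n - 1)              # assumed kinetics at t[i]
        phys += (d_dot - rhs) ** 2
    phys /= (len(t) - 1)
    return data + lam * phys

t = [1.0, 2.0, 3.0, 4.0]            # toy inspection times (years)
d_obs  = [0.50, 0.71, 0.87, 1.00]   # toy measured pit depths (mm)
d_pred = [0.52, 0.70, 0.88, 0.99]   # toy network predictions
print(composite_loss(t, d_pred, d_obs, k=0.5, n=0.5))
```

The physics term acts as a regularizer: a prediction that fits the data but violates the assumed kinetics is still penalized, which is the mechanism the ablation study above credits for the improved stability.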