Published in last 50 years
Articles published on LSTM
- New
- Research Article
- 10.51584/ijrias.2025.1010000080
- Nov 8, 2025
- International Journal of Research and Innovation in Applied Science
- Adrales, Lorelyn F + 4 more
Language is a fundamental aspect of human identity, deeply connected to geographical origins, cultural heritage, and social belonging. However, many indigenous languages across the world are gradually declining due to modernization, migration, and the growing influence of technology and global languages. The loss of these languages often leads to the disappearance of cultural values, oral traditions, and historical knowledge. This study explores the integration of machine learning techniques such as Long Short-Term Memory (LSTM), Yoon Kim’s Convolutional Neural Network model, and TextConvoNet in developing a mobile text-to-text identification and translation application for Blaan dialects spoken in General Santos City, Polomolok, and Sarangani. The goal of the application is to aid in the preservation and revitalization of the Blaan language while providing an accessible platform for both native speakers and learners to understand, translate, and communicate in their local dialects. To evaluate the usability and effectiveness of the application, User Acceptance Testing (UAT) was conducted among selected users. Data were collected through structured interviews, document analysis, and standardized evaluation tools to ensure comprehensive assessment and validation. Experimental results showed that the TextConvoNet model achieved the highest accuracy rate of 74.00 percent, surpassing the performance of both LSTM and CNN-based models. This demonstrates the model’s efficiency in identifying and classifying Blaan dialects, highlighting its potential in the field of Natural Language Processing (NLP). Future research should focus on expanding the dataset by collecting transcriptions from diverse age groups, locations, and communication contexts to improve model generalization and accuracy. Further refinement of the model’s architecture and parameter tuning is also recommended to enhance dialect classification and translation capabilities. 
Moreover, integrating speech-to-text and text-to-speech functionalities could facilitate real-time translation, pronunciation learning, and accessibility for non-literate speakers, ensuring the continued preservation and appreciation of indigenous languages.
- New
- Research Article
- 10.1038/s41598-025-23455-4
- Nov 7, 2025
- Scientific reports
- Shehroz S Khan + 1 more
Accurate prediction of air and surface temperature is essential for urban planning and climate resilience, especially in arid regions. This study evaluates the performance of multi-output regression models using high-frequency climate data collected every 5 min over four years in Kuwait. Thirty environmental variables (including humidity, solar radiation, dew point, and wind direction) were used to predict six air and surface temperature-related outcomes simultaneously. Ten models, including deep learning and traditional machine learning approaches, were benchmarked using a leave-one-year-out validation strategy. Results show that the contextual embeddings-based Transformer (FTTransformer) and Long Short-Term Memory (LSTM) achieved strong predictive performance with an R² of 0.998, a mean squared error of 0.13, and a mean absolute error of 0.24 when forecasting six temperature variables at 5-min resolution. These results significantly outperform traditional machine learning models and demonstrate the robustness of deep learning approaches for high-frequency climate prediction. While deep learning models outperformed conventional methods, LSTM's performance degraded on anomalous data from previous years, whereas FTTransformer maintained stable accuracy across years. Model interpretation using SHAP and permutation importance identified key predictors for this task, underlining the significance of diverse climate features.
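The leave-one-year-out validation strategy named in this abstract can be sketched as below. This is a minimal illustration with synthetic data; the variable names and year values are hypothetical, not taken from the paper.

```python
import numpy as np

def leave_one_year_out_splits(years):
    """Yield (held_out_year, train_idx, test_idx), holding out one year at a time."""
    years = np.asarray(years)
    for held_out in np.unique(years):
        test_idx = np.where(years == held_out)[0]
        train_idx = np.where(years != held_out)[0]
        yield held_out, train_idx, test_idx

# Synthetic example: 8 samples spread over 4 years
sample_years = [2019, 2019, 2020, 2020, 2021, 2021, 2022, 2022]
splits = list(leave_one_year_out_splits(sample_years))
```

Each fold trains on every year except one, so a model's stability across years (the degradation the authors report for LSTM) becomes directly measurable.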
- New
- Research Article
- 10.1038/s41598-025-23733-1
- Nov 7, 2025
- Scientific reports
- Yaly Mevorach + 7 more
Sperm whales (Physeter macrocephalus) navigate complex oceanic environments and social structures. In the waters off Dominica, female and juvenile whales form long-lasting social units and vocal clans, distinguished by unique click dialects known as codas. While prey availability is often seen as a driver of whale movements, we highlight the role of sociality in shaping spatial behavior. Using 20 years of photo-identification data, we examined the sequential presence of social units for predictable patterns linked to social structure. Applying long short-term memory (LSTM) neural networks to sequences of one to five days across 16 states including 14 units, mature males and unknown units, we achieved prediction accuracies over 60%, far exceeding random chance (0.00001526). We then compared unit-to-unit transition probabilities to their social association matrix using a Hemelrijk test, revealing strong alignment between movement and social bonds for some of the units. To support long-term monitoring, we developed an acoustic classification method based on inter-pulse intervals (IPIs) in echolocation clicks, serving as acoustic fingerprints linked to body size. Kernel Density Estimation (KDE) classified units with 78.26% accuracy. Our findings provide quantitative evidence that sperm whale movements are socially coordinated and predictable, offering new insights into the spatial and social dynamics of sperm whale societies and highlighting the role of social affiliation in shaping large-scale movement patterns.
- New
- Research Article
- 10.54097/61dqqx35
- Nov 6, 2025
- Highlights in Business, Economics and Management
- Yuhan Zhang + 1 more
Stock price prediction plays a critical role in investment decision-making and financial regulation. However, traditional time series models and early neural networks are limited either by restrictive assumptions or by their inability to effectively handle long sequences, resulting in suboptimal prediction performance. This paper proposes a hybrid predictive model that integrates multi-feature fusion, the attention mechanism, and Bayesian optimization into a Long Short-Term Memory (LSTM) framework to enhance prediction accuracy and stability. Using daily data from the S&P 500 Index from 2020 to 2022, the study employs LSTM to capture long-term temporal dependencies, introduces an attention mechanism to highlight key sequential features, and utilizes Bayesian optimization for adaptive hyperparameter tuning. Empirical results demonstrate that compared with conventional LSTM, attention-enhanced LSTM, and Bayesian-optimized LSTM models, the proposed Multi-Feature Bayesian Optimized Attention-LSTM achieves significantly lower Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE), reaching 25.51, 18.35, and 0.66%, respectively. Even during periods of extreme market volatility such as the Russia–Ukraine conflict and the U.S. Federal Reserve’s interest rate hikes in 2022, the MAPE remained below 0.70%. These findings validate the synergistic effect of multi-feature fusion, the attention mechanism, and Bayesian optimization, providing more reliable decision support for financial market participants.
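The attention mechanism this abstract layers onto LSTM can be sketched as a softmax-weighted pooling over hidden states. A minimal numpy illustration; the scoring vector here is a random stand-in for a learned parameter, not the paper's actual architecture.

```python
import numpy as np

def temporal_attention(hidden_states):
    """Score each time step, softmax the scores over time, and return
    the attention-weighted context vector (an additive-attention sketch)."""
    T, d = hidden_states.shape
    rng = np.random.default_rng(0)
    w = rng.normal(size=d)             # stand-in for a learned scoring vector
    scores = hidden_states @ w         # one scalar score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()           # softmax over time steps
    context = weights @ hidden_states  # weighted sum of hidden states
    return context, weights

H = np.random.default_rng(1).normal(size=(5, 8))  # 5 steps, 8-dim hidden states
context, weights = temporal_attention(H)
```

The weights sum to one, so the context vector emphasizes the "key sequential features" the abstract mentions rather than treating all time steps equally.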
- New
- Research Article
- 10.3389/frai.2025.1537432
- Nov 6, 2025
- Frontiers in Artificial Intelligence
- Md Julkar Naeen + 5 more
Classifying scattered Bengali text is the primary focus of this study, with an emphasis on explainability in Natural Language Processing (NLP) for low-resource languages. We employed supervised Machine Learning (ML) models as a baseline and compared their performance with Long Short-Term Memory (LSTM) networks from the deep learning domain. Subsequently, we implemented transformer models designed for sequential learning. To prepare the dataset, we collected recent Bengali news articles online and performed extensive feature engineering. Given the inherent noise in Bengali datasets, significant preprocessing was required. Among the models tested, XLM-RoBERTa Base achieved the highest accuracy of 0.91. Furthermore, we integrated explainable AI techniques to interpret the model’s predictions, enhancing transparency and fostering trust in the classification outcomes. Additionally, we employed LIME (Local Interpretable Model-agnostic Explanations) to identify key features and the most weighted words responsible for classifying news titles, which validated the accuracy of Bengali news classification results. This study underscores the potential of deep learning models in advancing text classification for the Bengali language and emphasizes the critical role of explainability in AI-driven solutions.
- New
- Research Article
- 10.1080/17499518.2025.2575471
- Nov 6, 2025
- Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards
- Dianqing Li + 4 more
ABSTRACT Accurate seepage flow prediction is essential for risk assessment of earth-rockfill dams. This study integrates a convolutional neural network (CNN) with long short-term memory (LSTM), taking advantage of both feature recognition and time series processing, and the CNN-LSTM model is introduced for the first time to predict dam seepage flow. The efficacy of the developed model is verified using nearly 20 years of monitoring data from two dams at Shenzhen Reservoir. Five other well-established models, including support vector regression (SVR), adaptive boosting, artificial neural network, recurrent neural network, and LSTM models, are also discussed, along with the hyperparameter optimisation of all six models. Results show that the type of kernel function significantly influences the accuracy of the SVR model, with the radial basis kernel function performing best. For the neural network model, more attention should be paid to the choice of hidden nodes than to other hyperparameters. In general, the prediction accuracy rankings of the five comparison models differ between the two dams, indicating that model performance largely depends on site characteristics. The CNN-LSTM model demonstrates consistently high predictive accuracy, suggesting its potential as a reliable tool for dam seepage flow forecasting.
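The CNN front end of a CNN-LSTM extracts local patterns from a monitoring series before recurrence handles long-range dependencies. A minimal 1-D convolution sketch with hypothetical kernels (the seepage values and kernel choices are illustrative, not the paper's):

```python
import numpy as np

def conv1d_features(series, kernels):
    """Slide each kernel over the series (valid padding) to extract
    local patterns, as the CNN stage of a CNN-LSTM would."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(series, k)
    return windows @ kernels.T  # shape: (len(series) - k + 1, n_kernels)

seepage = np.array([1.0, 1.2, 1.1, 1.4, 1.3, 1.5, 1.6, 1.4])
kernels = np.array([[0.25, 0.5, 0.25],   # smoothing kernel
                    [-1.0, 0.0, 1.0]])   # gradient (trend) kernel
feats = conv1d_features(seepage, kernels)
```

Each output column is one learned-filter response; in the full model these feature sequences, not the raw readings, are what the LSTM consumes.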
- New
- Research Article
- 10.1038/s41598-025-22701-z
- Nov 6, 2025
- Scientific reports
- Yang Yang + 5 more
Sports motion recognition is essential for performance analysis, injury prevention, and athlete monitoring. Traditional deep learning models, such as Long Short-Term Memory (LSTM) and Transformer-based architectures, struggle to capture motion dynamics with long-term dependencies or noise in their inputs. To overcome these limitations, this work proposes an Evolved Parallel Recurrent Network (EPRN) with wavelet transforms for high-precision motion recognition. The EPRN framework utilizes parallel recurrent pathways to enhance temporal modeling, while wavelet-based feature extraction preserves the fine-grained details of motion at multiple spatial and temporal resolutions. The proposed method has been tested on benchmark sports motion data and compared with several common architectures, including LSTM, Gated Recurrent Units (GRUs), and Convolutional Neural Network (CNN) models. The experiments demonstrated that EPRN outperforms the other models, reducing the root mean squared error (RMSE) by 23.5% and increasing the structural similarity index (SSIM) by 12.7%, indicating its effectiveness in reconstructing motion trajectories with reduced error. Furthermore, the residual analysis confirms the result that EPRN has lower error variability and less sensitivity to abrupt motion transitions, thus being a more robust solution for real-world applications. The results, therefore, indicate that combining wavelet-transform-based feature extraction with recurrent deep learning significantly enhances the accuracy of motion recognition. The real-life applications of this work include sports performance analysis, real-time motion tracking, and rehabilitation systems. Future work will focus on multimodal data fusion (e.g., video plus wearable sensor data) and also lightweight EPRN variants suitable for real-time applications.
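The wavelet-based feature extraction this abstract pairs with recurrence can be illustrated with one Haar decomposition level: pairwise averages capture the coarse trajectory, pairwise differences the fine-grained detail. A minimal sketch on synthetic motion values, not the paper's EPRN pipeline:

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail), both scaled
    so the transform is orthonormal and perfectly invertible."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

motion = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
approx, detail = haar_step(motion)
```

Applying the step recursively to the approximation yields the multiple spatial and temporal resolutions the abstract refers to, and the original signal is recoverable exactly from the coefficients.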
- New
- Research Article
- 10.54254/2755-2721/2025.ld28972
- Nov 5, 2025
- Applied and Computational Engineering
- Fangruo Wang
In recent years, the integration of computer vision and deep learning in the financial sector has become a research hotspot. Traditional stock price prediction primarily relies on time-series data, while individual investors in the market often make decisions by analyzing visual charts such as candlestick charts. Consequently, research utilizing stock price-related images as input has gradually emerged. This paper proposes an architecture based on the Long Short-Term Memory (LSTM) network to predict future trends using stock price images. Stock price information, including the opening, high, low, and closing prices, is converted into standardized images, and the enhanced LSTM structure proposed in Sequencer (bidirectional LSTM, Bi-LSTM) is then employed for feature extraction and modeling, using machine learning to simulate human investment decisions. Experimental results demonstrate that this model not only outperforms traditional momentum strategies and short-term reversal strategies in stock price prediction accuracy but also maintains robust performance across varying market conditions and transaction delays. This research offers novel insights for stock price forecasting and validates the effectiveness of LSTM in processing image-based financial data.
- New
- Research Article
- 10.51584/ijrias.2025.1010000060
- Nov 5, 2025
- International Journal of Research and Innovation in Applied Science
- Gweneth Bonto + 4 more
SENTIMART: A Web-Based Ordering, Inventory, and Feedback System based on Long Short-Term Memory (LSTM)-based Sentiment Analysis on RTEA Shop is a one-stop web-based system created to address operational drawbacks in specialty tea retail. Its main innovation is applying an LSTM-based sentiment analysis algorithm to customer feedback, producing predictive analyses that are used to forecast inventory and generate customized advice on which products to purchase. Built with PHP and MySQL and evaluated against the ISO 25010 software quality standard, the system proved highly functional, reliable, user-friendly, and efficient. SENTIMART's automated, data-driven features streamline operations, reduce manual activities, and improve the customer experience. Reviews by both technical specialists and end customers show strong consensus in the ISO 25010 assessments, confirming the system's effectiveness as a trustworthy and convenient tool for enhancing the services of RTEA Shop. Technical respondents expressed confidence in the system's sound architecture and the precision of its LSTM sentiment analysis engine, giving it excellent grades for functionality and reliability. Although still rated highly, portability scored somewhat lower, indicating a small opportunity for future improvement in cross-platform consistency. This paper, "Sentimart: A Web-Based Ordering, Inventory & Feedback System using Long Short-Term Memory (LSTM)-Based Sentiment Analysis for RTEA Shop," describes a developmental type of technology research that aims to design, develop, and evaluate a web-based system guided by the ISO/IEC 25010 software quality model, focusing on functionality, reliability, efficiency, usability, and portability.
The development follows the Waterfall Model, which involves sequential phases such as requirements analysis, system design, implementation, testing, deployment, and maintenance to ensure systematic progress and quality assurance. Data collection instruments include a needs assessment survey and interviews conducted before development to identify user requirements, and a Likert-scale questionnaire administered after system implementation to evaluate user satisfaction and system performance against ISO 25010 criteria. Each phase ensures that the system meets expected behavioral standards and technical quality attributes for optimal performance. Overall, this structured approach ensures that Sentimart delivers a reliable, efficient, and user-friendly solution adaptable to various platforms. The study concludes that SENTIMART is a reliable, efficient, and easy-to-use solution that can improve specialized tea stores' customer-facing services as well as their operational structure. The researchers suggest additional improvements based on the evaluation results and project constraints. These include adding more input channels such as speech and emoji analysis, implementing OTP verification for enhanced security, and refining the server infrastructure and LSTM model to eliminate potential real-time processing delays under heavy traffic. Furthermore, a continuous trial is recommended to more accurately evaluate the system's long-term impact on sales and customer loyalty, ensuring its scalability and sustained success in the competitive food and beverage industry.
- New
- Research Article
- 10.1177/01423312251379132
- Nov 5, 2025
- Transactions of the Institute of Measurement and Control
- Xiaoping Guo + 2 more
Industrial process data often possess characteristics such as time series correlation, high dimensionality, and noise. A fault classification method based on the Long Short-Term Memory (LSTM) combined with a Variational Autoencoder (VAE) model (LSTM-VAE) integrates the advantages of LSTM in handling long time series and the VAE in anomaly detection. However, when extracting features, the method mainly focuses on directly using the time series processed through sliding time steps to extract features via the LSTM network, which may lead to neglect or minor influence of abnormal signals during the generation of latent variables from long time series. To address these issues, this paper proposes a Difference Fusion Multi-Latent-Layer Temporal Feature (DFMLF) extraction method. The method calculates the latent variables by weighting the differences of input time series with the hidden states of the LSTM-VAE network to enhance the VAE’s ability to construct features. To further extract features from the generated latent variables that still have temporal characteristics, gated recurrent units are utilized. To prevent information loss, the latent variables before and after modeling are concatenated along the feature dimension and classified using a Convolutional Neural Network. This method was evaluated on the Tennessee Eastman process and a real three-phase flow process, comparing it with six other models. The results validate the effectiveness of the proposed model.
- New
- Research Article
- 10.2166/wpt.2025.133
- Nov 5, 2025
- Water Practice & Technology
- Nisha C M + 1 more
ABSTRACT Accurate prediction of water inflow is essential for effective water resource management and flood prevention. For inflow prediction of water into the dam, this paper offers a thorough analysis of several machine learning models, such as transformer architectures, a hybrid model of transformer with long short-term memory (LSTM), and different diffusion models. Various diffusion models like the simple diffusion model, diffusion with LSTM, diffusion with optimized hyperparameters, random search, grid search, and the Bayesian model are studied. The research is relevant since the prediction of the inflow of water helps the authorities to take appropriate decisions on opening the dam shutters, thereby mitigating flood and drought situations. Several configurations with five and nine features and various epoch settings were examined using a dataset of daily observations spanning 10 years of the Malampuzha dam. The R-squared value (R2) and error metrics (mean absolute error, mean squared error, and root mean squared error) were used to evaluate the performance of these models. According to our findings, the hybrid model of transformer with LSTM performs notably better in terms of accuracy and dependability than both transformer and diffusion models.
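The evaluation metrics named in this abstract (MAE, MSE, RMSE, and R²) have standard definitions and can be computed directly. A self-contained sketch with toy values chosen for illustration:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Compute the error metrics used to compare the inflow models:
    mean absolute error, mean squared error, its root, and R-squared."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.abs(err).mean()
    mse = np.square(err).mean()
    rmse = np.sqrt(mse)
    ss_res = np.square(err).sum()
    ss_tot = np.square(y_true - y_true.mean()).sum()
    r2 = 1.0 - ss_res / ss_tot       # 1.0 means a perfect fit
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

m = forecast_metrics([3.0, 5.0, 7.0], [2.5, 5.0, 7.5])
```

R² compares residual error against the variance of the observations, which is why it is reported alongside the absolute error metrics when model scales differ.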
- New
- Research Article
- 10.14419/kexs8879
- Nov 5, 2025
- International Journal of Basic and Applied Sciences
- Sa’Eed Serwan Abdulsattar + 3 more
This study presents an advanced deep learning framework for forecasting gold futures prices by integrating a dual-attention Long Short-Term Memory (LSTM) network with a comprehensive set of engineered technical indicators. The proposed model employs temporal attention to dynamically reweight historical time steps and feature-level attention to adaptively prioritize influential indicators such as momentum, volatility, and trend measures. This dual mechanism enables the network to capture nonlinear dependencies and shifting market regimes more effectively than conventional models. Empirical evaluation using COMEX Gold Futures (Ticker: GC1) data obtained from Investing.com, covering the period January 2010 to May 2025, demonstrates the model’s superior forecasting accuracy. The proposed framework achieves a Mean Absolute Percentage Error (MAPE) of 0.91% and a coefficient of determination (R²) of 0.995 after calibration, representing a 34.7% reduction in MAE compared with the baseline LSTM. The lag-aware calibration module further refines short-term directional forecasts. At the same time, the dual-attention layers enhance interpretability by revealing the relative importance of indicators and time intervals across market conditions. By combining sequential modeling, adaptive feature selection, and explainable attention visualization, this research delivers a transparent, scalable, and high-performance forecasting framework. The findings offer practical value for traders, analysts, and policymakers seeking reliable and interpretable tools to navigate uncertainty and volatility in commodity markets.
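The dual mechanism this abstract describes, feature-level attention followed by temporal attention, can be sketched with two softmax weightings. The scoring here uses random stand-ins for learned parameters and is only a structural illustration, not the paper's network:

```python
import numpy as np

def dual_attention(X):
    """Dual-attention sketch: feature-level weights reweight the
    indicators at every step, then temporal weights pool across steps.
    Both scoring vectors are random stand-ins for learned parameters."""
    rng = np.random.default_rng(42)
    T, F = X.shape
    feat_scores = rng.normal(size=F)
    feat_w = np.exp(feat_scores) / np.exp(feat_scores).sum()
    Xw = X * feat_w                          # prioritize influential indicators
    time_scores = Xw.sum(axis=1)
    time_w = np.exp(time_scores - time_scores.max())
    time_w /= time_w.sum()                   # softmax over historical steps
    pooled = time_w @ Xw                     # reweighted, pooled representation
    return pooled, feat_w, time_w

X = np.random.default_rng(7).normal(size=(6, 4))  # 6 steps, 4 indicators
pooled, feat_w, time_w = dual_attention(X)
```

Inspecting `feat_w` and `time_w` is what gives such models their interpretability: the weights directly state which indicators and which time intervals dominated a prediction.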
- New
- Research Article
- 10.3390/pr13113559
- Nov 5, 2025
- Processes
- Hongwen Xu + 2 more
Electric vehicles (EVs) powered by lithium-ion batteries are crucial for sustainable transportation. Accurate State of Charge (SOC) estimation, a core function of Battery Management Systems (BMS), enhances battery performance, lifespan, and safety. This paper proposes a hybrid CNN-LSTM-AKF model integrating Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) Neural Networks with an Adaptive Kalman Filter. CNN extracts spatial features from current, voltage, and temperature data, while LSTM processes temporal dependencies. AKF reduces output fluctuations. Trained on datasets under three operating conditions, the model was tested across various temperatures and initial SOC states. Results demonstrate that the proposed model significantly outperforms standalone LSTM and LSTM-AKF models, particularly at low temperatures. Within 0 °C to 50 °C, it achieves Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) below 1.51% and 1.18%, respectively. With an initial SOC of 80%, the model achieves an RMSE of 1.09% and MAE of 0.88%, showing rapid convergence. The model exhibits high accuracy, strong adaptability, and robust performance.
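The Kalman-filter stage that smooths the network's SOC output can be sketched with a scalar filter. This version uses fixed noise parameters `q` and `r`; an adaptive variant (the AKF of the paper) would update them online. All values below are synthetic:

```python
import numpy as np

def kalman_smooth(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter smoothing a noisy SOC estimate stream.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = measurements[0], 1.0      # state estimate and its variance
    out = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update toward the new measurement
        p = (1.0 - k) * p            # uncertainty shrinks after the update
        out.append(x)
    return np.array(out)

noisy_soc = np.array([0.80, 0.79, 0.81, 0.78, 0.80, 0.79])
smoothed = kalman_smooth(noisy_soc)
```

The gain `k` falls as the filter gains confidence, so later estimates react less to single noisy readings, which is exactly the fluctuation reduction the abstract attributes to the AKF stage.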
- New
- Research Article
- 10.3390/forecast7040066
- Nov 5, 2025
- Forecasting
- Adam Booth + 3 more
Accurately forecasting air quality could lead to the development of dynamic, data-driven policy-making and improved early warning detection systems. Deep learning has demonstrated the potential to produce highly accurate forecasting models, but much of the literature focuses on narrow datasets and typically considers one geographic area. In this research, three diverse air quality datasets are utilised to evaluate four deep learning algorithms: feedforward neural networks, Long Short-Term Memory (LSTM) recurrent neural networks, DeepAR and Temporal Fusion Transformers (TFTs). The study uses these models to forecast CO, NO2, O3, and particulate matter 2.5 and 10 (PM2.5, PM10) individually, producing a 24 h forecast for a given sensor and pollutant. Each model is optimised using a hyperparameter tuning and feature selection process, evaluating the utility of exogenous data such as meteorological variables, including wind speed and temperature, along with the inclusion of other pollutants. The findings show that the TFT and DeepAR algorithms achieve superior performance over their simpler counterparts, though they may prove challenging in practical applications. While some covariates, such as CO, are important for predicting NO2 across all three datasets, other parameters, such as context length, proved inconsistent across the three areas, suggesting they are location- and pollutant-specific.
- New
- Research Article
- 10.1371/journal.pone.0323941
- Nov 5, 2025
- PloS one
- Yifan Pang + 1 more
Strong verbal and written communication abilities are more valuable in today's globalized world because of the increased frequency and complexity of cross-border encounters. Professionals require a high degree of linguistic competency and flexibility because of the frequent international communication necessary to handle complex business scenarios, laws, and fluctuating market conditions. The study is driven by a desire to customize language instruction to suit the unique needs of professionals involved in cross-border trade. The goal is to ensure that the skills students learn are relevant to the complexities of this industry. This study tackles the challenge of improving Cross-Border Trade English Education by integrating big data and Artificial Intelligence (AI). The Artificial Intelligence-based Cross-Border Trade English Education (AI-CTEE) uses Long Short-Term Memory (LSTM) networks to create personalized learning experiences, adapt the curriculum dynamically, and provide real-time language support. The AI-CTEE model examines long-term dependencies in sequential data to determine how LSTM-powered language education affects linguistic competency in cross-border trade. The longitudinal study uses LSTM networks to track language proficiency. Academics, communication, and cross-cultural adaptability are assessed. This study investigates the effects of ongoing exposure to LSTM-powered language instruction on the maintenance of language acquisition and the effectiveness of its practitioners in foreign trade settings. Insights into the long-term effects of combining AI with big data in the AI-CTEE model are provided by the study's main conclusions and outcomes. This study highlights the necessity to strategically enhance language skills to survive in the ever-changing world of global trade, contributing to the continuing discourse regarding new language education methods. 
The proposed AI-CTEE model increases the retention rate by 98.5%, CPU utilization by 59%, memory consumption rate by 60%, response time analysis of 194 milliseconds, and interaction period by 78 minutes compared to other existing models.
- New
- Research Article
- 10.3390/rs17213651
- Nov 5, 2025
- Remote Sensing
- Meron Lakew Tefera + 5 more
In semiarid, fragmented landscapes where data scarcity challenges effective land management, accurate soil moisture monitoring is critical. This study presents a high-resolution analysis that integrates remote sensing, in situ data, and machine learning to predict soil moisture and evaluate the impact of land conservation practices. A Long Short-Term Memory (LSTM) model combined with Random Forest gap-filling achieved strong predictive performance (R2 = 0.84; RMSE = 0.103 cm3 cm−3), outperforming SMAP satellite estimates by approximately 30% across key accuracy metrics. The model was applied to 222 field sites in northern Ghana to quantify the effects of stone bunds on soil moisture retention. The results revealed that fields with stone bunds maintained 4–6% higher moisture than non-bunded fields, particularly on steep slopes and in areas with low to moderate topographic wetness. These findings demonstrate the capability of combining remote sensing and deep learning for fine-scale soil-moisture prediction and provide quantitative evidence of how nature-based solutions enhance water retention and climate resilience in dryland agricultural systems.
- New
- Research Article
- 10.54254/2755-2721/2026.tj29132
- Nov 5, 2025
- Applied and Computational Engineering
- Sin-Li Tseng + 3 more
This study presents a forearm gesture classification method based on electromyography (EMG), a recurrent neural network (RNN), and a two-layer Long Short-Term Memory (LSTM). Gesture data from one experimental subject were collected. After preprocessing steps such as filtering and reshaping, an RNN model was trained for automatic movement feature extraction and classification. The experimental results show that when the selected muscle pair is the biceps brachii and extensor carpi radialis longus, the model achieves its highest classification accuracy of 97.5%, significantly more accurate than the other three muscle pairs tested. This study confirms that EMG-based movement classification is sufficiently accurate given an appropriate muscle pair, and provides a direction for future optimization and applications, such as enabling touchless, gesture-based control through muscle signals and improving the recognition of complex gestures.
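The reshaping step that precedes an RNN/LSTM classifier is typically a sliding-window segmentation of the filtered EMG stream. A minimal sketch with a synthetic signal and hypothetical window parameters:

```python
import numpy as np

def segment_windows(signal, win, step):
    """Reshape a 1-D EMG stream into overlapping fixed-length windows,
    the usual preprocessing before feeding a recurrent classifier."""
    starts = range(0, len(signal) - win + 1, step)
    return np.array([signal[s:s + win] for s in starts])

emg = np.arange(10.0)                       # stand-in for a filtered EMG channel
windows = segment_windows(emg, win=4, step=2)
```

Each row becomes one training sequence; overlapping windows (`step < win`) multiply the effective number of samples, which matters when data come from a single subject.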
- New
- Research Article
- 10.54254/2754-1169/2025.bj28943
- Nov 5, 2025
- Advances in Economics, Management and Political Sciences
- Waifong Tian
With the increasing volatility of global capital markets, particularly in the stock market, efficiently forecasting price fluctuations has become a core issue in financial technology. The traditional AutoRegressive Integrated Moving Average (ARIMA) model performs well in fitting linear relationships but struggles to capture complex nonlinear behaviors. In contrast, the deep learning method Long Short-Term Memory (LSTM) is capable of handling nonlinear dependencies and long-sequence features. Based on the assumption that stock market data exhibit both linear and nonlinear characteristics, this study develops a hybrid LSTM-ARIMA model that integrates the strengths of both approaches: ARIMA is first employed to extract linear trends and generate residuals, which are then combined with technical indicators such as Relative Strength Index (RSI) and Moving Average Convergence Divergence (MACD) and fed into LSTM to capture nonlinear fluctuations. Experiments are conducted using 30-minute frequency data of the Standard & Poor's (S&P) 500 index, adopting two integration strategies: dynamic weighting and stacking. The results indicate that during the Federal Reserve's interest rate cut cycle, the model outperforms the single model in terms of accuracy and stability in short-term volatility prediction. Specifically, stacking demonstrates stronger adaptability during policy shock periods by correcting residuals, whereas dynamic weighting, which relies on historical Mean Squared Error (MSE), proves slightly insufficient under regime shifts. This study provides empirical evidence and quantitative insights for financial time series forecasting and volatility management during interest rate cut cycles.
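The dynamic-weighting strategy this abstract describes, blending ARIMA and LSTM forecasts by their historical MSE, can be sketched as inverse-MSE weighting. The error values and forecast numbers below are synthetic illustrations, not results from the paper:

```python
import numpy as np

def dynamic_weights(errors_a, errors_b):
    """Inverse-MSE weighting of two forecasters over a recent window:
    the model with the smaller recent squared error gets the larger weight."""
    mse_a = np.mean(np.square(errors_a))
    mse_b = np.mean(np.square(errors_b))
    inv = np.array([1.0 / mse_a, 1.0 / mse_b])
    return inv / inv.sum()

# Recent ARIMA vs. LSTM errors on a rolling window (synthetic)
w_arima, w_lstm = dynamic_weights([0.4, 0.5, 0.3], [0.2, 0.1, 0.2])
combined = w_arima * 101.0 + w_lstm * 103.0   # blend the two point forecasts
```

Because the weights lag the error history, this scheme adapts slowly after a regime shift, consistent with the abstract's observation that stacking handles policy shocks better than dynamic weighting.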
- New
- Research Article
- 10.3389/fnins.2025.1692122
- Nov 5, 2025
- Frontiers in Neuroscience
- Zhe Cheng + 3 more
The proliferation of deepfake technologies presents serious challenges for forensic speech authentication. We propose a deep learning framework combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to improve detection of manipulated audio. Leveraging the spectral feature extraction of CNNs and the temporal modeling of LSTMs, the model demonstrates superior accuracy and generalization across the ASVspoof2019 LA and WaveFake datasets. Linear Frequency Cepstral Coefficients (LFCCs) were employed as acoustic features and outperformed MFCC and GFCC representations. To enhance transparency and trustworthiness, explainable artificial intelligence (XAI) techniques, including Grad-CAM and SHAP, were applied, revealing that the model focuses on high-frequency artifacts and temporal inconsistencies. These interpretable analyses validate both the model's design and the forensic relevance of LFCC features. The proposed approach thus provides a robust, interpretable, and XAI-driven solution for forensic audio authenticity detection.