Articles published on Sequence
19994 Search results
- New
- Research Article
- 10.1016/j.ygyno.2025.10.029
- Dec 1, 2025
- Gynecologic oncology
- Junsik Park + 7 more
Prognostic implications of HER2 in ovarian cancer: Associations with homologous recombination deficiency and folate receptor alpha expression.
- New
- Research Article
- 10.1016/j.engappai.2025.112289
- Dec 1, 2025
- Engineering Applications of Artificial Intelligence
- Safa Ameur + 2 more
Spatial-temporal generative network based on deep long short-term memory autoencoder for hand skeleton data sequences reconstruction and recognition
- New
- Research Article
- 10.1016/j.neunet.2025.107897
- Dec 1, 2025
- Neural networks: the official journal of the International Neural Network Society
- Selim Reza + 3 more
Enhancing intelligent transportation systems with a more efficient model for long-term traffic predictions based on an attention mechanism and a residual temporal convolutional network.
- New
- Research Article
- 10.61132/neptunus.v3i4.1203
- Nov 30, 2025
- Neptunus: Jurnal Ilmu Komputer Dan Teknologi Informasi
- Ratih Adinda Destari
The exchange of information in the digital era has become a general need of society. However, the information exchanged is often public or confidential in nature, so security measures are needed to keep confidential information safe. Cryptography is the field of knowledge used to secure information through encryption and decryption. One cryptographic approach is the permutation method, which rearranges the layout, order, or structure of data into a form that is difficult to understand without knowledge of the exact key. This research implements permutation-based cryptography in an Android application to protect the confidentiality and privacy of user data: the sequence of bits or characters in the data is scrambled so that unauthorized parties cannot easily interpret it. The results show that the implementation provides a higher level of security in maintaining data confidentiality and protects sensitive information from unauthorized access. However, simple permutation methods may not withstand more sophisticated attacks, so stronger cryptographic methods should be considered where a higher level of security is required.
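The permutation method described in the abstract above can be illustrated with a minimal sketch (not the paper's Android implementation): a block transposition cipher that reorders characters within fixed-size blocks according to a secret key, with the inverse permutation recovering the plaintext. The `encrypt`/`decrypt` helpers and the `_` padding character are illustrative choices.

```python
def encrypt(plaintext: str, key: list[int]) -> str:
    """Permutation cipher: reorder each fixed-size block of text by `key`."""
    n = len(key)
    # pad so the text divides evenly into blocks of size n
    padded = plaintext + "_" * (-len(plaintext) % n)
    blocks = [padded[i:i + n] for i in range(0, len(padded), n)]
    return "".join("".join(block[k] for k in key) for block in blocks)

def decrypt(ciphertext: str, key: list[int]) -> str:
    """Apply the inverse permutation to each block, then strip the padding."""
    n = len(key)
    inverse = [0] * n
    for pos, k in enumerate(key):
        inverse[k] = pos  # invert the permutation
    blocks = [ciphertext[i:i + n] for i in range(0, len(ciphertext), n)]
    return "".join("".join(block[k] for k in inverse) for block in blocks).rstrip("_")
```

As the abstract itself notes, such a scheme only obscures data layout; the key space of a short permutation is small, so this is a didactic construction rather than a secure cipher.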
- New
- Research Article
- 10.1108/jhom-10-2024-0434
- Nov 28, 2025
- Journal of health organization and management
- Y Prathima + 1 more
Storing patient information in cloud systems raises concerns about illegal access and data breaches, so healthcare organizations must thoughtfully choose reliable, compliant cloud service providers while implementing robust security measures. To tackle these issues, a new method known as the Lionized Remora optimization based Recurrent Neural Network (LRObRNN) is recommended to enhance the safety of medical information stored on cloud servers. The LRObRNN generates a secret key using Lionized Remora optimization and employs cryptography to encrypt sensitive healthcare data. Continuous monitoring ensures the security of data transmission by identifying irregularities. Leveraging Recurrent Neural Networks, the system analyzes sequential data, enabling the detection of patterns and potential security breaches during data transmission. The performance evaluation includes metrics such as encryption and decryption time, confidentiality rate, processing time, resource usage, and efficiency, which are compared with other existing models.
- New
- Research Article
- 10.1007/s00603-025-05125-z
- Nov 25, 2025
- Rock Mechanics and Rock Engineering
- Ghasem Aghli + 4 more
Abstract Open fractures are an important contributor to reservoir quality in carbonate formations. Their development and relevant parameters are controlled by various factors, among which lithology is considered the most critical. With this in mind, we investigated differences in open-fracture parameters between dolomite and calcite intervals to understand how lithology influences fracturing. To this end, several wells within a sequence of dolomitic and calcitic intervals from the Asmari Formation (southwest Iran), a major oil-producing fractured reservoir, were chosen. Detailed petrographic and petrophysical analyses were carried out to characterize the fractured intervals in this reservoir. Then, fracture parameters such as density, aperture, porosity, and dip, as well as reservoir heterogeneity, were quantified using high-quality electrical image logs and core data. Finally, fracture parameters were compared between the two lithologies to delineate their variations. Furthermore, sonic waveforms and well-test data from the selected wells confirmed the presence of fractures and their estimated parameters. Our results indicate that all dolomites in this reservoir are diagenetic and range from fine- to coarse-crystalline. Image logs confirmed that all detected fractures are structural (due to folding and faulting) and show larger apertures and better connectivity in the dolomite intervals, particularly in the coarser sections, than in calcites. Moreover, open fractures with large apertures appear to be the main source of heterogeneity within the dolomite intervals. Therefore, crystal size and structural features were identified as two additional factors governing fracture development. This enhanced fracture network was distinguished by fluctuations in sonic waveforms within the coarser dolomite intervals. Finally, the role of effective fracture parameters in reservoir heterogeneity is discussed.
These findings provide valuable insights into fracture characterization in heterogeneous carbonate systems and can support more accurate reservoir modeling and improved development strategies in similar fractured reservoirs worldwide.
- New
- Research Article
- 10.1017/jfm.2025.10863
- Nov 24, 2025
- Journal of Fluid Mechanics
- Lucas Villanueva + 3 more
A data assimilation (DA) strategy based on an ensemble Kalman filter (EnKF) is used to enhance the predictive capabilities of scale-resolving numerical tools for the analysis of flows exhibiting cyclic behaviour. More precisely, an ensemble of numerical runs using large-eddy simulations (LES) for a compressible intake flow rig is augmented via the integration of high-fidelity data. This observation is in the form of instantaneous velocity measurements, which are sampled at localised sensors in the physical domain. Two objectives are targeted. The first objective is the calibration of an unsteady inlet condition suitable to capture the cyclic flow investigated. The second objective is the analysis of the synchronisation of the LES velocity field with the available observations. In order to reduce the computational costs required for this analysis, a hyper-localisation procedure (HLEnKF) is proposed and integrated in the library CONES, tailored to perform fast online DA. The proposed strategy performs a satisfactory calibration of the inlet conditions, and its robustness is assessed using two different prior distributions for the free parameters optimised in this task. The DA state estimation is efficient in obtaining accurate local synchronisation of the inferred velocity fields with the observed data. The modal analysis of the kinetic energy field provides additional insight into the improved reconstruction quality of the velocity field. Thus, the HLEnKF shows promising features for the calibration and synchronisation of scale-resolved turbulent flows, opening perspectives of applications for complex phenomena using advanced tools such as digital twins.
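The ensemble Kalman filter at the heart of the strategy above can be sketched for the simplest possible case: a scalar state observed directly. This is a generic stochastic EnKF analysis step with hypothetical numbers, not the paper's HLEnKF procedure or the CONES library.

```python
import random
from statistics import mean, pvariance

def enkf_update(ensemble, y_obs, obs_var, rng):
    """Stochastic EnKF analysis step for a scalar state observed directly."""
    p_f = pvariance(ensemble)        # forecast-ensemble variance
    gain = p_f / (p_f + obs_var)     # scalar Kalman gain K = P / (P + R)
    # each member assimilates an independently perturbed copy of the observation
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(2.0, 1.0) for _ in range(500)]            # forecast ensemble
posterior = enkf_update(prior, y_obs=4.0, obs_var=0.5, rng=rng)
# the analysis pulls the ensemble mean toward the observation and shrinks the spread
```

In the paper's setting the state is a full LES velocity field and localisation restricts each update to sensors near the updated point; the scalar case above only shows the gain-weighted blend of forecast and observation that every EnKF variant shares.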
- New
- Research Article
- 10.14710/lenpust.v11i2.76832
- Nov 19, 2025
- Lentera Pustaka: Jurnal Kajian Ilmu Perpustakaan, Informasi dan Kearsipan
- Rheza Ega Winastwan + 3 more
Background: Community-based libraries rely heavily on volunteers to sustain their programs and services. However, volunteer empowerment and organizational sustainability can be limited without structured knowledge sharing and management. Limbah Pustaka Library, a community library in Indonesia, has implemented various initiatives to optimize the contribution of its volunteers. Objective: This study explored the importance of knowledge management in empowering volunteers at Limbah Pustaka Library, Purbalingga. The research is motivated by the limited human resources in village libraries and the need for knowledge management strategies that enable volunteers to contribute sustainably to library services and community development. Research Methods: A qualitative descriptive approach was employed. Data were collected through interviews with volunteers and library staff, participant observation of library activities, and review of relevant documents. The data analysis followed a sequential process of data condensation, presentation, and conclusion drawing. Data Analysis: In addition to descriptive analysis, data were analyzed thematically to identify evidence of each SECI stage and assess its impact on volunteer empowerment. The analysis focused on knowledge sharing and creation patterns corresponding to the SECI processes. Results: All four SECI stages manifested in the volunteer program. Socialization occurred through group discussions and collaborative activities among volunteers; externalization was achieved by documenting volunteers' tacit knowledge into guides and procedures; combination involved integrating various explicit information into shared resources; and internalization took place through training and hands-on practice. Implementing the SECI model enhanced knowledge sharing and volunteer collaboration, improving service innovation and community engagement. Conclusion: SECI-based knowledge management effectively empowered the Limbah Pustaka Library volunteers, as evidenced by increased collaboration and innovative service delivery. These findings support the use of structured knowledge management strategies in community library programs.
- New
- Research Article
- 10.55220/2576-6821.v9.716
- Nov 11, 2025
- Journal of Banking and Financial Dynamics
- Ying Wang + 2 more
Temporal pattern recognition has become increasingly critical for predictive analytics in various domains, particularly in demand forecasting where accurate predictions directly impact business operations and profitability. Neural network (NN) architectures have demonstrated remarkable capabilities in capturing complex temporal dependencies within sequential data, outperforming traditional statistical methods in numerous applications. This review examines the evolution and application of neural network approaches specifically designed for temporal pattern recognition, with emphasis on their utilization in demand forecasting and predictive analytics. The paper provides a comprehensive analysis of recurrent neural networks (RNNs), long short-term memory (LSTM) networks, gated recurrent units (GRUs), convolutional neural networks (CNNs), and transformer-based architectures in the context of time series forecasting. Furthermore, this review explores the integration of attention mechanisms, the emergence of spatiotemporal graph neural networks (STGNNs), and hybrid model architectures that combine multiple approaches to enhance forecasting accuracy. The evaluation metrics commonly employed to assess model performance, including mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE), are discussed alongside benchmark datasets utilized in the field. Through systematic examination of recent literature spanning from 2019 to 2025, this review identifies key architectural innovations, practical applications in retail and supply chain management, and emerging trends that define the current state of temporal pattern recognition. The findings reveal that while transformer-based models have gained significant attention for long-sequence forecasting, simpler linear architectures and hybrid approaches often demonstrate competitive or superior performance depending on dataset characteristics and application requirements. 
This comprehensive review serves as a foundation for researchers and practitioners seeking to understand the landscape of neural network methodologies for temporal pattern recognition and their practical deployment in demand forecasting systems.
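The evaluation metrics the review above discusses (MAE, RMSE, MAPE) have simple closed forms; a minimal sketch with hypothetical demand values follows, where the `actual`/`forecast` numbers are made up for illustration.

```python
from math import sqrt

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the forecast errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors more heavily than MAE."""
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mape(y_true, y_pred):
    """Mean absolute percentage error; undefined when a true value is zero."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

actual   = [100.0, 120.0, 90.0, 110.0]   # hypothetical observed demand
forecast = [ 98.0, 125.0, 85.0, 112.0]   # hypothetical model output
# mae -> 3.5, rmse ~ 3.81, mape ~ 3.39%
```

Because MAPE divides by the true value, it overweights errors on small-demand periods; that asymmetry is one reason the review reports MAE and RMSE alongside it.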
- New
- Research Article
- 10.5713/ab.250595
- Nov 10, 2025
- Animal bioscience
- Eunjeong Jeon + 1 more
Accurate early prediction of final body weight (BW) is essential for optimized feeding strategies and slaughter planning in beef cattle production. This study evaluated the performance of three machine learning models (k-nearest neighbors, Random Forest, eXtreme Gradient Boosting), and one deep learning model [long short-term memory (LSTM)] to forecast the final BW of Hanwoo steers at various time points prior to slaughter. A total of 196 Hanwoo steers (7 to 31 months of age) from a commercial farm were utilized. Input data included monthly BW and feed nutrient intake (crude protein, ether extract, neutral detergent fiber, total digestible nutrients) across three growth stages. Six input configurations (I1-I6) were designed to predict the final BW at 17, 13, 9, 6, 3, and 1 month(s) before slaughter, with a target age of 31 months. The machine and deep learning models were assessed by five-fold cross-validation (training set) and a test set and evaluated via the coefficient of determination (R²) and root mean squared error (RMSE). Among the tested models, the LSTM achieved the highest prediction accuracy across all the configurations. The performance of the LSTM improved as the prediction point approached the target slaughter age: I1 (R² = 0.60, RMSE = 52.80), I2 (0.72, 45.40), I3 (0.76, 40.92), I4 (0.83, 35.84), I5 (0.90, 33.12), and I6 (0.97, 22.62). These results demonstrated that LSTM effectively captured temporal dependencies in sequential data, enabling more accurate BW forecasting under commercial conditions. While I6 achieved the highest prediction accuracy, the 3-6 month predictions (I4 and I5) demonstrated reasonably high accuracy, which could provide a practical timeframe for farm-level management and planning. This approach could be utilized in evidence-based decision-making in Hanwoo production by providing reliable predictions well ahead of slaughter.
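Forecasting a final value several months ahead from a window of monthly records, as in the study above, first requires turning the series into supervised (window, target) pairs. The sketch below is a generic windowing helper with hypothetical body-weight numbers; `make_sequences` is an illustrative name, not the paper's pipeline.

```python
def make_sequences(series, window, horizon):
    """Turn a monthly series into (input window, target) pairs for sequence models.

    Each pair uses `window` consecutive values as input and the value
    `horizon` steps after the window's end as the prediction target.
    """
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        pairs.append((series[i:i + window], series[i + window + horizon - 1]))
    return pairs

weights = [250, 270, 295, 315, 340, 360, 385, 400]  # hypothetical monthly BW, kg
# predict 3 months ahead from a 4-month window
pairs = make_sequences(weights, window=4, horizon=3)
# pairs[0] == ([250, 270, 295, 315], 385)
```

An LSTM would then be trained on such pairs; shortening the horizon (the paper's I1 through I6 configurations) gives the model more recent context, which matches the reported accuracy gains closer to slaughter.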
- New
- Research Article
- 10.3390/genes16111356
- Nov 10, 2025
- Genes
- Haibo Liu + 2 more
Background/Objectives: BrdU (5-bromo-2′-deoxyuridine), a synthetic thymidine (T) analog, is widely used to study cell proliferation and DNA synthesis. To precisely identify where and when DNA replication starts and terminates, it is essential to determine the BrdU incorporation rate and sites at a single-nucleotide resolution. Although several deep learning-based methods have been developed for detecting BrdU using Oxford nanopore sequencing data, there is a lack of accessible, easy-to-follow tutorials to guide researchers in preparing training data and implementing deep learning approaches as the nanopore sequencing technologies continue to evolve. Methods: Due to the lack of ground truth BrdU-positive data generated on the latest R10 flow cells, we prepared model training data from legacy R9 flow cells, consistent with existing tools. We processed publicly available synthetic and real nanopore DNA sequencing datasets, with and without BrdU incorporation, using a combination of open-source and custom software tools. Subsequently, we trained bidirectional gated recurrent unit (BiGRU)-based recurrent neural networks (RNNs) for BrdU detection using the TensorFlow library on the Google Colab platform. Results: We trained BiGRU-based RNNs for BrdU detection with a high specificity (>94%) but a moderate sensitivity due to limited BrdU-positive data. We detail the setup, training, testing, and fine-tuning of the model using both synthetic and real DNA sequencing data. Conclusions: Though the models were trained with data generated on legacy flow cells, we believe that this detailed protocol, covering both data preparation and model development, can be readily extended to R10 flow cells and basecallers for other base modifications. This work will facilitate the broader adoption of deep learning neural networks in biological research, particularly RNNs, which are well suited for modeling sequential and time-series data.
- Research Article
- 10.54392/irjmt2569
- Nov 6, 2025
- International Research Journal of Multidisciplinary Technovation
- Pranjali Kasture + 1 more
Stock price prediction is a complex problem because financial time series data are volatile and complicated. A model must learn the temporal relationships and complex spatial patterns in the data for precise stock price prediction. Conventional stock price forecasting methods have many limitations in handling nonlinear, complex, and dynamic data. This study assesses a hybrid deep learning model integrated with a triple attention mechanism to predict stock prices. Experiments show that the proposed MTA-HDCRNN model performs well on intricate data. The deep CNN is effective at finding local patterns in the data, whereas the simple RNN helps learn sequential dependencies. The triple attention mechanism emphasizes which features to focus on and where to focus. The datasets used for analysis are the BSE and Nifty 50, with web scraping used to obtain news data. Feature extraction includes statistical features, entropy features, PCA features, and technical indicators. Overall, the complete architecture of the proposed model is robust: error values decrease by 2% to 6% compared with existing state-of-the-art models. Experimentation shows that the proposed model enhances stock price prediction, making it useful for investors and financial analysts in decision-making.
- Research Article
- 10.1371/journal.pone.0323941
- Nov 5, 2025
- PloS one
- Yifan Pang + 1 more
Strong verbal and written communication abilities are more valuable in today's globalized world because of the increased frequency and complexity of cross-border encounters. Professionals require a high degree of linguistic competency and flexibility because of the frequent international communication necessary to handle complex business scenarios, laws, and fluctuating market conditions. The study is driven by a desire to customize language instruction to suit the unique needs of professionals involved in cross-border trade. The goal is to ensure that the skills students learn are relevant to the complexities of this industry. This study tackles the challenge of improving Cross-Border Trade English Education by integrating big data and Artificial Intelligence (AI). The Artificial Intelligence-based Cross-Border Trade English Education (AI-CTEE) uses Long Short-Term Memory (LSTM) networks to create personalized learning experiences, adapt the curriculum dynamically, and provide real-time language support. The AI-CTEE model examines long-term dependencies in sequential data to determine how LSTM-powered language education affects linguistic competency in cross-border trade. The longitudinal study uses LSTM networks to track language proficiency. Academics, communication, and cross-cultural adaptability are assessed. This study investigates the effects of ongoing exposure to LSTM-powered language instruction on the maintenance of language acquisition and the effectiveness of its practitioners in foreign trade settings. Insights into the long-term effects of combining AI with big data in the AI-CTEE model are provided by the study's main conclusions and outcomes. This study highlights the necessity to strategically enhance language skills to survive in the ever-changing world of global trade, contributing to the continuing discourse regarding new language education methods. 
Compared with other existing models, the proposed AI-CTEE model improves the retention rate (98.5%), CPU utilization (59%), memory consumption rate (60%), response time (194 milliseconds), and interaction period (78 minutes).
- Research Article
- 10.1088/1402-4896/ae1adf
- Nov 3, 2025
- Physica Scripta
- Fangqin Wang + 5 more
Abstract Deep learning has become a research focus in academia and industry due to its ability to effectively extract fault features from rotating machinery. However, given the variability of high-power variable-frequency industrial systems, existing models achieve low accuracy in identifying electrical-erosion faults in rail transit motor bearings. Moreover, current models struggle to fully integrate the temporal information of such faults and face technical challenges related to poor interpretability. To address these shortcomings, this paper develops a novel neural network architecture for global fusion of temporal sequence information, named BISR-Former, which focuses on the problem of hard-to-identify bearing electrical-erosion faults in rail transit motors in actual engineering applications. Firstly, inspired by the successful application of Transformer architectures in natural language processing, we make the first attempt to adapt i-Transformer to the task of motor-bearing fault diagnosis. We devise a Global Temporal Information Fusion module that comprehensively captures the global dependencies between long nonlinear sequences of motor-bearing data; introducing this module gives the model the advantages of dynamic weighting and parallel computation. Secondly, recognizing the strong time-varying nature of bearing-failure time series in rail transit motors, we design a bidirectional local temporal feature extraction module; integrating it enables the framework to capture both long-range global dependencies and short-range local temporal features through bidirectional temporal modeling. Consequently, the framework attains a more comprehensive understanding of the sequential dynamics underlying motor-bearing fault evolution.
Finally, extensive experiments on a real-world motor-bearing dataset confirm the proposed framework's superior performance and strong generalization capability. At the same time, t-SNE is introduced into the framework to enhance the interpretability of the fault-feature extraction process.
- Research Article
- 10.1038/s41598-025-24326-8
- Nov 3, 2025
- Scientific Reports
- Prashant Kumar Shukla + 5 more
The rapid development of social media has triggered a surge of fake accounts, which pose a serious risk to user privacy and platform integrity. Such accounts are difficult to detect because user activity data is highly imbalanced, high-dimensional, and sequential, and current techniques tend to miss complicated activity patterns or overfit, which is why a strong, scalable, and precise model of social media fraud detection is required. This study proposes a new deep learning architecture that combines a Temporal Convolutional Network (TCN) with Generative Adversarial Network (GAN)-based data augmentation to generate minority-class samples and Autoencoder-based feature extraction to reduce dimensionality. The Seagull Optimization Algorithm (SOA), a metaheuristic, is used to optimize hyperparameters by balancing efficiency and convergence speed in global search. The framework is tested on benchmark datasets (Cresci-2017 and TwiBot-22) and compared to state-of-the-art models. Experiments show that the proposed TCN-GAN-SOA framework performs better, with ROC-AUC scores of 0.96 on Cresci-2017 and 0.95 on TwiBot-22, higher precision-recall values, and better F1-scores. In addition, runtime analysis verifies its computational efficiency, and case studies prove the framework's strength in handling various fraudulent behaviors. The solution offers a scalable, reliable, and accurate methodology for detecting social media fraud based on the combination of sophisticated sequence modeling, realistic data augmentation, and hyperparameter optimization.
- Research Article
- 10.53360/2788-7995-2025-3(19)-8
- Nov 3, 2025
- Bulletin of Shakarim University. Technical Sciences
- D Amrin + 3 more
Due to their complex and unpredictable nature, stock market movements have always been challenging to predict. Factors like economic indicators, market sentiment, and political and global events contribute significantly to stock price unpredictability. Different methods exist to analyze risks, returns, and average price movements, on which investors base their assumptions. Identifying patterns and making the right decisions over large amounts of data is very difficult, but with the advancement of neural networks we can approach prediction problems by identifying patterns in high-dimensional sequential data. We analyze and compare five neural network architectures, including Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), Gated Recurrent Units (GRUs), Convolutional Neural Networks (CNNs), and Artificial Neural Networks (ANNs), to predict stock prices using historical data taken from the Yahoo Finance API, which is widely used and reliable for financial data analysis. We separate the historical data into two parts: 80% for training and 20% for testing. For each model, we use the hyperparameters we found most effective during training. Popular Python libraries such as TensorFlow, Keras, and NumPy are used for efficient implementation. Additionally, we preprocess the data with cleaning and normalization to avoid errors and enhance model performance. The models are evaluated on prediction accuracy using metrics like Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and R-squared (R²). Additionally, we use classification metrics such as the confusion matrix and Receiver Operating Characteristic - Area Under the Curve (ROC-AUC) to analyze each model's performance in predicting price movement directions.
We conclude that the GRU model achieves the highest accuracy and reliability in our analysis, with notable performance in classification metrics. Conversely, the simple ANN model shows the worst results, highlighting the variability in predictive capabilities across different neural network architectures.
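The 80/20 split described above must preserve chronological order for time series: shuffling before splitting would leak future prices into the training set. A minimal sketch of such a split, with made-up price values, could look like this (illustrative, not the paper's code):

```python
def chronological_split(series, train_frac=0.8):
    """Split a time series in order (no shuffling) to avoid look-ahead leakage."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

prices = list(range(100, 200))          # 100 hypothetical daily closing prices
train, test = chronological_split(prices)
# len(train) == 80, len(test) == 20; every test day comes after every train day
```

Normalization statistics (min/max or mean/std) should likewise be computed on `train` only and then applied to `test`, for the same leakage reason.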
- Research Article
- 10.1002/ese3.70346
- Nov 2, 2025
- Energy Science & Engineering
- Nadeem Ahmed Tunio + 5 more
ABSTRACT Prompt and accurate fault detection in extra-high-voltage transmission lines is required to guarantee the stability of the power system. This study evaluates the performance of BiLSTM, GRU, and TCN deep learning models for the detection and classification of transmission-line faults using synthetic and real-time sequential datasets from the 500 kV transmission line between Jamshoro and Karachi (NKI) in Sindh, Pakistan. By testing the models on simulated faults versus real fault events, the study identifies a major gap and offers insights into their practical applicability. The results show that deep learning models can reach a high level of accuracy in classifying different transmission-line faults, forming a basis for exploiting modern fault detection practices in operating grids to improve their dependability and flexibility. On the simulated dataset, BiLSTM achieved an accuracy of 98.31%, GRU 94.27%, and TCN 99.8%; on real-time fault data, BiLSTM scored 62.05%, GRU 96.43%, and TCN 100%. These results demonstrate that the deep learning models used in this study analyze time series data well, achieving high accuracy for fault classification in transmission lines. Overall, the study identifies the best model for managing faults on extra-high-voltage transmission lines under different conditions.
- Research Article
- 10.1002/eng2.70432
- Nov 1, 2025
- Engineering Reports
- Chethan C Raj + 7 more
ABSTRACT Trust evaluation in SIoT networks is increasingly complex, requiring advanced classification models. This study presents SAFE-SIoT, an Intelligent Trust Classification System (ITCS) that leverages Skip-GRU-Attention and Feed-Forward Neural Networks to model trust relationships non-linearly. The framework enhances classification accuracy by incorporating centralization-based trust indicators, which improve differentiation among entities and optimize trust evaluation. SAFE-SIoT integrates Gated Recurrent Units (GRU) with skip connections to preserve long-range dependencies and mitigate vanishing gradients in sequential trust data; a Bi-Layered Attention (BLA) mechanism that combines Position-Spatial Attention (PSA) and Channel Attention (CA), enabling the model to focus on both spatially salient regions and semantically rich channels; and an Extreme Learning Machine (ELM) as a lightweight classifier offering fast training and strong generalization, particularly beneficial for resource-constrained IoT nodes. Hierarchical attention for deep feature extraction makes the system adaptable and robust, suiting it to secure and scalable smart city Social Internet of Things (SIoT) environments. SAFE-SIoT achieves 97.5% accuracy on benchmark SIoT trust datasets, with precision, recall, and F1-scores of 96.8%, 97.2%, and 97.0%, respectively, and demonstrates robust scalability across heterogeneous IoT deployments with minimal latency overhead. It consistently outperforms baseline models such as LSTM, CNN-GRU, and vanilla GRU, supports real-time trust evaluation in dynamic and decentralized SIoT topologies, and is compatible with federated learning extensions for privacy-preserving deployment. Future research will focus on further optimizing the model's adaptability to real-world applications and refining trust parameter integration to improve predictive capabilities.
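The Extreme Learning Machine used as the classifier above trains only its output layer: hidden weights are random and fixed, and output weights come from a single least-squares solve. The sketch below shows that idea on a toy binary "trust" labeling task; the hidden size, data, and threshold are assumptions for illustration, not the SAFE-SIoT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=64):
    """ELM: fixed random hidden layer, output weights via least squares."""
    W = rng.normal(size=(X.shape[1], hidden))     # random input weights, never updated
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only the output layer is "trained"
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy task: an entity is "trusted" (label 1) when its feature sum is positive
X = rng.normal(size=(200, 5))
y = (X.sum(axis=1) > 0).astype(float)
W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == (y > 0.5))
```

Because there is no iterative backpropagation, training cost is a single linear solve, which is the property that makes ELMs attractive for the resource-constrained nodes the abstract mentions.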
- Research Article
- 10.1016/j.ins.2025.122325
- Nov 1, 2025
- Information Sciences
- Jianmei Ren + 2 more
Handling unobserved confounding for continuous sequential data via extracting key sub-intervals