Articles published on Network Architecture
- New
- Research Article
- 10.1007/s00417-025-07015-0
- Nov 8, 2025
- Graefe's Archive for Clinical and Experimental Ophthalmology
- Kangyeun Pak + 2 more
To quantitatively analyze changes in the extent of hard exudates (HEs) following anti-VEGF therapy for diabetic macular edema (DME) and their relationships with visual outcomes. This post-hoc analysis of DRCR Protocol T included 260 eyes of 260 patients. The volume of HEs was measured by automatically quantifying hyper-reflective foci (HRF) on structural optical coherence tomography (OCT) volumes using a supervised convolutional neural network architecture, "DUCK-Net". HEs were quantified within the entire ETDRS grid as well as within the central subfield (CSF), inner ring (IR), and outer ring (OR) at baseline and at 4, 12, 24, and 52 weeks (w) after treatment. The extent of HEs at baseline and over time was then correlated with visual acuity (VA) and retinal thickness outcomes. Following initiation of anti-VEGF therapy, HEs significantly increased from baseline (0.0293 ± 0.0455 mm3) to w4 (0.0328 ± 0.0492 mm3) and peaked at w12 (0.0350 ± 0.0513 mm3), but decreased by w52 (0.0165 ± 0.0275 mm3) within the entire ETDRS region (P < 0.001 for all comparisons), as well as within the OR and IR. Multiple regression analysis revealed that baseline HEs within the OR were one of the independent predictors of w52 VA (adjusted R2 = 0.160). Following anti-VEGF therapy for DME, HEs initially increase and subsequently decrease over one year. A greater extent of HEs at baseline is associated with worse visual outcomes at one year.
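The abstract does not include the DUCK-Net segmentation details, but the volumetric quantification step it describes reduces to summing segmented voxels within a region of interest. A minimal sketch, assuming illustrative voxel dimensions and ETDRS ring radii rather than the study's actual scan parameters:

```python
# Minimal sketch (not the authors' pipeline): converting a binary hyper-reflective
# foci (HRF) segmentation of an OCT volume into a hard-exudate volume in mm^3.
# Voxel dimensions and the ETDRS ring radii below are illustrative assumptions.
import numpy as np

def he_volume_mm3(mask: np.ndarray, voxel_mm=(0.004, 0.062, 0.012)) -> float:
    """mask: boolean array (depth, B-scans, A-scans); voxel_mm: axial, B-scan spacing, lateral."""
    voxel_volume = float(np.prod(voxel_mm))            # mm^3 per voxel
    return float(mask.sum()) * voxel_volume            # total segmented volume

def etdrs_region_mask(shape_xy, center, pixel_mm, r_inner=0.5, r_outer=1.5):
    """Boolean en-face mask for an ETDRS ring between r_inner and r_outer (radii in mm)."""
    ys, xs = np.indices(shape_xy)
    dist = np.hypot((ys - center[0]) * pixel_mm[0], (xs - center[1]) * pixel_mm[1])
    return (dist >= r_inner) & (dist < r_outer)

# Example: total HE volume and HE volume restricted to the 0.5-1.5 mm inner ring.
mask = np.zeros((496, 97, 512), dtype=bool)
mask[200:210, 40:50, 250:260] = True                   # toy "lesion"
ring = etdrs_region_mask((97, 512), center=(48, 256), pixel_mm=(0.062, 0.012))
print(he_volume_mm3(mask), he_volume_mm3(mask & ring[None, :, :]))
```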
- New
- Research Article
- 10.1038/s41598-025-25573-5
- Nov 7, 2025
- Scientific reports
- Alessandro Puleio + 1 more
Spectroscopy covers a wide range of applications across fields of science such as physics, biology, chemistry, engineering, and medicine. In many spectroscopic applications, the analysis of the spectra largely determines the technique's performance in terms of sensitivity, specificity, and reliability. For this reason, solutions based on machine and deep learning algorithms have been extensively explored as possible alternatives to standard methodologies. Recently, an innovative neural network architecture and training approach, based on a physics-informed neural network, have been developed to solve problems where standard supervised deep learning algorithms cannot be used. This method extracts information from spectra without supervision, i.e. without controlled experiments in which both the spectra and the desired pieces of information are known, opening the possibility of addressing a large number of problems where no controlled dataset (what is known as a training set in machine and deep learning) is available. However, in previous work the method was presented only for simple, linear cases, limiting its range of applications. In this work, the physics-informed deep learning methodology is generalised to tackle both non-linear and multi-agent cases. After being formally introduced, the methodology is tested on synthetic cases and compared with standard supervised algorithms.
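A hedged sketch of the general idea described here, not the authors' model: a network maps each spectrum to physical parameters, a differentiable forward model (assumed here to be a single Gaussian line) re-synthesises the spectrum, and the reconstruction error serves as the loss, so no labelled training set is needed.

```python
# Hedged sketch of physics-informed, label-free parameter extraction from spectra.
# The Gaussian forward model and all sizes are assumptions, not the paper's setup.
import torch
import torch.nn as nn

wavelengths = torch.linspace(0.0, 1.0, 256)

def forward_model(params):             # params: (batch, 3) -> amplitude, centre, width
    amp, mu, sigma = params[:, 0:1], params[:, 1:2], params[:, 2:3].abs() + 1e-3
    return amp * torch.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

true_params = torch.tensor([[1.0, 0.4, 0.05], [0.7, 0.6, 0.08]])
spectra = forward_model(true_params) + 0.01 * torch.randn(2, 256)   # synthetic data

for step in range(1000):
    optimizer.zero_grad()
    pred = forward_model(encoder(spectra))       # physics-informed reconstruction
    loss = ((pred - spectra) ** 2).mean()        # no labels required
    loss.backward()
    optimizer.step()
```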
- New
- Research Article
- 10.1186/s13007-025-01461-x
- Nov 7, 2025
- Plant methods
- Manon Chossegros + 5 more
Plant diseases can cause heavy yield losses in arable crops, resulting in major economic losses. Effective early disease recognition is paramount for modern large-scale farming. Since plants can be infected with multiple concurrent pathogens, it is important to be able to distinguish and identify each disease so that appropriate treatments can be applied. Hyperspectral imaging is a state-of-the-art computer vision approach that can improve plant disease classification by capturing a wide range of wavelengths before symptoms become visible to the naked eye. Whilst much work has applied the technique to identifying single infections, to our knowledge it has not been used to analyse multiple concurrent infections, which presents both practical and scientific challenges. In this study, we investigated three wheat pathogens (yellow rust, mildew and Septoria), cultivating co-occurring infections and producing a dataset of 1447 hyperspectral images of single and double infections on wheat leaves. We used this dataset to train four disease classification algorithms (based on four neural network architectures: Inception and EfficientNet with either a 2D or 3D convolutional layer input). The highest accuracy was achieved by EfficientNet with a 2D convolution input, with 81% overall classification accuracy, including 72% accuracy for detecting a combined infection of yellow rust and mildew. Moreover, we found that hyperspectral signatures of a pathogen depended on whether another pathogen was present, raising interesting questions about the co-existence of several pathogens on one plant host. Our work demonstrates that the application of hyperspectral imaging and deep learning is promising for classification of multiple infections in wheat, even with a relatively small training dataset, and opens opportunities for further research in this area. However, the limited number of Septoria and yellow rust + Septoria samples highlights the need for larger, more balanced datasets in future studies to further validate and extend our findings under field conditions.
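One simple way to realise the "2D convolutional layer input" mentioned above is to treat the spectral bands as the input channels of the network's stem convolution. The sketch below adapts a torchvision EfficientNet-B0 this way; the band and class counts are placeholder assumptions, not the study's configuration.

```python
# Illustrative adaptation of EfficientNet to a hyperspectral cube via a 2D conv input.
import torch
import torch.nn as nn
from torchvision import models

num_bands, num_classes = 224, 6       # assumed: spectral bands, disease classes

model = models.efficientnet_b0(weights=None)
# Replace the 3-channel RGB stem with one accepting all spectral bands at once.
model.features[0][0] = nn.Conv2d(num_bands, 32, kernel_size=3, stride=2,
                                 padding=1, bias=False)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)

cube = torch.randn(4, num_bands, 128, 128)    # batch of hyperspectral leaf patches
logits = model(cube)                          # (4, num_classes)
```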
- New
- Research Article
- 10.3390/machines13111027
- Nov 6, 2025
- Machines
- Xin Chen + 1 more
In the context of Industry 4.0 and smart manufacturing, predicting cutting tool remaining useful life (RUL) is crucial for ensuring the reliability and efficiency of CNC machining. This paper presents an innovative predictive model based on a data fusion architecture of Graph Neural Networks (GNNs) and Transformers that simultaneously addresses shallow multimodal data fusion, insufficient relational modeling, and single-task limitations. The model harnesses time-series data, geometric information, operational parameters, and phase contexts through dedicated encoders, employs graph attention networks (GATs) to infer complex structural dependencies, and utilizes a cross-modal Transformer decoder to generate fused features. A dual-head output enables joint RUL regression and health-state classification of cutting tools. Experiments are conducted on a multimodal dataset of 824 entries derived from multi-sensor data, constructing a systematic framework centered on tool flank wear width (VB) that includes correlation analysis, trend modeling, and risk assessment. Results demonstrate that the proposed model outperforms baseline models, with MSE reduced by 26–41%, MAE by 33–43%, R2 improved by 6–12%, accuracy by 6–12%, and F1-score by 7–14%.
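As an illustration of the dual-head fusion idea (dedicated per-modality encoders, cross-modal Transformer decoding, and joint RUL regression plus health-state classification), the following hedged sketch omits the graph-attention stage and uses assumed layer sizes; it is not the paper's implementation.

```python
# Hedged sketch of a dual-head multimodal fusion model for tool prognostics.
import torch
import torch.nn as nn

class DualHeadToolModel(nn.Module):
    def __init__(self, ts_dim=8, ctx_dim=12, d_model=64, n_states=3):
        super().__init__()
        self.ts_encoder = nn.GRU(ts_dim, d_model, batch_first=True)      # time-series encoder
        self.ctx_encoder = nn.Linear(ctx_dim, d_model)                    # geometry / parameters / phase
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerDecoder(decoder_layer, num_layers=2)  # cross-modal decoder
        self.rul_head = nn.Linear(d_model, 1)                             # remaining useful life
        self.state_head = nn.Linear(d_model, n_states)                    # health-state classes

    def forward(self, ts, ctx):
        seq, _ = self.ts_encoder(ts)                       # (B, T, d_model)
        memory = self.ctx_encoder(ctx).unsqueeze(1)        # (B, 1, d_model)
        fused = self.fusion(tgt=seq, memory=memory).mean(dim=1)
        return self.rul_head(fused).squeeze(-1), self.state_head(fused)

model = DualHeadToolModel()
rul, state_logits = model(torch.randn(16, 50, 8), torch.randn(16, 12))
loss = nn.functional.mse_loss(rul, torch.rand(16)) + \
       nn.functional.cross_entropy(state_logits, torch.randint(0, 3, (16,)))
```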
- New
- Research Article
- 10.1038/s41598-025-15883-z
- Nov 6, 2025
- Scientific reports
- Heba M Elreify + 5 more
Lysine 2-hydroxyisobutyrylation (Khib) has emerged as a crucial post-translational modification (PTM) with significant roles in diverse biological processes ranging from gene expression to metabolic regulation. Despite its importance, computational approaches for accurately predicting Khib sites remain limited. This study introduces BLOS-Khib, a deep-learning framework that utilizes evolutionary information encoded in the BLOSUM62 matrix within a Convolutional Neural Network (CNN) architecture for cross-species Khib site prediction. Through systematic optimization, we found that a 43-amino-acid peptide length captures the optimal sequence context for prediction across six taxonomically diverse organisms. Comprehensive comparative analyses demonstrated BLOS-Khib's competitive performance compared to existing methods, achieving notable Area Under the ROC Curve (AUC) values on independent test sets: human (0.913), wheat (0.892), T. gondii (0.893), rice (0.887), Candida albicans (0.885), and Botrytis cinerea (0.903). Our framework showed improved performance compared to state-of-the-art approaches, including traditional machine learning classifiers and alternative deep learning architectures. Sequence signature analysis revealed both conserved lysine-rich regions preceding modification sites and species-specific amino acid preferences at positions immediately flanking the target residue. Notably, our cross-species applicability experiments identified high transferability between evolutionarily distant organisms, suggesting potential convergent evolution of Khib determinants. BLOS-Khib demonstrates competitive performance for PTM prediction while providing evolutionary insights into the sequence determinants governing this emerging regulatory mechanism across diverse species.
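A minimal sketch of the encoding-plus-CNN idea, not BLOS-Khib itself: each residue of a 43-amino-acid window is represented by its row of BLOSUM62 substitution scores (loaded via Biopython) and fed to a small 1D CNN with assumed layer sizes.

```python
# Illustrative BLOSUM62 encoding of a 43-residue window plus a small scoring CNN.
import numpy as np
import torch
import torch.nn as nn
from Bio.Align import substitution_matrices

BLOSUM62 = substitution_matrices.load("BLOSUM62")
AA = "ARNDCQEGHILKMFPSTWYV"

def encode_peptide(peptide: str) -> np.ndarray:
    """(20, len) matrix: column i holds the BLOSUM62 substitution scores of residue i."""
    cols = []
    for aa in peptide:
        a = aa if aa in AA else "X"                    # unknown / padding residues
        cols.append([BLOSUM62[a, b] for b in AA])
    return np.asarray(cols, dtype=np.float32).T

cnn = nn.Sequential(
    nn.Conv1d(20, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(64, 1),                                   # Khib / non-Khib logit
)

window = "A" * 21 + "K" + "G" * 21                      # toy 43-mer centred on a lysine
x = torch.from_numpy(encode_peptide(window)).unsqueeze(0)   # (1, 20, 43)
print(torch.sigmoid(cnn(x)))
```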
- New
- Research Article
- 10.1115/1.4070332
- Nov 6, 2025
- Journal of Computing and Information Science in Engineering
- Hiep Vo Dang + 1 more
Reconstructing high-fidelity fluid flow fields from sparse sensor measurements is vital for many science and engineering applications but remains challenging because of the dimensional disparity between the state and observational spaces. Due to this dimensional difference, the measurement operator becomes ill-conditioned and non-invertible, making the reconstruction of flow fields from sensor measurements extremely difficult. Although sparse optimization and machine learning address these problems to some extent, questions about their generalization and efficiency remain, particularly regarding the discretization dependence of these models. In this context, deep operator learning offers a better solution, as this approach models mappings between infinite-dimensional function spaces, enabling superior generalization and discretization-independent reconstruction. We introduce a deep operator learning model that is trained to reconstruct fluid flow fields from sparse sensor measurements. Our deep learning model employs a branch-trunk network architecture to represent the inverse measurement operator that maps sensor observations to the original flow field, a continuous function of both space and time. Our validation demonstrates that the proposed deep learning method consistently achieves high reconstruction accuracy and robustness, even in scenarios where sensor measurements are inaccurate or missing. Furthermore, the operator learning approach enables zero-shot super-resolution in both the spatial and temporal domains, offering a solution for rapid reconstruction of high-fidelity flow fields.
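The branch-trunk design can be sketched compactly: the branch network embeds the sensor readings, the trunk network embeds a continuous query coordinate (x, y, t), and their inner product gives the reconstructed field value at that point. Layer sizes and the latent dimension below are assumptions, not the paper's settings.

```python
# Minimal branch-trunk (DeepONet-style) sketch for field reconstruction from sensors.
import torch
import torch.nn as nn

class BranchTrunkNet(nn.Module):
    def __init__(self, n_sensors=32, coord_dim=3, latent=128):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, 128), nn.Tanh(),
                                    nn.Linear(128, latent))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.Tanh(),
                                   nn.Linear(128, latent))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, sensors, coords):
        # sensors: (B, n_sensors); coords: (B, Q, coord_dim) query points (x, y, t)
        b = self.branch(sensors)                        # (B, latent)
        t = self.trunk(coords)                          # (B, Q, latent)
        return torch.einsum("bl,bql->bq", b, t) + self.bias   # field values (B, Q)

model = BranchTrunkNet()
field = model(torch.randn(8, 32), torch.rand(8, 500, 3))   # reconstruct at 500 points
```

Because the trunk takes continuous coordinates, a trained model of this form can be queried on arbitrarily fine grids, which is what makes zero-shot spatial and temporal super-resolution possible.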
- New
- Research Article
- 10.1016/j.marpolbul.2025.118908
- Nov 6, 2025
- Marine pollution bulletin
- Mahsa Samkhaniani + 2 more
Deep learning-based hyperspectral oil spill detection for marine pollution monitoring in the Gulf of Mexico: A step toward marine pollution monitoring and SDG 14 compliance.
- New
- Research Article
- 10.3389/fmars.2025.1661373
- Nov 6, 2025
- Frontiers in Marine Science
- Wenmiao Shao + 6 more
To address the limitations in identifying complex anomaly patterns and the heavy reliance on manual labeling in traditional oceanographic data quality control (QC) processes, this study proposes an intelligent QC method that integrates Gated Recurrent Units (GRU) with a Mean Teacher–based semi-supervised learning framework. Unlike conventional deep learning approaches that require large amounts of high-quality labeled data, our model adopts an innovative training strategy that combines a small set of labeled samples with a large volume of unlabeled data. Leveraging consistency regularization and a teacher–student network architecture, the model effectively enhances its ability to learn anomalous features from unlabeled observations. The input incorporates multiple sources of information, including temperature, salinity, vertical gradients, depth one-hot encodings, and seasonal encodings. A bidirectional GRU combined with an attention mechanism enables precise extraction of profile structure features and accurate identification of anomalous observations. Validation on real-world profile datasets from the Bailong (BL01) moored buoy and Argo floats demonstrates that the proposed model achieves outstanding performance in detecting temperature and salinity anomalies, with ROC-AUC scores of 0.966 and 0.940, and precision–recall AUCs of 0.952 and 0.916, respectively. Manual verification shows over 90% consistency, indicating high sensitivity and robust generalization capability under challenging scenarios such as weak anomalies and structural profile shifts. Compared to existing fully supervised models, the proposed semi-supervised QC framework exhibits superior practical value in terms of labeling efficiency, anomaly modeling capacity, and cross-platform adaptability.
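A hedged sketch of the Mean Teacher scheme applied to profile quality control, with assumed feature sizes and noise model (not the authors' code): a bidirectional GRU student is trained with a supervised loss on the few labelled profiles plus a consistency loss against an exponential-moving-average teacher on unlabelled ones.

```python
# Illustrative Mean Teacher semi-supervised training of a bidirectional GRU QC model.
import copy
import torch
import torch.nn as nn

class ProfileQC(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)            # per-depth-level anomaly logit

    def forward(self, x):                               # x: (B, depth_levels, n_features)
        h, _ = self.gru(x)
        return self.head(h).squeeze(-1)

student = ProfileQC()
teacher = copy.deepcopy(student)                        # EMA copy, never trained by gradients
for p in teacher.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

labelled_x, labelled_y = torch.randn(8, 40, 6), torch.randint(0, 2, (8, 40)).float()
unlabelled_x = torch.randn(64, 40, 6)

for step in range(100):
    optimizer.zero_grad()
    sup = nn.functional.binary_cross_entropy_with_logits(student(labelled_x), labelled_y)
    noisy = unlabelled_x + 0.01 * torch.randn_like(unlabelled_x)
    cons = nn.functional.mse_loss(torch.sigmoid(student(noisy)),
                                  torch.sigmoid(teacher(unlabelled_x)))
    (sup + cons).backward()
    optimizer.step()
    with torch.no_grad():                               # EMA update of the teacher
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(0.99).add_(0.01 * ps)
```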
- New
- Research Article
- 10.1371/journal.pcsy.0000076
- Nov 5, 2025
- PLOS Complex Systems
- Dirk Gütlin + 1 more
Predictive Coding (PC) is a neuroscientific theory that has inspired a variety of training algorithms for biologically inspired deep neural networks (DNNs). However, many of these models have only been assessed in terms of their learning performance, without evaluating whether they accurately reflect the underlying mechanisms of neural learning in the brain. This study explores whether PC-inspired deep neural networks can serve as biologically plausible models of the brain. We compared two PC-inspired training objectives, a predictive and a contrastive approach, to a supervised baseline in a simple Recurrent Neural Network (RNN) architecture. We evaluated the models on key signatures of PC, including mismatch responses, formation of priors, and learning of semantic information. Our results show that the PC-inspired models, especially a locally trained predictive model, exhibited these PC-like behaviors better than a supervised or an untrained RNN. Further, we found that activity regularization evokes mismatch-response-like effects across all models, suggesting it may serve as a proxy for the energy-saving principles of PC. Finally, we find that gain control (an important mechanism in the PC framework) can be implemented using weight regularization. Overall, our findings indicate that PC-inspired models are able to capture important computational principles of predictive processing in the brain and can serve as a promising foundation for building biologically plausible artificial neural networks. This work contributes to our understanding of the relationship between artificial and biological neural networks such as the brain, and highlights the potential of PC-inspired algorithms for advancing brain modelling as well as brain-inspired machine learning.
- New
- Research Article
- 10.1371/journal.pone.0332577
- Nov 5, 2025
- PloS one
- Siham Essahraui + 7 more
Driver drowsiness is a leading cause of traffic accidents and fatalities, highlighting the urgent need for intelligent systems capable of real-time fatigue detection. Although recent advancements in machine learning (ML) and deep learning (DL) have significantly improved detection accuracy, most existing models are computationally demanding and not well-suited for deployment in resource-limited environments such as microcontrollers. While the emerging domain of TinyML presents promising avenues for such applications, there remains a substantial gap in the development of lightweight, interpretable, and high-performance models specifically tailored for embedded automotive systems. This paper introduces FastKAN-DDD, an innovative driver drowsiness detection model grounded in the Fast Kolmogorov-Arnold Network (FastKAN) architecture. The model incorporates learnable nonlinear activation functions based on radial basis functions (RBFs), facilitating efficient function approximation with a minimal number of parameters. To enhance suitability for TinyML deployment, the model is further optimized through post-training quantization techniques, including dynamic range, float-16, and weight-only quantization. Comprehensive experiments were conducted using the UTA-RLDD dataset, a real-world benchmark for driver drowsiness detection, evaluating the model across various input resolutions and quantization schemes. The FastKAN-DDD model achieved a test accuracy of 99.94%, with inference latency as low as 0.04 ms and a total memory footprint of merely 35 KB, rendering it exceptionally well-suited for real-time inference on microcontroller-based systems. Comparative evaluations further confirm that FastKAN surpasses several state-of-the-art TinyML models in terms of accuracy, computational efficiency, and model compactness. Our code is publicly available at: https://github.com/sihamess/driver_drowsiness_detection_TinyML.
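A minimal sketch of a FastKAN-style layer as generally described in the literature (an assumption, not this paper's implementation): each input feature is expanded over a fixed grid of Gaussian radial basis functions, and a learnable linear map over the basis responses yields the learned nonlinear activations.

```python
# Illustrative FastKAN-style layer built from Gaussian RBF basis expansions.
import torch
import torch.nn as nn

class FastKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_centers=8, grid=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(grid[0], grid[1], n_centers))
        self.inv_width = (n_centers - 1) / (grid[1] - grid[0])
        self.linear = nn.Linear(in_dim * n_centers, out_dim)

    def forward(self, x):                               # x: (B, in_dim)
        # Gaussian RBF responses of every input feature at every grid centre.
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.inv_width) ** 2)
        return self.linear(phi.flatten(start_dim=1))    # (B, out_dim)

model = nn.Sequential(FastKANLayer(64, 32), FastKANLayer(32, 2))   # drowsy / awake logits
logits = model(torch.randn(4, 64))
```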
- New
- Research Article
- 10.1088/2632-2153/ae1bf7
- Nov 5, 2025
- Machine Learning: Science and Technology
- Yingjie Zhao + 1 more
Microstructure evolution in matter is often modeled numerically using field or level-set solvers, mirroring the dual representation of spatiotemporal complexity in terms of pixel or voxel data and geometrical forms in vector graphics. Motivated by this analogy, as well as the structural and event-driven nature of artificial and spiking neural networks, respectively, we evaluate their performance in learning and predicting fatigue crack growth and Turing pattern development. Predictions are made from digital libraries constructed from computer simulations, which can be replaced by experimental data to lift the mathematical overconstraints of physics. Our assessment suggests that the leaky integrate-and-fire neuron model offers superior predictive accuracy with fewer parameters and less memory usage, alleviating the accuracy-cost tradeoff seen in common computer vision practice. Examination of network architectures shows that these benefits arise from its reduced weight range and sparser connections. The study highlights the capability of event-driven models in tackling problems with evolutionary bulk-phase and interface behaviors using the digital library approach.
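For reference, the leaky integrate-and-fire update that the study builds on can be written in a few lines; the decay, threshold, and reset values below are arbitrary illustrative choices.

```python
# Illustrative leaky integrate-and-fire (LIF) dynamics over a spike train.
import torch

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """inputs: (time_steps, batch, features) input currents; returns spike trains."""
    mem = torch.zeros_like(inputs[0])                   # membrane potential
    spikes = []
    for current in inputs:
        mem = beta * mem + current                      # leaky integration
        spk = (mem >= threshold).float()                # fire when threshold is crossed
        mem = mem - spk * threshold                     # soft reset after a spike
        spikes.append(spk)
    return torch.stack(spikes)

spike_train = lif_forward(torch.rand(20, 4, 128))       # 20 time steps, 4 samples
print(spike_train.mean())                               # firing rate (sparsity)
```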
- New
- Research Article
- 10.1002/advs.202516379
- Nov 5, 2025
- Advanced science (Weinheim, Baden-Wurttemberg, Germany)
- Jiaxin Liu + 4 more
Artificial neuromorphic vision systems emulate the biological visual pathway by integrating sensing, storage, and information processing within a unified architecture. Featuring high speed, low power consumption, and superior temporal resolution, they demonstrate significant potential in fields such as autonomous driving, facial recognition, and intelligent perception. As the core building block, the optoelectronic synapse plays a decisive role in determining system performance, which is closely related to its material composition, structural design, and functional characteristics. This review systematically summarizes recent progress in optoelectronic synaptic materials, device architectures, and performance evaluation methodologies. Furthermore, it explores the working mechanisms and network architectures of optoelectronic synapse-based neuromorphic vision systems, highlighting their capability in image perception, information storage, and target recognition. Current challenges, including environmental stability, large-scale array fabrication, chip-level integration, and adaptability of visual functions to real-world scenarios, are discussed in depth. Finally, the review provides an outlook on future development trends toward stable, scalable, and highly integrated optoelectronic neural vision systems, underscoring their key importance in next-generation intelligent sensing and information-processing technologies.
- New
- Research Article
- 10.1038/s41598-025-22538-6
- Nov 4, 2025
- Scientific Reports
- Mitchell Ángel Gómez-Ortega + 5 more
The design of Artificial Neural Networks (ANNs) for classification tasks has long been a topic of interest. However, defining an optimal ANN architecture remains challenging, especially when considering resource constraints and the large number of design parameters. This paper proposes an Evolutionary Bi-Level Neural Architecture Search with Training (EB-LNAST) approach that simultaneously optimizes the architecture, weights, and biases of a neural network using a bi-level optimization strategy. The upper level minimizes the network complexity penalized by the lower-level performance function, while the lower level optimizes the training parameters to minimize the loss function and maximize predictive performance. The approach is evaluated on a real-world color classification task and the WDBC dataset, demonstrating statistically significant improvements over traditional machine learning algorithms as well as advanced models. Compared to Multilayer Perceptron (MLP)-based algorithms, EB-LNAST achieves superior predictive performance when the architecture is fixed, and remains competitive, with a marginal reduction in performance of no more than 0.99%, even against MLPs optimized with extensive hyperparameter tuning, including architecture, activation functions, regularization, and optimizers. Remarkably, EB-LNAST achieves up to a 99.66% reduction in model size, highlighting its ability to discover compact and efficient architectures. EB-LNAST is a reliable alternative for generating compact and effective neural network architectures in accordance with a problem's requirements, enabling efficient exploration of the search space while maintaining or exceeding the predictive performance of state-of-the-art classification algorithms.
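A hedged sketch of the bi-level structure (not EB-LNAST itself): an outer evolutionary loop searches over hidden-layer sizes and scores each candidate by its parameter count penalised against the loss obtained after an inner loop trains that candidate's weights and biases. The mutation scheme, penalty weight, and budgets are assumptions.

```python
# Illustrative bi-level loop: outer evolution over architectures, inner weight training.
import random
import torch
import torch.nn as nn

def build_mlp(hidden_sizes, in_dim=30, n_classes=2):
    layers, prev = [], in_dim
    for h in hidden_sizes:
        layers += [nn.Linear(prev, h), nn.ReLU()]
        prev = h
    layers.append(nn.Linear(prev, n_classes))
    return nn.Sequential(*layers)

def lower_level(model, x, y, epochs=50):                # train weights and biases
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

def fitness(hidden_sizes, x, y, lam=1e-5):              # upper-level objective
    model = build_mlp(hidden_sizes)
    loss = lower_level(model, x, y)
    n_params = sum(p.numel() for p in model.parameters())
    return loss + lam * n_params                        # complexity penalised by performance

x, y = torch.randn(200, 30), torch.randint(0, 2, (200,))
best = [16, 8]
for generation in range(10):                            # simple (1+1) evolution
    child = [max(2, h + random.choice([-4, 0, 4])) for h in best]
    if fitness(child, x, y) < fitness(best, x, y):
        best = child
print("selected architecture:", best)
```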
- New
- Research Article
- 10.1161/circ.152.suppl_3.4369860
- Nov 4, 2025
- Circulation
- Rashid Alavi + 6 more
Introduction: Myocardial infarct size (IS) is the most robust endpoint for evaluating cardioprotective strategies in preclinical ischemia/reperfusion studies. The gold standard for IS quantification in preclinical studies, triphenyl tetrazolium chloride (TTC) staining, is traditionally performed manually and is prone to inter-operator variability. Here, we propose a deep learning segmentation pipeline to automate IS quantification in TTC-stained rat heart sections. Methods: We used n=165 Sprague-Dawley rats (150–300 g, 1–2 months, 69% female). Myocardial infarction (MI) was induced using a standard occlusion/reperfusion model by occluding the proximal left coronary artery for 30 minutes, followed by 3 hours of reperfusion. After euthanasia, the left ventricle (LV) was excised, transversely sliced, and incubated in 1% TTC at 37 °C for 15 minutes to distinguish necrotic myocardium (pale white) from viable tissues (brick red, Fig. 1). Manual IS was quantified by contouring infarcted and total LV areas in each slice using ImageJ (NIH, USA). To automate IS measurement from TTC-stained heart slices, we implemented a deep learning segmentation pipeline based on the mask region-based convolutional neural network (Mask R-CNN) architecture. Ground truth masks for infarcted regions and LV area were created using VGG Image Annotator. Images from n=140 rats were used for training, along with an additional 1,400 images generated by data augmentation. All training and preprocessing pipelines were implemented in Python. The Dice similarity coefficient (Dice score) was used to evaluate model performance. The best-performing Mask R-CNN model was blindly tested on 25 additional MI rats. Results: Infarct sizes calculated from Mask R-CNN-generated segmentations showed strong agreement with those from expert-annotated manual segmentations of TTC-stained LV slices (R = 0.97, p < 0.0001) when tested on heart slices from 25 additional MI rats, supporting the model's accuracy and validity. Conclusions: Our results demonstrate that deep learning segmentation accurately and automatically quantifies infarct size from TTC-stained images without operator input. This automated approach is rapid, reproducible, and unbiased, significantly reducing inter-operator variability and manual workload in preclinical studies. By streamlining infarct size assessment in preclinical cardioprotection studies, it has the potential to improve consistency and translational value in cardiac research.
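The two reported quantities, the Dice similarity coefficient used to evaluate the segmentations and infarct size as a percentage of the left-ventricular area, are straightforward to compute from masks; a minimal sketch on toy arrays (not the study's data):

```python
# Dice score between predicted and manual masks, and infarct size as % of LV area.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)

def infarct_size_percent(infarct_mask: np.ndarray, lv_mask: np.ndarray) -> float:
    return 100.0 * infarct_mask.astype(bool).sum() / lv_mask.astype(bool).sum()

# Per-slice values can then be area-weighted across all LV slices of a heart.
pred = np.zeros((256, 256), bool); pred[100:140, 100:150] = True
truth = np.zeros((256, 256), bool); truth[105:145, 100:150] = True
lv = np.zeros((256, 256), bool); lv[50:200, 50:200] = True
print(dice_score(pred, truth), infarct_size_percent(truth, lv))
```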
- New
- Research Article
- 10.64751/ajmimc.2025.v4.n4.pp95-102
- Nov 4, 2025
- American Journal of Management and IOT Medical Computing
- K. Shashidhar
The early and accurate identification of brain abnormalities plays a vital role in improving patient outcomes and treatment planning [1], [2]. This project focuses on developing an intelligent medical image analysis system capable of detecting and classifying brain tumors automatically from MRI data [3], [4]. The proposed approach utilizes advanced three-dimensional convolutional neural network (3D-CNN) architectures that effectively capture spatial and contextual information from volumetric MRI images [5]–[7]. The pipeline applies preprocessing steps such as skull stripping, normalization, and data augmentation to enhance input quality and model robustness [8], [9]. Through deep feature extraction and layer-wise learning, the model distinguishes between tumor and non-tumor regions with high precision [10], [11]. Experimental results demonstrate that the proposed deep learning framework outperforms conventional 2D models by leveraging 3D spatial relationships within the MRI scans [12]–[15]. This automated solution significantly reduces diagnostic time, assists radiologists in clinical decision-making, and contributes to improved brain healthcare through intelligent image-based diagnosis [16]–[19]. Furthermore, the integration of explainable AI techniques provides interpretability and transparency, which are crucial for clinical trust and real-world applicability [20], [25].
- New
- Research Article
- 10.3389/fradi.2025.1691048
- Nov 4, 2025
- Frontiers in Radiology
- Taner Alic + 5 more
Background: Accurate diagnosis of anterior cruciate ligament (ACL) tears on magnetic resonance imaging (MRI) is critical for timely treatment planning. Deep learning (DL) approaches have shown promise in assisting clinicians, but many prior studies are limited by small datasets, lack of surgical confirmation, or exclusion of partial tears. Aim: To evaluate the performance of multiple convolutional neural network (CNN) architectures, including a proposed CustomCNN, for ACL tear detection using a surgically validated dataset. Methods: A total of 8,086 proton density–weighted sagittal knee MRI slices were obtained from patients whose ACL status (intact, partial, or complete tear) was confirmed arthroscopically. Eleven deep learning models, including CustomCNN, DenseNet121, and InceptionResNetV2, were trained and evaluated with strict patient-level separation to avoid data leakage. Model performance was assessed using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Results: The CustomCNN model achieved the highest diagnostic performance, with an accuracy of 91.5% (95% CI: 89.5–93.1), sensitivity of 92.4% (95% CI: 90.4–94.2), and an AUC of 0.913. The inclusion of both partial and complete tears enhanced clinical relevance, and patient-level splitting reduced the risk of inflated metrics from correlated slices. Compared with previous reports, the proposed approach demonstrated competitive results while addressing key methodological limitations. Conclusion: The CustomCNN model enables rapid and reliable detection of ACL tears, including partial lesions, and may serve as a valuable decision-support tool for radiologists and orthopedic surgeons. The use of a surgically validated dataset and rigorous methodology enhances clinical credibility. Future work should expand to multicenter datasets, diverse MRI protocols, and prospective reader studies to establish generalizability and facilitate integration into real-world workflows.
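The patient-level separation emphasised above can be illustrated with scikit-learn's group-aware splitting, which keeps every slice from a given patient in the same fold; the arrays below are toy placeholders, not the study's data.

```python
# Illustrative patient-level train/test split of MRI slices to avoid data leakage.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_slices = 8086
X = np.zeros((n_slices, 1))                             # placeholder for slice features/paths
y = np.random.randint(0, 3, n_slices)                   # intact / partial / complete tear
patient_id = np.random.randint(0, 400, n_slices)        # one ID per patient

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_id))
assert set(patient_id[train_idx]).isdisjoint(patient_id[test_idx])   # no patient leakage
```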
- New
- Research Article
- 10.63995/jltn3377
- Nov 4, 2025
- Fusion of Multidisciplinary Research, An International Journal
- Revanth Singothu + 4 more
In the automotive industry, the shift to digitally integrated, intelligent manufacturing ecosystems demands real-time adaptability, transparency, and efficiency at all levels of the supply chain. In this paper, a virtualized network architecture is proposed, comprising cloud computing, network function virtualization (NFV), and cyber-physical systems (CPS), to support coordinated, low-latency communication between distributed automotive stakeholders. To model interactions within a virtualised supply-chain network between customers, distributors, showrooms, and manufacturers, a client-server simulation model was created. The system supports coordinated data transfer, on-demand resource provisioning, and seamless scaling with a small hardware footprint. The experimental assessment shows a significant increase in communication efficiency, inventory balance, and operational responsiveness. Furthermore, the framework integrates AI-driven analytics, blockchain-based traceability, and IoT-driven sensing to improve predictive and autonomous decision-making. The proposed solution demonstrates that virtualization and intelligent networking can transform standard supply chains into adaptable, transparent, and sustainable systems, offering a potential technological backbone for Industry 4.0 and future smart manufacturing.
- New
- Research Article
- 10.1038/s41598-025-22441-0
- Nov 4, 2025
- Scientific Reports
- Shama Firdaus + 2 more
Iron ores are an important mineral resource for the industrial development of an economy. Grading of ores is an important task at different stages of ore processing. The present study focuses on the grading of iron ores using a reflected-light microscopic iron ore image dataset. The ores were sourced from different mines of the Singhbhum Craton of Eastern India. The aim of the study is to develop a robust, generalized model for automating the characterization of iron ores belonging to four different grades. For this purpose, a deep learning model has been developed that implements a directed acyclic graph network architecture via a hybrid inception topology. The network has been designed to combine the feature extraction efficiencies of different pre-trained models. It uses MobileNetV2, InceptionV3, and Xception as base classifiers for feature extraction, followed by an attention channel for enhancement of the extracted features and an encoder channel for dimensionality reduction of the enhanced feature set. This encoder channel helps produce a more generalized model. The performance of the proposed model has been compared to existing state-of-the-art deep learning models (MobileNetV2, InceptionV3, and Xception) and shows very good performance, with a final classification accuracy of 97%, compared with 91%, the best accuracy among the individual base classifiers. The individual base classifiers exhibited varying performance across different classes, with certain classes experiencing notably high misclassification rates, which posed a major concern. In contrast, the proposed model significantly reduces class-wise misclassification rates compared to the base classifiers.
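A minimal sketch of the fusion topology described, with assumed feature sizes rather than the paper's exact network: concatenated backbone features are re-weighted by a squeeze-and-excitation-style attention channel, compressed by an encoder channel, and classified into the four grades.

```python
# Illustrative fusion head over frozen backbone features for ore-grade classification.
import torch
import torch.nn as nn

class FusionGradeClassifier(nn.Module):
    def __init__(self, feat_dims=(1280, 2048, 2048), n_grades=4, bottleneck=128):
        super().__init__()
        total = sum(feat_dims)
        self.attention = nn.Sequential(nn.Linear(total, total // 16), nn.ReLU(),
                                       nn.Linear(total // 16, total), nn.Sigmoid())
        self.encoder = nn.Sequential(nn.Linear(total, bottleneck), nn.ReLU())
        self.classifier = nn.Linear(bottleneck, n_grades)

    def forward(self, feats):                           # list of backbone feature vectors
        x = torch.cat(feats, dim=1)                     # concatenate base-classifier features
        x = x * self.attention(x)                       # attention channel: feature re-weighting
        return self.classifier(self.encoder(x))         # encoder channel + grade prediction

# Feature vectors would come from frozen MobileNetV2 / InceptionV3 / Xception backbones.
feats = [torch.randn(8, 1280), torch.randn(8, 2048), torch.randn(8, 2048)]
logits = FusionGradeClassifier()(feats)                 # (8, 4)
```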
- New
- Research Article
- 10.38124/ijisrt/25oct1329
- Nov 4, 2025
- International Journal of Innovative Science and Research Technology
- Jiaqi Wang
Dynamic portfolio optimization remains one of the most challenging problems in quantitative finance due to the non-stationary nature of financial markets, complex asset correlations, and the presence of transaction costs. Traditional portfolio management approaches, including Modern Portfolio Theory and mean-variance optimization, often rely on restrictive assumptions that fail to capture market dynamics effectively. This paper investigates the application of Deep Reinforcement Learning techniques to dynamic portfolio optimization, exploring how intelligent agents can learn optimal allocation strategies through continuous interaction with financial environments. We systematically review recent advances in DRL-based portfolio management, examining various algorithmic frameworks including convolutional neural network architectures and actor-critic methods. Our methodology section presents a comprehensive DRL framework employing the Ensemble of Identical Independent Evaluators topology with convolutional layers for feature extraction from historical price data. Through simulated trading experiments, we demonstrate that DRL-based approaches can adapt to changing market conditions while maintaining reasonable trading frequencies that minimize transaction costs. The results indicate that DRL agents achieve superior risk-adjusted returns compared to traditional benchmarks while exhibiting disciplined trading behavior with manageable transaction volumes. This research contributes to the growing body of literature on artificial intelligence applications in finance and provides practical insights for developing adaptive portfolio management systems.
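As an illustration of the Ensemble of Identical Independent Evaluators topology named above (a hedged sketch, not the paper's code): identical convolutional evaluators score each asset from its recent price history, a cash bias is appended, and a softmax produces the portfolio weights. Window length and filter sizes are assumptions.

```python
# Illustrative EIIE-style policy network producing portfolio weights from price tensors.
import torch
import torch.nn as nn

class EIIEPolicy(nn.Module):
    def __init__(self, n_features=3, window=50):
        super().__init__()
        self.net = nn.Sequential(                        # identical evaluator shared across assets
            nn.Conv2d(n_features, 8, kernel_size=(1, 3)), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=(1, window - 2)), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
        )
        self.cash_bias = nn.Parameter(torch.zeros(1))

    def forward(self, prices):                           # (B, features, assets, window)
        scores = self.net(prices).squeeze(-1).squeeze(1)               # (B, assets)
        cash = self.cash_bias.expand(scores.size(0), 1)
        return torch.softmax(torch.cat([cash, scores], dim=1), dim=1)  # weights sum to 1

weights = EIIEPolicy()(torch.randn(2, 3, 11, 50))        # cash + 11 assets per sample
```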