Related Topics
Articles published on Artificial Neural Networks
Authors
Select Authors
Journals
Select Journals
Duration
Select Duration
114,049 search results
Sort by Recency
- New
- Research Article
- 10.5815/ijisa.2026.01.10
- Feb 8, 2026
- International Journal of Intelligent Systems and Applications
- Aruna S K + 5 more
This research explores the automated leaf-based identification of medicinal plants, using machine learning and deep learning techniques to address the need for efficient plant classification. Motivated by the potential of medicinal plants in pharmaceutical development and healthcare, the study aims to surpass the limitations of existing methodologies through thorough experimentation and comparative analysis. The primary goal is to develop a robust, automated solution for classifying medicinal plants based on leaf morphology. The methodology encompasses acquiring diverse datasets. Specifically, Set 1 data is processed with resizing, rescaling, saturation adjustment, and noise removal, while Set 2 data additionally undergoes PCA (Principal Component Analysis). The proposed algorithms include Support Vector Machines (SVM), Convolutional Neural Networks (CNNs), YOLOv8, Vision Transformer (ViT), ResNet, and Artificial Neural Networks (ANN). The study evaluates the effectiveness of each algorithm in plant classification using metrics such as accuracy, recall, precision, and F1 score. Notably, the ResNet model achieved 93.8% and 94.8% accuracy on Set 1 and Set 2, respectively. The SVM model demonstrated 56.5% and 56.6% accuracy, while the Vision Transformer (ViT) model achieved 84.9% and 74.4%. The CNN model showed high accuracy at 96.7% and 94.8%, followed closely by the ANN model with 96.7% and 96.6%. Lastly, the YOLOv8 model achieved 96.0% and 95.1% accuracy on Set 1 and Set 2, respectively. The comparative analysis identifies CNN and ANN as the top-performing algorithms.
This research significantly contributes to the advancement of medicinal plant identification, pharmaceutical research, and environmental conservation efforts, emphasizing the potential of deep learning techniques in addressing complex classification tasks.
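As a rough illustration of the Set 2 preprocessing path described above (PCA applied after resizing and rescaling), the Python sketch below reduces flattened image vectors with PCA. The 64x64 image size, synthetic data, and 95% variance threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for a batch of leaf images already resized to 64x64 grayscale
# (the study's actual resolutions and color handling are not specified here).
images = rng.random((100, 64, 64))

# Flatten and rescale pixel intensities to [0, 1] (already in range here).
X = images.reshape(len(images), -1).astype(np.float64)

# PCA keeps the components explaining 95% of the variance, shrinking the
# feature vector each downstream classifier sees.
pca = PCA(n_components=0.95, svd_solver="full")
X_reduced = pca.fit_transform(X)

print(X_reduced.shape[1], "components retained from", X.shape[1], "pixels")
```

The reduced vectors would then feed the SVM or ANN classifiers in place of raw pixels.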
- New
- Research Article
- 10.1002/app.70535
- Feb 7, 2026
- Journal of Applied Polymer Science
- Arif Karadag + 1 more
ABSTRACT This study explores the fabrication of polymer spur gears using fused deposition modeling (FDM). It evaluates their mechanical and surface properties through experimental methods and develops predictive models using artificial neural networks (ANN). Four thermoplastic materials, PLA, PETG, ABS, and carbon fiber-reinforced PLA (Cf/PLA), were employed to produce gear models. Dimensional accuracy, surface roughness, Shore D hardness, and wear performance were analyzed under varying printing conditions. A Taguchi L9 orthogonal array was used to investigate the effects of infill density (ID), layer thickness (LT), and printing speed (PS) on gear quality. PLA exhibited the lowest surface roughness (Ra = 11.14 μm), while Cf/PLA provided the highest Shore D hardness of 84.65, along with the lowest coefficient of friction (μ = 0.1255) under optimized processing conditions. PETG and ABS showed moderate and relatively consistent performance across the evaluated metrics. Dimensional deviations remained under 2.5% for all materials, with Cf/PLA and PLA yielding the highest dimensional stability. ID was the most dominant factor, contributing up to 65.4% to hardness, 59.1% to surface roughness, and 63.7% to wear resistance, depending on material type. For dimensional accuracy, LT had the highest influence, accounting for up to 54.6% of the variation. These findings indicate that both material choice and process parameters have statistically significant effects on final gear quality. SEM analysis of worn surfaces revealed distinct wear mechanisms: ABS showed adhesive wear and thermal softening, PLA exhibited brittle fracture and thermal degradation, Cf/PLA displayed fiber pull-out and matrix cracking, while PETG demonstrated abrasive wear and layer delamination, highlighting material-specific tribological behaviors under dry sliding conditions. An ANN model was developed to predict gear performance based on material type and processing parameters.
The ANN models demonstrated excellent prediction accuracy, with R2 values exceeding 0.99 and MAPE as low as 8.97% for hardness, 9.68% for surface roughness, 10.38% for wear, and 11.24% for dimensional accuracy.
- New
- Research Article
- 10.3390/appliedmath6020023
- Feb 6, 2026
- AppliedMath
- Ioannis G Tsoulos + 2 more
Artificial neural networks are reliable machine learning models that have been applied to a multitude of practical and scientific problems in recent decades, with examples from physics, chemistry, medicine, and other fields. To apply them effectively, their parameters must be adapted using optimization techniques. However, to be effective, an optimization technique must know the range of values the network's parameters can take, so that it can train the network adequately. In most cases this is not possible, as these ranges are also significantly affected by the inputs the network receives from the problem it is called upon to solve. This situation usually results in the network becoming trapped in local minima of the error function or, even worse, in overfitting, where the training error reaches low values but the network performs poorly on the corresponding test set. To address this limitation, this work proposes a novel two-stage training approach in which a simulated annealing (SA)-based preprocessing stage automatically identifies optimal parameter value intervals before any optimization method is applied to train the network. Unlike similar approaches that rely on fixed or heuristically selected parameter bounds, the proposed preprocessing technique explores the parameter space probabilistically, guided by a temperature-controlled acceptance mechanism that balances global exploration and local refinement. The proposed method has been successfully applied to a wide range of classification and regression problems, and comparative results are presented in detail.
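The temperature-controlled acceptance mechanism described above can be sketched roughly as follows: simulated annealing searches for a weight interval [-B, B] that yields low error for networks whose parameters are drawn from it. The toy objective, cooling rate, and all other specifics are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data standing in for a training set.
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def sample_error(bound, hidden=10, draws=5):
    """Mean training error of small random-weight networks whose
    parameters are drawn uniformly from [-bound, bound]."""
    errs = []
    for _ in range(draws):
        W = rng.uniform(-bound, bound, (2, hidden))
        v = rng.uniform(-bound, bound, hidden)
        pred = np.tanh(X @ W) @ v
        errs.append(np.mean((pred - y) ** 2))
    return float(np.mean(errs))

# Simulated annealing over the bound B: worse moves are accepted with
# probability exp(-delta / T), so early high temperatures explore widely
# and the cooling schedule shifts toward local refinement.
B = 5.0
best_B, best_err = B, sample_error(B)
err, T = best_err, 1.0
for step in range(200):
    cand = abs(B + rng.normal(0, 0.5))
    cand_err = sample_error(cand)
    if cand_err < err or rng.random() < np.exp(-(cand_err - err) / T):
        B, err = cand, cand_err
        if err < best_err:
            best_B, best_err = B, err
    T *= 0.98  # geometric cooling

print(f"suggested weight interval: [-{best_B:.2f}, {best_B:.2f}]")
```

A conventional optimizer would then train the network with its weights constrained to the discovered interval.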
- New
- Research Article
- 10.3390/buildings16030683
- Feb 6, 2026
- Buildings
- Sadık Varolgüneş + 1 more
Rapid assessment of existing reinforced concrete (RC) buildings is essential for effective seismic risk mitigation, particularly in highly active regions such as Bingol, Turkiye. This study evaluates the local performance of three Rapid Visual Screening (RVS) methods—RBTY-2019, FEMA-P154, and IITK-GSDMA—using verified post-earthquake damage data from the 2003 Bingol Earthquake (SERU-2003). To overcome the limitations of traditional RVS approaches, an Artificial Neural Network (ANN) model was developed and trained with the same dataset to predict building damage levels based on structural deficiency parameters. The ANN achieved a regression coefficient above 0.90 and 100% consistency in test predictions, demonstrating superior accuracy and adaptability to local construction characteristics. A Local Scaling Function (LSF) was also proposed to translate RBTY-2019 performance scores into empirical damage states, achieving 100% consistency with observed data. The findings highlight the reliability of locally trained AI models and the importance of adapting national screening regulations to regional seismic experiences. This integrated ANN–RVS framework provides a practical, data-driven tool for local authorities to prioritize urban building stock and strengthen disaster risk management strategies.
- New
- Research Article
- 10.3390/app16031650
- Feb 6, 2026
- Applied Sciences
- Agustina Buccella + 3 more
The process of building data analytics systems, including big data systems, is currently investigated from perspectives that generally focus on specific aspects, such as data security or privacy, to the detriment of an engineering perspective on systems development. To address this limitation, our proposal focuses on developing analytics systems through a reuse-based approach, including stages ranging from problem definition to results analysis, by identifying variations and building reusable, context-based assets. This study presents the reuse process by constructing two case studies that address the water table level prediction problem in two different contexts: the irrigated period and the non-irrigated period in the same study area. The objective of this study is to demonstrate the influence of context on the performance of widely used predictive models for this problem, including long short-term memory (LSTM), artificial neural networks (ANNs), and support vector machines (SVMs), as well as the potential for reusing the developed analytics system. Additionally, we applied permutation feature importance (PFI) to determine the contribution of individual variables to the prediction. The results confirm that the same problem hypotheses yield different performance in each case in terms of coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), and mean square error (MSE). They also show that the best-performing predictive models differ for some of the hypotheses (ANN in one case and LSTM in the other), supporting the assumption that context can influence model selection and performance. Reusing assets allows for more efficient evaluation of these alternatives during development, resulting in analytics systems that are more closely aligned with reality, while also offering the advantages of software system composition.
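Permutation feature importance of the kind applied above can be sketched with scikit-learn's `permutation_importance`: each column is shuffled in turn and the resulting drop in the model's score measures that variable's contribution. The data and model below are synthetic stand-ins, not the study's water-table dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic data: feature 0 drives the target strongly, feature 1 weakly,
# feature 2 is pure noise (the real predictors are the paper's own).
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each column and measure the drop in R2: a large drop means the
# model relied on that variable.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(np.round(result.importances_mean, 3))
```

The influential feature receives a large mean importance while the noise feature stays near zero, which is the ranking the study uses to interpret its predictors.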
- New
- Research Article
- 10.1080/02533839.2026.2619704
- Feb 6, 2026
- Journal of the Chinese Institute of Engineers
- Hoang-Tien Cao + 2 more
ABSTRACT In this study, the effects of cutting parameters, namely cutting speed (v), feed rate (f), depth of cut (t), and machining diameter (d), on surface roughness in external turning of C45 steel were investigated using the Taguchi method. Taguchi analysis, Random Forest, and ANOVA were employed to identify the factors affecting surface roughness. The results revealed that feed rate had the most significant effect, followed by machining diameter, depth of cut, and cutting speed. Four regression models, including polynomial regression, Random Forest Regression (RFR), Artificial Neural Network (ANN), and Extreme Learning Machine (ELM), were developed to predict surface roughness based on cutting parameters. Among them, the ELM model demonstrated the highest prediction accuracy, characterized by a high coefficient of determination (R2) value and low mean absolute percentage error (MAPE), mean absolute error (MAE), and root mean squared error (RMSE). Therefore, the ELM model is considered the most suitable for predicting surface roughness in precision external turning operations.
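An extreme learning machine of the kind named above can be sketched in a few lines: the input-to-hidden weights are random and fixed, and only the output layer is fitted, via a single least-squares solve. The data, hidden-layer size, and activation below are illustrative assumptions, not the study's measured roughness data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic surface-roughness-style data: four inputs (v, f, t, d) and a
# smooth nonlinear response (the paper's experimental data are not public here).
X = rng.uniform(0, 1, (150, 4))
y = 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] + 0.3 * X[:, 2] * X[:, 3]

# ELM: random, fixed input weights; only the output weights are trained.
n_hidden = 40
W = rng.normal(size=(4, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights

pred = H @ beta
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"training R2 = {r2:.3f}")
```

Because training is a single linear solve rather than iterative backpropagation, ELMs fit quickly, which is one reason they are attractive for small machining datasets like this one.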
- New
- Research Article
- 10.3389/frai.2026.1701133
- Feb 6, 2026
- Frontiers in Artificial Intelligence
- Gehad Mohammed Ahmed Naji + 4 more
Purpose The study investigated the intention of account executives from Small and Medium Enterprises (SMEs) to employ artificial intelligence in their workplace. It examines the Unified Theory of Acceptance and Use of Technology (UTAUT), technological and personal characteristics, and the role of SME account executives in adopting artificial intelligence, addressing knowledge gaps in these executives' understanding of artificial intelligence. Methodology An online questionnaire was distributed in collaboration with SMEs in Malaysia to gather responses from 273 account executives working in SMEs. The data were analyzed using PLS-SEM and Artificial Neural Networks (ANN) to investigate the executives' intentions to employ artificial intelligence; demographic information was analyzed using SPSS software. Results The study's findings revealed positive and significant relationships between performance expectancy, effort expectancy, social influence, facilitating conditions, system quality, employee awareness, and personal innovativeness and the intention to use artificial intelligence. Insignificant relationships were found for time-saving features and technological self-efficacy, and a negative, significant relationship existed for internet technology (IT) features. Limitations The cross-sectional approach focuses on SMEs in Malaysia, so the study's applicability to other industries and countries is limited by differences in cultural, economic, and regulatory environments. Because participants may give socially acceptable answers rather than honest ones, the use of self-reported data raises the possibility of bias, and because the inquiry assumes a certain level of familiarity with AI technology, respondents' varying levels of digital competency may influence the findings.
Practical implications The findings of this study can help SMEs adopt artificial intelligence in their operations, particularly in accounting departments, and inter-organizational collaboration can help improve employee motivation and strengthen the intention to use artificial intelligence. Originality/value This study combines the Unified Theory of Acceptance and Use of Technology (UTAUT) with technological qualities and individual traits.
- New
- Research Article
- 10.1007/s44285-025-00061-4
- Feb 6, 2026
- Urban Lifeline
- Ali Alnaqbi + 2 more
Abstract Transverse cracking is a major distress mechanism in Continuously Reinforced Concrete Pavement (CRCP), affecting ride smoothness, service life, and maintenance strategies. This research introduces a hybrid predictive framework that couples Particle Swarm Optimization (PSO) with Gradient Boosting Machine (GBM) to enhance the accuracy of transverse crack prediction in CRCP. The analysis utilized 395 records from 33 pavement sections obtained from the Long-Term Pavement Performance (LTPP) program, encompassing structural, environmental, traffic, and performance-related parameters. PSO was applied to fine-tune critical GBM hyperparameters, namely the number of iterations, learning rate, and tree depth. The optimized PSO–GBM model demonstrated excellent performance, yielding an average RMSE of 1.62 and an R2 of 0.99 under 5-fold cross-validation, surpassing benchmark models such as conventional GBM, Random Forest, Support Vector Regression (SVR), Artificial Neural Networks (ANN), and Linear Regression. Sensitivity analysis revealed that L3 thickness, L4 thickness, and Annual Average Daily Traffic (AADT) were the most significant contributors, consistent with engineering knowledge of crack development. Validation through residual distribution and equality line plots confirmed the robustness and stability of the proposed approach across varying severity levels.
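A PSO–GBM coupling of the kind described above, in which each particle encodes GBM hyperparameters and fitness is validation error, might look like the following sketch. The synthetic data, swarm size, bounds, and PSO coefficients are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the 395 LTPP records with a few predictors.
X = rng.normal(size=(395, 5))
y = X[:, 0] ** 2 + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.2, size=395)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(p):
    """Validation RMSE of a GBM with the particle's hyperparameters:
    (n_estimators, learning_rate, max_depth)."""
    m = GradientBoostingRegressor(
        n_estimators=int(p[0]), learning_rate=float(p[1]),
        max_depth=int(p[2]), random_state=0).fit(X_tr, y_tr)
    return float(np.sqrt(np.mean((m.predict(X_va) - y_va) ** 2)))

lo, hi = np.array([20, 0.01, 1]), np.array([120, 0.3, 4])
pos = rng.uniform(lo, hi, (6, 3))            # 6 particles, 3 hyperparameters
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[pcost.argmin()]                    # global best position

for _ in range(5):                           # a few PSO iterations
    r1, r2 = rng.random((2, 6, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([fitness(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    g = pbest[pcost.argmin()]

print("best (n_estimators, learning_rate, max_depth):", np.round(g, 3))
```

A real application would use k-fold cross-validation inside `fitness`, as the paper does, at the cost of proportionally more GBM fits.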
- New
- Research Article
- 10.3390/s26031076
- Feb 6, 2026
- Sensors
- Giorgio S Senesi + 4 more
Handheld laser-induced breakdown spectroscopy (hLIBS) can be considered one of the most recent techniques for rock characterization in situ. Handheld LIBS devices are useful tools for providing “fit for purpose” qualitative and quantitative geochemical data. The analytical performance of hLIBS instruments varies significantly between similar instruments from different manufacturers. This study employed two commercial hLIBS instruments, both making use of noise reduction and multivariate partial-least-squares (PLS) calibration. Model validation was performed using the Leave-One-Out Cross-Validation (LOOCV) method. The Random Forest (RF) and Artificial Neural Network (ANN) algorithms were also employed as complementary approaches to PLS modeling, with the goal of exploring potential nonlinear relationships between spectral intensities and reference analyte concentrations. A comparison was also made with the most basic and commonly used approach, univariate analysis, demonstrating that multivariate methods achieve superior performances. To evaluate the predictive performance and quantification capability of the acquired LIBS spectra, the Pearson’s coefficient (R2) and root-mean-square error (RMSE) were employed in the analysis of 21 diverse certified geochemical reference materials (CRMs). The results achieved suggested that the spectral resolution was the key factor determining the performance of multivariate LIBS calibrations. The PLS model proved to be satisfactory for analyses performed by the higher-spectral-resolution instrument, whereas complementary algorithms were necessary to achieve better results with the lower-spectral-resolution instrument.
- New
- Research Article
- 10.1080/01496395.2026.2625770
- Feb 6, 2026
- Separation Science and Technology
- Nageswar Sahu + 1 more
ABSTRACT The co-production of melanin during Aureobasidium pullulans fermentation compromises the purity and functionality of pullulan. To address this, acid-activated date seed biochar was evaluated for the selective removal of melanin from crude pullulan. A central composite design of experiment was adopted to vary solution pH, contact time (CT), and biochar concentration (BC) in the batch adsorption process, while quantifying melanin and pullulan recovery through a rapid, nondestructive chemometric Partial Least Square Regression (PLSR) model (R2 CV 0.99) built on designed blends. Hyperparameter-optimized Artificial Neural Network (ANN) models trained on Gaussian noise-augmented data demonstrated excellent prediction performance for both melanin adsorption (R2: 0.97, R2 CV: 0.95) and polymer loss (R2: 0.99, R2 CV: 0.99). Explainable machine learning tools, such as Shapley Additive Explanation (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), were used to interpret the developed ANN models. Genetic algorithm-based multi-objective optimization suggests melanin removal of 87.56% and polymer loss of 14.55% under optimal biosorption conditions (i.e. pH: 5.16, CT: 36.59 min, and BC: 20.00 g L−1), closely matching the observed melanin removal (86.85%), polymer loss (17.42%) in the validation run. Combining green adsorbents, chemometric tools, and a machine learning model provides a robust, scalable solution for downstream processing of biopolymers.
- New
- Research Article
- 10.3390/atmos17020172
- Feb 6, 2026
- Atmosphere
- Akin Duvan + 1 more
This study develops a practical framework for forecasting long-term drought conditions in Karaman Province, a semi-arid region of Turkey, where accurate climate information is vital for water planning and agriculture. Since the area has limited rainfall records and strong year-to-year fluctuations, traditional modeling approaches often fall short. To better capture local conditions, drought intensity was defined using a simple monthly wetness anomaly measure based directly on precipitation; here, positive values indicate wetter months and negative values indicate drier ones. This makes the method suitable for regions where detailed hydrological data are scarce. Rainfall observations from 1965 to 2011 were expanded using a combination of kernel density estimation and Cholesky-based correlation reconstruction. These steps preserved the main statistical and temporal patterns of the original data while increasing sample diversity. The enriched dataset was then used to train artificial neural networks to predict both precipitation and drought intensity. The models reached R2 values of 0.76 and 0.72, with mean absolute errors of 12.8 mm and 28.4%, which represents an improvement of roughly 10–15% over traditional statistical methods. They were also able to capture the seasonal and year-to-year variability that strongly affects drought conditions in the region. To understand what drives the predictions, the model was examined with LIME, which consistently highlighted lagged rainfall and seasonal indicators as the most influential inputs. A walk-forward validation approach was also used to mimic real forecasting conditions and demonstrated that the model remains stable when projecting into the future. Overall, the proposed framework offers a reliable and practical basis for early-warning efforts and drought-management strategies in semi-arid regions like Karaman.
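The data-expansion step above (kernel density estimation combined with Cholesky-based correlation reconstruction) can be sketched roughly as follows: KDE supplies each variable's marginal distribution, and a Cholesky factor of the observed correlation matrix imposes the cross-variable dependence via a Gaussian-copula shortcut. The two synthetic "rainfall" series and pool size are assumptions, not the authors' 1965 to 2011 records or exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

# Stand-in for two correlated, skewed monthly rainfall series.
z = rng.normal(size=(500, 2))
obs = np.column_stack([np.exp(z[:, 0]), np.exp(0.8 * z[:, 0] + 0.6 * z[:, 1])])

# 1) Fit a KDE to each marginal and build an empirical inverse CDF from a
#    large sorted resample pool.
pools = [np.sort(gaussian_kde(obs[:, j]).resample(5000, seed=1)[0])
         for j in range(2)]

# 2) Draw correlated standard normals via a Cholesky factor of the observed
#    correlation matrix, then map through each marginal's quantile function.
L = np.linalg.cholesky(np.corrcoef(obs, rowvar=False))
u = norm.cdf(rng.normal(size=(1000, 2)) @ L.T)
synth = np.column_stack([np.quantile(pools[j], u[:, j]) for j in range(2)])

print("synthetic correlation:",
      round(float(np.corrcoef(synth, rowvar=False)[0, 1]), 2))
```

The synthetic rows preserve both the skewed marginals and the cross-series correlation, giving the neural network a larger yet statistically faithful training set.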
- New
- Research Article
- 10.12912/27197050/217186
- Feb 6, 2026
- Ecological Engineering & Environmental Technology
- Andang Suryana Soma + 2 more
Landslide hazard modeling using the artificial neural network approach in the Biang Loe River watershed
- New
- Research Article
- 10.1038/s41598-026-38969-8
- Feb 5, 2026
- Scientific reports
- Zhi Zhang + 6 more
Accurate streamflow prediction is critical for flood warning and water resources management in subtropical monsoon watersheds, yet optimal model selection remains challenging. This study compared seven machine learning models, including Linear Regression (LR), Gradient Boosting Regressor, Artificial Neural Network (ANN), Random Forest, Extra Trees Regressor, XGBoost (XGB), and Long Short-Term Memory (LSTM), for daily streamflow prediction in the Boluo Watershed, South China. Results demonstrated that LSTM achieved superior performance with NSE and KGE of 0.95, followed by ANN and LR. High-flow evaluation revealed that LSTM maintained robust performance under extreme conditions, achieving NSE of 0.86, 0.80, and 0.45 for flows exceeding the 90th, 95th, and 99th percentiles, respectively. For flood peaks, LSTM showed the smallest underestimation of 7 to 20%, compared to 30 to 50% for tree-based models. Feature importance analysis revealed upstream flow from Lingxia Station as the dominant predictor (importance of 0.373 for XGB), reflecting watershed memory effects whereby streamflow is predominantly controlled by antecedent hydrological conditions. Residual analysis identified pronounced heteroscedasticity with increasing prediction errors under high-flow conditions. These findings demonstrate that temporal memory mechanisms provide substantial advantages for streamflow prediction under extreme conditions, offering guidance for model selection in operational flood forecasting systems.
- New
- Research Article
- 10.37394/232015.2026.22.11
- Feb 5, 2026
- WSEAS TRANSACTIONS ON ENVIRONMENT AND DEVELOPMENT
- Shwe Yi Win + 4 more
Better resource management, combined with waste reduction and longer product life cycles, offers sustainability improvements when the Internet of Everything (IoE) is integrated into Circular Economy (CE) models. This extensive study examines both the promise and the obstacles of using the IoE to advance CE goals. IoE enables enhanced servitization, optimized resource efficiency, and successful product recovery through real-time surveillance, analytical tools, and improved product tracking, while supporting sustainability and global collaboration and leveraging artificial neural networks and convolutional neural networks. Through IoE, organizations can adopt more sustainable production and consumption patterns, since they gain the ability to track materials and products from creation to disposal. Several barriers nevertheless hinder IoE adoption in CE, including customer skepticism, financial risks, scalability limitations, and interoperability challenges. IoE systems also generate extensive data whose management raises significant security and complexity concerns. The review indicates that fully utilizing the IoE in CE requires addressing technological limitations, conducting customer behavior research, and creating supportive regulatory structures. The paper closes by suggesting research paths for overcoming these barriers and advancing IoE adoption in circular business systems as a process innovation.
- New
- Research Article
- 10.1038/s41562-025-02324-0
- Feb 5, 2026
- Nature human behaviour
- Maria K Eckstein + 3 more
A long-standing challenge for psychology and neuroscience is to understand the transformations by which past experiences shape future behaviour. Reward-guided learning is typically modelled using simple reinforcement learning (RL) algorithms. In RL, a handful of incrementally updated internal variables both summarize past rewards and drive future choice. Here we describe work that questions the assumptions of many RL models. We adopt a hybrid modelling approach that integrates artificial neural networks into interpretable cognitive architectures, estimating a maximally general form for each algorithmic component and systematically evaluating its necessity and sufficiency. Applying this method to a large dataset of human reward-learning behaviour, we show that successful models require independent and flexible memory variables that can track rich representations of the past. Using a modelling approach that combines predictive accuracy and interpretability, these results call into question an entire class of popular RL models based on incremental updating of scalar reward predictions.
- New
- Research Article
- 10.1016/j.ijbiomac.2026.150777
- Feb 5, 2026
- International journal of biological macromolecules
- Xu Zhang + 2 more
Gellan gum/Artemisia sphaerocephala Krasch gum composite films integrated with machine learning for real-time freshness assessment.
- New
- Research Article
- 10.1088/1402-4896/ae41ec
- Feb 4, 2026
- Physica Scripta
- Sobhan Bisui + 1 more
Abstract We investigate the onset of double-diffusive convection in a rotating bi-dispersive porous medium (BDPM) saturated with an electrically conducting fluid and subjected to an external magnetic field. The BDPM is heated from below, giving rise to onset of convection and is rotated about a vertical axis with a uniform angular velocity. Using linear stability analysis, we derive critical thermal and concentration Rayleigh numbers for both steady and oscillatory convection modes, systematically examining the effects of key dimensionless parameters such as the Vadasz number, Taylor number, magnetic parameter and others. To enhance predictive capabilities, 
we implement a supervised machine learning framework. A Support Vector Machine (SVM) with a radial basis kernel and an Artificial Neural Network (ANN) are trained on analytically generated data to accurately classify steady and oscillatory convection regimes, achieving 97.33% accuracy. It is demonstrated that magnetic damping, rotational restraint, and thermal coupling collectively suppress oscillatory convection, promoting stability. We have also performed a data-driven optimization to identify parameter combinations that suppress oscillatory convection. By minimizing the probability of oscillatory behavior from the SVM model, we obtain optimal physical conditions corresponding to the most stable regime.
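A minimal sketch of an RBF-kernel SVM regime classifier of the kind described above; the synthetic regime boundary and hyperparameters are illustrative assumptions, not the paper's analytically generated stability data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: two dimensionless parameters (e.g. on log scales)
# with a nonlinear boundary between steady (0) and oscillatory (1) regimes.
X = rng.uniform(-2, 2, (400, 2))
y = (X[:, 1] > np.sin(2 * X[:, 0])).astype(int)

# RBF-kernel SVM, as in the paper's classifier; C and gamma here are
# illustrative defaults, not the authors' tuned values.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
print(f"held-out accuracy = {acc:.3f}")
```

Minimizing the classifier's predicted probability of the oscillatory class over the parameter space (with `probability=True`) would mirror the data-driven optimization step the abstract mentions.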
- New
- Research Article
- 10.1038/s41598-026-38028-2
- Feb 3, 2026
- Scientific reports
- Gurmeet Saini + 2 more
A critical challenge in swarm intelligence is the effective utilization of knowledge gained during the search, a process often confounded by the risk of negative knowledge transfer. To address this, we introduce the Learning-Aided Artificial Bee Colony (LA-ABC), a novel framework guided by a Neural Knowledge Transfer mechanism for global optimization. Our framework establishes a co-evolutionary mechanism between the search process of the ABC algorithm and an online neural knowledge learning engine. LA-ABC operates on a dual-pathway architecture, probabilistically arbitrating between foundational swarm exploration and a knowledge-transfer pathway. In this second pathway, an Artificial Neural Network (ANN) learns a predictive, non-linear model from a dynamic archive of historically successful solutions. This approach enables the model to interpret the complex context of successful moves, thereby preventing the negative knowledge transfer where a beneficial pattern in one region of the search space could be detrimental in another. This learned intelligence is then operationalized through a generative operator that transfers validated positive knowledge to create high-quality candidate solutions. The process transforms the ABC from a memoryless explorer into an intelligent agent that learns to navigate the fitness landscape with high efficacy. The superiority of the LA-ABC framework is demonstrated through comprehensive benchmarking on 23 standard test functions, the competitive IEEE CEC 2019 suite, and a real-world photovoltaic parameter extraction problem. Our proposed neural knowledge transfer approach significantly outperforms 12 state-of-the-art algorithms, including ABC, L-SHADE, JSO, L-DE, L-PSO, KL-variants, and RL variants, with the significance of these improvements validated by rigorous statistical tests (Wilcoxon, Bonferroni-Dunn, Friedman, and ANOVA).
Ultimately, LA-ABC provides a robust new paradigm for integrating reinforcement learning and knowledge transfer within evolutionary computation.
- New
- Research Article
- 10.1108/jm2-09-2024-0290
- Feb 3, 2026
- Journal of Modelling in Management
- Sandesh Singh + 2 more
Purpose This study aims to investigate the potential of various machine-learning models to expedite the credit rating assignment process for non-banking finance companies (NBFCs) in India. It seeks to address the limitations of traditional credit rating agencies, which often update ratings only after an organization has begun defaulting, by providing timely information for proactive decision-making. Design/methodology/approach This study evaluates six machine learning models – support vector machine, artificial neural network, naïve Bayes, decision tree, random forest and gradient boosting decision tree – using a data set of the top 50 Indian NBFCs. The data span six fiscal years, from 2014–2015 to 2019–2020, specifically focusing on the pre-crisis period of COVID-19, and were primarily sourced from the Center for Monitoring Indian Economy database, with missing data supplemented by annual reports of the listed companies. Findings The results indicate that the gradient boosting decision tree algorithm outperforms the other models in predicting credit ratings, followed by the polynomial kernel support vector machine and random forest algorithms. This suggests that machine-learning models, particularly gradient boosting, can provide more efficient and accurate credit rating predictions for NBFCs. Practical implications The findings of this study have practical implications for NBFCs and rating agencies. By incorporating machine learning models, the credit rating process can be significantly expedited, offering timely insights for financial institutions and regulators to implement proactive measures to mitigate risk. Originality/value This study contributes to the existing literature by applying and comparing machine learning techniques to predict credit ratings, specifically for Indian NBFCs during the pre-crisis and crisis periods of COVID-19, a sector that plays a vital role in the country's financial ecosystem.
This study elucidates the potential of modern machine learning models to enhance the timeliness and accuracy of credit rating assessments in this context.
- New
- Research Article
- 10.1016/j.concog.2026.104003
- Feb 3, 2026
- Consciousness and cognition
- Chris Percy + 1 more
The phenomenal binding problem for neural networks.