Financial risk prediction framework in supply chain management using residual Conv-GRU and ensemble features
Rapid advancement in banking technology not only enhances productivity and improves people's lives but also introduces significant risks. This work proposes an effective framework to predict financial risk in Supply Chain Management (SCM). First, the required data are collected from various sources. Features are then extracted from the input data using Principal Component Analysis (PCA) and a Restricted Boltzmann Machine (RBM), alongside statistical features. The PCA, RBM, and statistical features are passed to a weighted feature-fusion phase, where the fusion weights are tuned by Revised Archimedes Optimisation (RAO). The fused features are then input to a Residual Convolutional Gated Recurrent Unit (Res-CGRU) for the prediction process, and the Res-CGRU model produces the predicted outcome. The effectiveness of the designed approach is compared with several baseline systems to confirm its superiority over others.
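The fusion stage described above can be sketched in a few lines: PCA, RBM, and per-record statistical features are extracted separately, then each group is scaled by a weight before concatenation. This is a minimal illustration with scikit-learn on placeholder data; the fixed weights stand in for the RAO-tuned weights, and the Res-CGRU predictor is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((200, 16))  # placeholder for SCM financial records

X = MinMaxScaler().fit_transform(X)           # BernoulliRBM expects values in [0, 1]
f_pca = PCA(n_components=5).fit_transform(X)  # linear components
f_rbm = BernoulliRBM(n_components=5, n_iter=20, random_state=0).fit_transform(X)
f_stat = np.column_stack([X.mean(1), X.std(1), X.min(1), X.max(1)])  # per-record statistics

# Weighted fusion: each feature group is scaled by a weight before
# concatenation. The paper tunes these weights with Revised Archimedes
# Optimisation; fixed illustrative weights are used here.
w = np.array([0.4, 0.4, 0.2])
fused = np.hstack([w[0] * f_pca, w[1] * f_rbm, w[2] * f_stat])
print(fused.shape)  # (200, 14)
```

The fused matrix would then feed the downstream recurrent predictor.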
- Research Article
36
- 10.1162/neco_a_01210
- Jul 1, 2019
- Neural Computation
A restricted Boltzmann machine (RBM) is an unsupervised machine learning bipartite graphical model that jointly learns a probability distribution over data and extracts their relevant statistical features. RBMs were recently proposed for characterizing the patterns of coevolution between amino acids in protein sequences and for designing new sequences. Here, we study how the nature of the features learned by RBM changes with its defining parameters, such as the dimensionality of the representations (size of the hidden layer) and the sparsity of the features. We show that for adequate values of these parameters, RBMs operate in a so-called compositional phase in which visible configurations sampled from the RBM are obtained by recombining these features. We then compare the performance of RBM with other standard representation learning algorithms, including principal or independent component analysis (PCA, ICA), autoencoders (AE), variational autoencoders (VAE), and their sparse variants. We show that RBMs, due to the stochastic mapping between data configurations and representations, better capture the underlying interactions in the system and are significantly more robust with respect to sample size than deterministic methods such as PCA or ICA. In addition, this stochastic mapping is not prescribed a priori as in VAE, but learned from data, which allows RBMs to show good performance even with shallow architectures. All numerical results are illustrated on synthetic lattice protein data that share similar statistical features with real protein sequences and for which ground-truth interactions are known.
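The bipartite model and stochastic visible-to-hidden mapping described in the abstract above can be made concrete with a minimal contrastive-divergence (CD-1) trainer on toy binary data. This is a generic numpy sketch of a standard binary RBM, not the specific models or data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data and a small RBM: n_v visible units, n_h hidden units.
n_v, n_h, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_v, n_h))
b, c = np.zeros(n_v), np.zeros(n_h)
data = rng.integers(0, 2, size=(50, n_v)).astype(float)

for _ in range(100):
    v0 = data
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase (CD-1): one Gibbs step back through the visible layer.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Contrastive-divergence parameter updates.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(0)
    c += lr * (ph0 - ph1).mean(0)

features = sigmoid(data @ W + c)  # learned hidden representation of each sample
print(features.shape)  # (50, 3)
```

The hidden-unit probabilities `features` play the role of the extracted statistical features the abstract refers to.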
- Research Article
- 10.1162/neco_a_01751
- Apr 17, 2025
- Neural computation
A restricted Boltzmann machine (RBM) is a two-layer neural network with shared weights and has been extensively studied for dimensionality reduction, data representation, and recommendation systems in the literature. The traditional RBM requires a probabilistic interpretation of the values on both layers and a Markov chain Monte Carlo (MCMC) procedure to generate samples during training. Contrastive divergence (CD) is an efficient way to train the RBM, but its convergence has not been proved mathematically. In this letter, we investigate the RBM by using a maximum a posteriori (MAP) estimate and the expectation-maximization (EM) algorithm. We show that the CD algorithm without MCMC is convergent for the conditional likelihood objective function. Another key contribution in this letter is the reformulation of the RBM into a deterministic model. Within the reformulated RBM, the CD algorithm without MCMC approximates the gradient descent (GD) method. This reformulated RBM can take continuous scalar and vector variables on the nodes, with flexibility in choosing the activation functions. Numerical experiments show its capability in both linear and nonlinear dimensionality reduction, and for the nonlinear case the reformulated RBM can outperform principal component analysis (PCA) when the proper activation functions are chosen. Finally, we demonstrate its application to vector-valued nodes for the CIFAR-10 data set (color images) and multivariate sequence data, which cannot be configured naturally with the traditional RBM. This work not only provides theoretical insights regarding the traditional RBM but also unifies linear and nonlinear dimensionality reduction for scalar and vector variables.
- Research Article
- 10.1080/1573062x.2025.2519094
- Jul 3, 2025
- Urban Water Journal
Water resources are a main source of improved quality of life and economic growth, both of which are connected to health and environmental practices. Diverse deep learning models have been employed for controlling water pollution; however, predicting water quality effectively relies on selecting the most appropriate features. This research therefore employs novel prediction techniques for accurately predicting water quality. Data collected from publicly available sources are fed into a feature extraction process using a restricted Boltzmann machine (RBM). The deep weighted features in the RBM are evaluated optimally by random parameter improved black widow optimization (RPIBO). A hybrid attention-based deep network (HADNet) is then designed to predict water quality by considering the average prediction score. On datasets 1 and 2, the suggested HADNet model achieves accuracies of 95.7% and 96.8%, respectively, for controlling water pollution.
- Research Article
17
- 10.32604/iasc.2023.028257
- Jan 1, 2023
- Intelligent Automation & Soft Computing
Diabetes mellitus is a metabolic disease ranked among the top 10 causes of death by the World Health Organization. Over the last few years, an alarming increase has been observed worldwide, with a 70% rise in the disease since 2000 and an 80% rise in male deaths. If untreated, it results in complications in many vital organs of the human body, which may lead to fatality. Early detection of diabetes is therefore a task of significant importance for starting timely treatment. This study introduces a methodology for classifying diabetic and normal people using an ensemble machine learning model and the feature fusion of Chi-square and principal component analysis. An ensemble model, the logistic tree classifier (LTC), is proposed, which incorporates logistic regression and an extra tree classifier through a soft voting mechanism. Experiments are also performed using several well-known machine learning algorithms to analyze their performance, including logistic regression, extra tree classifier, AdaBoost, Gaussian naive Bayes, decision tree, random forest, and k nearest neighbor. In addition, several experiments are carried out using principal component analysis (PCA) and Chi-square (Chi-2) features to analyze the influence of feature selection on the performance of machine learning classifiers. Results indicate that Chi-2 features perform better than both PCA features and the original features. However, the highest accuracy is obtained when the proposed ensemble model LTC is used with the proposed feature fusion framework, which achieves a 0.85 accuracy score, the highest of the available approaches for diabetes prediction. In addition, a statistical t-test proves the statistical significance of the proposed approach over the other approaches.
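The fusion-plus-soft-voting pipeline in the abstract above maps directly onto scikit-learn primitives. The sketch below uses synthetic data in place of the diabetes dataset; the `SelectKBest(chi2)` / `PCA` concatenation stands in for the paper's feature fusion, and the voting ensemble mirrors the proposed LTC.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import ExtraTreesClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the diabetes data.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative inputs

# Feature fusion: concatenate Chi-square-selected features with PCA features.
f_chi = SelectKBest(chi2, k=4).fit_transform(X, y)
f_pca = PCA(n_components=4).fit_transform(X)
fused = np.hstack([f_chi, f_pca])

# "Logistic tree classifier": logistic regression + extra trees, soft voting.
ltc = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("et", ExtraTreesClassifier(n_estimators=100, random_state=0))],
    voting="soft")
score = cross_val_score(ltc, fused, y, cv=5).mean()
print(round(score, 3))
```

Soft voting averages the two models' predicted class probabilities rather than their hard labels, which is what lets the ensemble outperform either member alone.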
- Research Article
- 10.1287/opre.1110.0939
- Apr 1, 2011
- Operations Research
- Conference Article
4
- 10.1109/fuzz-ieee.2019.8858804
- Jun 1, 2019
Deep learning (DL) has played a crucial role in many domains, including image and pattern recognition, feature extraction from video, and text processing. One of the quintessential elements of deep learning is the Restricted Boltzmann Machine (RBM). RBMs are capable of extracting high-level features from raw data very efficiently. Nevertheless, the feature extraction process is prone to external, unwanted noise, which introduces uncertainty into the decision-making process. Moreover, existing RBM-based DL methods are not robust enough to handle such noise in the data samples while training within layers. To tackle these drawbacks, the Fuzzy Restricted Boltzmann Machine (FRBM) has been proposed in the literature. FRBM utilizes Type-1 Fuzzy Sets (T1FS) to handle such uncertainties in the governing parameters of the system. However, the membership values of the membership functions used in T1FSs are themselves crisp. Thus, in this paper, we propose the use of Interval Type-2 Fuzzy Sets (IT2FSs) to model the RBM parameters during training, as they are efficient at handling higher levels of uncertainty. Experiments performed on MNIST digits show greater generative and discriminative capabilities of IT2FRBM over RBM and FRBM.
- Research Article
17
- 10.1016/j.compbiomed.2015.05.004
- May 12, 2015
- Computers in Biology and Medicine
A comparative study of PCA, SIMCA and Cole model for classification of bioimpedance spectroscopy measurements
- Book Chapter
10
- 10.1007/978-981-10-9023-3_147
- May 30, 2018
Infant cry recognition is a challenging task, as it is hard to determine the speech features that allow researchers to clearly separate different types of cries. Baby cry can, however, be treated as a distinct mode of speech communication, and cry types can be differentiated using Mel-Frequency Cepstral Coefficients (MFCC) with an appropriate artificial intelligence model. The stacked restricted Boltzmann machine (RBM) is popular for providing a few layers of neural networks that convert high-dimensional data to lower-dimensional data, fine-tuning the input into better initialized weights for the neural networks. Usually an RBM is combined with another deep neural network to form a deep belief network (DBN), and studies in this direction are heading towards the convolutional-RBM variant. Studies using an RBM without a convolution function to pre-train convolutional neural networks (CNN) are scarce, because backpropagation and principal component analysis can be applied directly to the CNN. In this paper, we describe a hybrid system between an RBM and a CNN for learning class-specific features for baby cry recognition using MFCC features. We achieved 78.6% accuracy on five types of baby cries by validating the proposed model on baby cry recognition.
- Book Chapter
2
- 10.5772/14754
- Apr 26, 2011
Global marketplaces, higher levels of product variety, shorter product life cycles, and demand for premium customer services all pressure a supply chain to be more efficient, more time-compressed, and more cost-effective. This has become even more critical in recent years because advances in information technology have enabled companies to improve their supply chain strategies and explore new models for managing supply chain activity. Among others, an important research area in the supply chain management literature is the coordination of the supply chain. Indeed, understanding and practicing supply chain coordination has become an essential prerequisite for staying competitive in the global race and for enhancing profitability. Hence, supply chain management needs to be defined to explicitly recognise the strategic nature of coordination and information sharing between trading partners, and to explain the dual purpose of supply chain management: to improve the performance of an individual organisation and to improve the performance of the whole supply chain. In this context, we present business process reengineering as a tool for achieving effective supply chain management, and illustrate through a case study how business process modelling can help achieve successful improvements in information sharing and the coordination of supply chain processes. It is well recognised that advances in information technologies have driven much change in supply chain and logistics management services. Traditionally, the management of information has been somewhat neglected: information transfer among members of the supply chain consisted of each placing orders with the member directly above them. This caused many problems in the supply chain, including excessive inventory holding, longer lead times, and reduced service levels, in addition to increased demand variability, or the 'Bullwhip Effect'.
Thus, as supply chain management progresses, supply chain managers are realising the need to utilise improved information sharing throughout the supply chain in order to coordinate it and remain competitive. However, coordination is not mere information sharing: information can be shared while there is no alignment of incentives, objectives, and decisions (Lee et al., 1997b). Coordination involves the alignment of decisions, objectives, and incentives, and this can be achieved only through newly reengineered business process models, which must follow the information sharing. Appropriate business processes are a prerequisite for the strategic
- Conference Article
20
- 10.1109/iembs.2006.260262
- Aug 1, 2006
This work proposes a methodology for content-based image retrieval of glioblastoma multiforme (GBM) and non-GBM tumors. Regions containing GBM lesions from 40 patients and non-GBM lesions from 20 patients were manually segmented from MR imaging studies (T1 post-contrast and T2 weighted channels) to form the training set. In addition to the two acquired channels, a composite image was formed by an image fusion method. Data reduction techniques, principal component analysis (PCA) and linear discriminant analysis (LDA), were applied to the training sets (T1 post, T2, composite, and multi-channel combining the PCA features from T1 post and T2). The retrieval accuracy was evaluated using a 'leave-one-out' strategy with query images belonging to 'normal', 'GBM', and 'non-GBM' classes. Several combinations of similarity metric and classifier were used: Euclidean similarity measures with a k-means classifier for the PCA and LDA features, and a support vector machine (SVM) nonlinear classifier (radial basis function kernel) with the PCA-derived features. The SVM classifier served as a comparison of nonlinear techniques against linear ones. Multi-channel PCA was 100% accurate in classifying a query image as either 'normal' or 'abnormal'. The highest accuracy in classification of tumor grade (GBM or other Grade 3) was 77%, achieved by SVM coupled with the PCA features. The proposed algorithm is intended to be integrated into an automated decision support system for MR brain tumor studies.
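The PCA-plus-Euclidean-similarity retrieval with a leave-one-out evaluation, as described in the abstract above, can be sketched in plain numpy. Synthetic vectors stand in for the segmented MR regions; PCA is computed via SVD on mean-centred data, and each query is classified by its nearest neighbour among the remaining projected samples.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for image feature vectors: 30 samples, 64-dim, 3 classes
# with well-separated class means.
X = rng.standard_normal((30, 64)) + np.repeat(np.arange(3), 10)[:, None] * 3.0
labels = np.repeat(np.arange(3), 10)

# PCA via SVD on mean-centred data; keep 5 components.
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Leave-one-out retrieval: classify each query by its nearest neighbour
# (Euclidean distance) among the remaining projected samples.
correct = 0
for i in range(len(Z)):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf  # exclude the query itself
    correct += labels[d.argmin()] == labels[i]
accuracy = correct / len(Z)
print(accuracy)
```

The paper's k-means and RBF-SVM variants replace the nearest-neighbour step; the PCA projection and leave-one-out loop are unchanged.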
- Research Article
606
- 10.1111/jscm.12145
- Sep 3, 2017
- Journal of Supply Chain Management
While systematic literature reviews (SLRs) have contributed substantially to developing knowledge in fields such as medicine, they have made limited contributions to developing knowledge in the supply chain management domain. This is due to the ontological and epistemological idiosyncrasies of research in supply chain management, which need to be accounted for when retrieving, selecting, and synthesizing studies in an SLR. Therefore, we propose a new paradigm for SLRs in the supply chain domain that is based on both best practice and the unique attributes of doing supply chain management research. This approach involves exploring existing studies with attention to theoretical boundaries, units of analysis, sources of data, study contexts, and definitions and the operationalization of constructs, as well as research methods, with the goal of refining or revising existing theory. This new paradigm will push supply chain management research to the frontier of current methodological standards and build a foundation for improving the contribution of future SLRs in the supply chain and adjacent management disciplines.
- Research Article
119
- 10.7554/elife.39397
- Mar 12, 2019
- eLife
Statistical analysis of evolutionary-related protein sequences provides information about their structure, function, and history. We show that Restricted Boltzmann Machines (RBM), designed to learn complex high-dimensional data and their statistical features, can efficiently model protein families from sequence information. We here apply RBM to 20 protein families, and present detailed results for two short protein domains (Kunitz and WW), one long chaperone protein (Hsp70), and synthetic lattice proteins for benchmarking. The features inferred by the RBM are biologically interpretable: they are related to structure (residue-residue tertiary contacts, extended secondary motifs (α-helixes and β-sheets) and intrinsically disordered regions), to function (activity and ligand specificity), or to phylogenetic identity. In addition, we use RBM to design new protein sequences with putative properties by composing and 'turning up' or 'turning down' the different modes at will. Our work therefore shows that RBM are versatile and practical tools that can be used to unveil and exploit the genotype-phenotype relationship for protein families.
- Conference Article
1
- 10.1117/12.2070953
- Nov 24, 2014
Facial expression recognition is an important part of the study of man-machine interaction. Principal component analysis (PCA) is an extraction method based on statistical features drawn from the global grayscale features of the whole image, but these global grayscale features are environmentally sensitive. To recognize facial expressions accurately, a method fusing principal component analysis and the local direction pattern (LDP) is introduced in this paper. First, PCA extracts the global features of the whole grayscale image; LDP extracts the local grayscale texture features of the mouth and eye regions, which contribute most to facial expression recognition, to complement PCA's global grayscale features. A Support Vector Machine (SVM) classifier is then adopted for expression classification. Experimental results demonstrate that this method classifies different expressions more effectively and achieves a higher recognition rate than the traditional method.
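The LDP descriptor mentioned above is commonly computed from the eight Kirsch compass-mask responses of each pixel's 3x3 neighbourhood, setting bits for the k strongest responses and histogramming the resulting codes. The sketch below is a generic numpy rendering of that scheme on a random crop, not the paper's exact pipeline; the PCA global branch and SVM fusion step are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def kirsch_masks():
    """Eight Kirsch compass masks, each a rotation of the east mask."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    m = np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]])
    masks = []
    for _ in range(8):
        masks.append(m.copy())
        vals = [m[i] for i in idx]
        for i, v in zip(idx, vals[-1:] + vals[:-1]):  # rotate the border ring
            m[i] = v
    return masks

MASKS = kirsch_masks()

def ldp_histogram(img, k=3):
    """Histogram of LDP codes: per pixel, set bits for the k strongest
    absolute Kirsch responses of its 3x3 neighbourhood."""
    h, w = img.shape
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            resp = np.array([(patch * m).sum() for m in MASKS])
            top = np.argsort(np.abs(resp))[-k:]
            codes.append(sum(1 << int(t) for t in top))
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

img = rng.random((16, 16))  # placeholder for a mouth/eye region crop
local = ldp_histogram(img)  # 256-bin local texture descriptor
print(local.shape, round(float(local.sum()), 3))
```

In the fused method, this local histogram would be concatenated with the PCA global features before SVM classification.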
- Research Article
7
- 10.1038/s41598-024-67226-z
- Jul 31, 2024
- Scientific Reports
Risks in the supply chain can damage many companies and organizations due to sustainability risk factors. This study evaluates supply chain risk assessment and management and then selects the best supplier for a gas company in Egypt. A comprehensive methodology draws on expert opinions, expressed as linguistic variables over spherical fuzzy numbers (SFNs), to evaluate the criteria and suppliers. Selecting the best supplier is a complex task owing to the various criteria related to supply chain risk assessment, such as supply risks, environmental risks, financial risks, regulatory risks, political risks, ethical risks, and technology risks, along with their sub-criteria. This study proposes a new combined multi-criteria decision-making (MCDM) model under a spherical fuzzy set (SFS) environment to overcome uncertainty and incomplete data in the assessment process. The MCDM methodology comprises two methods: Entropy and COmbinative Distance-based Assessment (CODAS). SFS-Entropy is used to compute the weights of the supply chain risk assessment and management criteria, and SFS-CODAS is used to rank the suppliers. The main results show that supply risks have the highest importance, followed by financial and environmental risks, while ethical risks have the lowest importance. The criteria weights were varied under sensitivity analysis to show the stability and validity of the results obtained from the suggested methodology. A comparative analysis is carried out against other MCDM methods, namely TOPSIS, VIKOR, MARCOS, COPRAS, WASPAS, and MULTIMOORA, under the SFS environment. This study can help managers and organizations select the best supplier with the lowest sustainability risks.
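The entropy weighting step used above assigns larger weights to criteria whose scores vary more across alternatives. The sketch below shows the crisp version of that computation on a made-up supplier-by-criterion score matrix; the paper performs the same step over spherical fuzzy numbers, which is not reproduced here.

```python
import numpy as np

# Illustrative decision matrix: rows are suppliers, columns are risk
# criteria (e.g. supply, financial, environmental). Values are made up.
X = np.array([[0.7, 0.4, 0.6],
              [0.5, 0.8, 0.3],
              [0.9, 0.6, 0.5],
              [0.4, 0.7, 0.8]])

P = X / X.sum(axis=0)                 # normalise each criterion column
k = 1.0 / np.log(len(X))              # entropy scaling constant (m alternatives)
E = -k * (P * np.log(P)).sum(axis=0)  # Shannon entropy per criterion
w = (1 - E) / (1 - E).sum()           # higher dispersion -> higher weight
print(np.round(w, 3), round(float(w.sum()), 3))
```

The resulting weights `w` would then feed the CODAS ranking of suppliers.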