Accurate estimation of CO2 adsorption on activated carbon with multi-layer feed-forward neural network (MLFNN) algorithm
- Research Article
- 10.1186/s13635-024-00152-9
- Feb 24, 2024
- EURASIP Journal on Information Security
With the development of Internet technology, the large number of network nodes and the dynamic structure of networks make security detection more complex, which calls for a multi-layer feedforward neural network to build a security threat detection model and improve network security protection. To this end, an entropy model is adopted to optimize the particle swarm algorithm's particle decoding, and single-peak and multi-peak test functions are used to compare particle entropy and fitness values when optimizing the weights and thresholds of the multi-layer feedforward neural network. Finally, a Suspicious Network Event Recognition Dataset obtained by data mining is sampled and used to train the entropy-model particle swarm optimization. The test results show that the proposed algorithm achieves the best mean and standard deviation on four test functions, with (mean, standard deviation) pairs of (5.712e−02, 4.805e−02), (4.914e−01, 1.066e−01), (1.577e−01, 1.343e−01), and (2.089e+01, 5.926). Overall, the algorithm proposed in the study performs best. Finally, the detection rate for attack types is calculated: 83.80% for the multi-layer feedforward neural network algorithm, 91.00% for the particle swarm optimization neural network algorithm, and 95.00% for the entropy-model particle swarm optimization algorithm. The experiments show that the research model detects network security threats with high accuracy and can provide technical support and theoretical assistance for network security protection.
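The abstract does not give implementation details for the optimizer, so the following is only a hedged sketch of the general idea of training a small feed-forward network's weights with particle swarm optimization (all names, network sizes, and hyperparameters are illustrative; the entropy-based particle decoding step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlfnn_forward(w, X, n_in=2, n_hid=4):
    """Tiny one-hidden-layer network; weights unpacked from a flat vector w."""
    i = n_in * n_hid
    W1 = w[:i].reshape(n_in, n_hid)
    b1 = w[i:i + n_hid]
    W2 = w[i + n_hid:i + 2 * n_hid]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def fitness(w, X, y):
    return np.mean((mlfnn_forward(w, X) - y) ** 2)

# Toy regression data (stand-in for the security dataset)
X = rng.normal(size=(64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

dim = 2 * 4 + 4 + 4 + 1                  # all weights and biases, flattened
n_particles, iters = 20, 50
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p, X, y) for p in pos])
g = pbest[pbest_fit.argmin()].copy()     # global best particle
g_fit0 = pbest_fit.min()                 # initial global-best fitness

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Standard PSO velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    fit = np.array([fitness(p, X, y) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    if pbest_fit.min() < fitness(g, X, y):
        g = pbest[pbest_fit.argmin()].copy()

g_fit = fitness(g, X, y)                 # never worse than g_fit0 by construction
```

Because the global best is only replaced when a strictly better particle appears, the best-so-far fitness is monotone non-increasing over iterations.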
- Research Article
- 10.2147/aabc.s8656
- Jun 15, 2010
- Advances and Applications in Bioinformatics and Chemistry : AABC
Chronic hepatitis C (CHC) patients often stop pursuing interferon-alfa and ribavirin (IFN-alfa/RBV) treatment because of the high cost and associated adverse effects. It is highly desirable, both clinically and economically, to establish tools to distinguish responders from nonresponders and to predict possible outcomes of the IFN-alfa/RBV treatments. Single nucleotide polymorphisms (SNPs) can be used to understand the relationship between genetic inheritance and IFN-alfa/RBV therapeutic response. The aim in this study was to establish a predictive model based on a pharmacogenomic approach. Our study population comprised Taiwanese patients with CHC who were recruited from multiple sites in Taiwan. The genotyping data was generated in the high-throughput genomics lab of Vita Genomics, Inc. With the wrapper-based feature selection approach, we employed multilayer feedforward neural network (MFNN) and logistic regression as a basis for comparisons. Our data revealed that the MFNN models were superior to the logistic regression model. The MFNN approach provides an efficient way to develop a tool for distinguishing responders from nonresponders prior to treatments. Our preliminary results demonstrated that the MFNN algorithm is effective for deriving models for pharmacogenomics studies and for providing the link from clinical factors such as SNPs to the responsiveness of IFN-alfa/RBV in clinical association studies in pharmacogenomics.
- Research Article
- 10.2147/oab.s8297
- Jun 1, 2010
- Open Access Bioinformatics
Correspondence: Po See Chen, Department of Psychiatry, Hospital and College of Medicine, National Cheng Kung University, Tainan, Taiwan. Tel +886 6-235-3535 ext 5190; Fax +886 6-275-9259; email chenps@mail.ncku.edu.tw
Abstract: Because patient response varies across types and even dosages of the same antidepressant, doctors currently prescribe antidepressants on a trial-and-error basis. It is therefore highly desirable, both clinically and economically, to establish tools that distinguish responders from non-responders and predict possible outcomes of antidepressant treatments. The overall effectiveness of treatment using antidepressants may thus be optimized. Common genetic polymorphisms, such as single nucleotide polymorphisms (SNPs), can be used in clinical association studies to determine the contribution of genes to drug efficacy. In this work we developed a prediction model from the analysis of clinical factors such as SNPs, age, baseline Hamilton Rating Scale for Depression (HAM-D) score, antidepressant group, and gender of depression patients, and used it to predict the responsiveness of antidepressant treatment. Using candidate genes reported in the literature, we selected four SNPs strongly relevant to antidepressant efficacy. Our study population consisted of Taiwanese patients with major depression recruited from the National Cheng Kung University Hospital. The genotyping data was generated in the high-throughput genomics lab of Vita Genomics, Inc. With the wrapper-based feature selection approach, we employed a multilayer feedforward neural network (MFNN) and logistic regression as a basis for comparison. Our data revealed that the MFNN models were superior to the logistic regression model. The MFNN approach provides an efficient way to develop a tool for distinguishing responders from nonresponders prior to treatment.
Our preliminary results showed that the MFNN algorithm is effective for deriving models for pharmacogenomics studies and for providing the link from clinical factors such as SNPs to the responsiveness of antidepressants in clinical association studies.
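The two abstracts above both use wrapper-based feature selection around a predictive model. As a hedged illustration of the wrapper idea only (not the papers' actual code), the sketch below runs greedy forward selection with a stand-in least-squares scorer where the real studies would plug in cross-validated MFNN or logistic-regression performance; all names and the synthetic data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def cv_score(X, y, cols):
    """Stand-in evaluator: least-squares fit quality on the chosen columns.
    In a true wrapper method this would be the cross-validated performance
    of the actual model (e.g. an MFNN); higher is better."""
    A = np.column_stack([X[:, list(cols)], np.ones(len(y))])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return -np.mean(resid ** 2)

def forward_select(X, y, max_feats=3):
    """Greedy forward selection: repeatedly add the feature that most
    improves the wrapped model's score; stop when nothing improves."""
    selected, best = [], -np.inf
    while len(selected) < max_feats:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        scores = {j: cv_score(X, y, selected + [j]) for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:
            break
        selected.append(j_best)
        best = scores[j_best]
    return selected

# Synthetic data: only features 0 and 3 carry signal
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
chosen = forward_select(X, y)
```

The wrapper pattern is model-agnostic: swapping `cv_score` for an MFNN evaluator changes nothing in the selection loop itself.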
- Conference Article
- 10.1109/sipnn.1994.344887
- Apr 13, 1994
This paper proposes a homotopy recursive backpropagation (BP) algorithm, which introduces the homotopy continuation method into the multilayer feedforward neural network so as to provide global convergence and a faster convergence speed than the conventional BP algorithm for the feedforward neural network.
- Research Article
- 10.1109/tcsi.2002.805733
- Dec 1, 2002
- IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications
We develop, in this brief, a new constructive learning algorithm for feedforward neural networks. We employ an incremental training procedure where training patterns are learned one by one. Our algorithm starts with a single training pattern and a single hidden-layer neuron. During the course of neural network training, when the algorithm gets stuck in a local minimum, we will attempt to escape from the local minimum by using the weight scaling technique. It is only after several consecutive failed attempts at escaping from a local minimum that we will allow the network to grow by adding a hidden-layer neuron. At this stage, we employ an optimization procedure based on quadratic/linear programming to select initial weights for the newly added neuron. Our optimization procedure tends to make the network reach the error tolerance with no or little training after adding a hidden-layer neuron. Our simulation results indicate that the present constructive algorithm can obtain neural networks very close to minimal structures (with the least possible number of hidden-layer neurons) and that convergence (to a solution) in neural network training can be guaranteed. We tested our algorithm extensively using a widely used benchmark problem, i.e., the parity problem. Many researchers have studied the neural network training problem, and many algorithms have been reported. Although there have been many successful applications, there are still a number of issues that have not been completely resolved. These include the determination of the number of hidden-layer neurons, and the convergence as well as the speed of convergence in training. We say that a training is convergent if the training algorithm can eventually find a solution (i.e., a trained neural network) to the problem at hand without human intervention. This implies, in many cases, that the training algorithm can escape from local minima which the algorithm may visit during the course of a neural network training.
Techniques reported in the literature to deal with the local minimum problem (i.e., the convergence problem) include weight scaling [6], [13] and dynamic tunneling [14]. The number of hidden-layer neurons is one of the most important considerations when solving problems using multilayered feedforward neural networks. An insufficient number of hidden-layer neurons generally results in the network's inability to solve a particular problem, while too many hidden-layer neurons may result in a network with poor generalization performance. The required number of hidden nodes depends on the dimension of the input space and the number of separable regions required to solve a particular classification (mapping) problem [11], [16]. Choosing an insufficient number of hidden neurons leads to an overdetermined problem since there are not enough
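The brief's two core operations, weight scaling to escape a local minimum and growing the hidden layer by one neuron, can be sketched in isolation as below. This is a hedged illustration: the brief selects the new neuron's weights by a quadratic/linear program, which is replaced here by a small random placeholder, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def scale_weights(W1, b1, beta=0.8):
    """Weight scaling: shrink hidden-layer weights toward zero to pull the
    network out of a saturated region (one common form of the technique)."""
    return beta * W1, beta * b1

def add_hidden_neuron(W1, b1, W2):
    """Grow the hidden layer by one neuron. A zero output weight means the
    network function is unchanged at the moment of growth, so training
    resumes from the same error."""
    n_in, _ = W1.shape
    w_new = 0.1 * rng.normal(size=(n_in, 1))       # placeholder init
    W1 = np.hstack([W1, w_new])
    b1 = np.append(b1, 0.0)
    W2 = np.vstack([W2, np.zeros((1, W2.shape[1]))])
    return W1, b1, W2

# 3 inputs, 2 hidden neurons, 1 output
W1 = rng.normal(size=(3, 2))
b1 = rng.normal(size=2)
W2 = rng.normal(size=(2, 1))
W1, b1 = scale_weights(W1, b1)
W1, b1, W2 = add_hidden_neuron(W1, b1, W2)          # now 3 hidden neurons
```

In the full algorithm these primitives sit inside a loop: train, detect stagnation, call `scale_weights`; only after several failed escapes call `add_hidden_neuron`.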
- Research Article
- 10.31185/wjcm.96
- Dec 30, 2022
- Wasit Journal of Computer and Mathematics Science
Data transmission classification is an important issue in network communications, since the classification process ultimately determines how data is organized and arranged by size and area in preparation for transmission, minimizing the transmission bandwidth and enhancing the bit rate. There are several methods and mechanisms for classifying transmitted data according to the type of data and the classification efficiency. One of the most recent classification methods is classification with artificial neural networks (ANN), one of the most dynamic and up-to-date research areas of application. ANN is a branch of artificial intelligence (AI). The neural network is trained by the backpropagation algorithm. Various combinations of functions and their effect when utilizing an ANN as a classifier were studied, and the validity of these functions for different types of datasets was analyzed. A backpropagation neural network (BPNN) trained with the Levenberg-Marquardt (LM) algorithm can be utilized as a successful data classification tool with a suitable set of training and learning functions. When the maximum likelihood method was compared with the backpropagation neural network method, the LM-trained BPNN was more accurate than the maximum likelihood method. A high predictive ability with a stable and well-functioning BPNN is possible. The multilayer feed-forward neural network algorithm is also used for classification; however, the LM-trained BPNN proves to be more effective than other classification algorithms.
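Levenberg-Marquardt is a damped least-squares training method rather than an activation function: it solves (JᵀJ + μI)Δp = Jᵀr each step and adapts the damping μ. A minimal hedged sketch on a toy curve fit (not the paper's setup; model, data, and all names are illustrative):

```python
import numpy as np

def lm_fit(f, jac, p0, x, y, iters=20, mu=1e-2):
    """Minimal Levenberg-Marquardt loop: solve (J^T J + mu I) dp = J^T r,
    accept a step only if it lowers the squared residual."""
    p = np.asarray(p0, float)
    cost = np.sum((y - f(p, x)) ** 2)
    for _ in range(iters):
        r = y - f(p, x)
        J = jac(p, x)
        dp = np.linalg.solve(J.T @ J + mu * np.eye(len(p)), J.T @ r)
        p_new = p + dp
        cost_new = np.sum((y - f(p_new, x)) ** 2)
        if cost_new < cost:
            p, cost, mu = p_new, cost_new, mu * 0.5   # good step: less damping
        else:
            mu *= 2.0                                 # bad step: more damping
    return p, cost

# Fit y = a * exp(b * x) on synthetic noiseless data
f = lambda p, x: p[0] * np.exp(p[1] * x)
jac = lambda p, x: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(1.3 * x)
cost0 = np.sum((y - f(np.array([1.0, 1.0]), x)) ** 2)
p, cost = lm_fit(f, jac, [1.0, 1.0], x, y)
```

The accept/reject rule guarantees the cost never increases; in a BPNN the residual vector and Jacobian would be taken over all network weights instead of two curve parameters.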
- Research Article
- 10.1142/s0218348x25401346
- Mar 14, 2025
- Fractals
This paper employs an artificial neural network (ANN) method to achieve an approximate solution [Formula: see text] of the one-dimensional fractional Rayleigh–Stokes equation with boundary and initial conditions. ANN is a machine learning branch that minimizes a loss function whose minimum closely approaches the solution of the governing equations. By combining the ANN output with the initial and boundary conditions, we can achieve an adequate approximation for this fractional equation. The study utilizes a multi-layer feed-forward neural network algorithm and the error backpropagation method with unsupervised learning to minimize the loss function by altering the parameters (weights and biases). The accuracy and effectiveness of the ANN technique were demonstrated through several examples. This method can approximate solutions of the Rayleigh–Stokes equation at more than 10,000 points with considerable accuracy in a short time. A comparison of the results with other existing techniques indicates that the ANN method provides a more appropriate approximation.
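One common way to "combine the ANN output with the initial and boundary conditions" is a trial solution that satisfies them by construction; whether the paper uses exactly this form is not stated in the abstract, so the sketch below is a hedged illustration for homogeneous Dirichlet conditions on [0, 1] with an illustrative initial condition u₀(x) = sin(πx):

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny untrained MLP standing in for the network N(x, t)
W1 = rng.normal(size=(2, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=8);      b2 = rng.normal()

def net(x, t):
    h = np.tanh(np.array([x, t]) @ W1 + b1)
    return h @ W2 + b2

def u0(x):
    return np.sin(np.pi * x)            # illustrative initial condition

def u_trial(x, t):
    """u(x,0) = u0(x) and u(0,t) = u(1,t) = 0 hold for ANY network output,
    because the factors (1-t) and x(1-x)t enforce them by construction."""
    return (1.0 - t) * u0(x) + x * (1.0 - x) * t * net(x, t)
```

Training then only has to minimize the PDE residual of `u_trial` at collocation points; the conditions never enter the loss.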
- Research Article
- 10.5121/ijasa.2014.2402
- Dec 31, 2014
- The International Journal of Ambient Systems and Applications
Artificial neural networks (ANN) consider classification one of the most dynamic research and application areas. ANN is a branch of Artificial Intelligence (AI). The neural network was trained by the backpropagation algorithm. The different combinations of functions and their effect when using an ANN as a classifier are studied, and the correctness of these functions is analyzed for various kinds of datasets. The backpropagation neural network (BPNN) can be used as a highly successful tool for dataset classification with a suitable combination of training, learning, and transfer functions. When the maximum likelihood method was compared with the backpropagation neural network method, the BPNN was more accurate than the maximum likelihood method. A high predictive ability with a stable and well-functioning BPNN is possible. The multilayer feed-forward neural network algorithm is also used for classification; however, the BPNN proves to be more effective than other classification algorithms.
- Conference Article
- 10.1109/iaecst57965.2022.10061885
- Dec 9, 2022
AC voltage regulating circuit is widely used in industrial field because of its simple circuit and large power capacity. When the thermal stress of the circuit is too large, it will lead to circuit faults. The traditional fault diagnosis method uses manual intervention, which is inefficient. This paper proposes an intelligent fault diagnosis technology and applies it to the AC voltage regulating circuit. Firstly, fault feature vectors are extracted, and the principle of multi-layer feedforward neural network algorithm is analyzed. Finally, the intelligent fault diagnosis technology is studied with AC voltage regulating circuit. The results have certain reference value and engineering application value.
- Research Article
- 10.18091/ijsts.v1i2.9519
- Apr 2, 2016
- International Journal of Science, Technology & Society
Artificial neural networks (ANN) form one of the most dynamic research and application areas for pattern classification. ANN is a branch of Artificial Intelligence (AI). The network can be trained by any of a number of algorithms, such as the backpropagation algorithm. Different combinations of performance functions are used for training the ANN. The backpropagation neural network (BPNN) can be a highly successful algorithm for pattern classification with a suitable combination of performance functions during training and learning. When the maximum likelihood algorithm was compared with the backpropagation neural network method, the BPNN was more accurate than the other algorithms. A multilayer feed-forward neural network algorithm is also used for pattern classification; however, the BPNN gives more effective results than other pattern classification algorithms. Handwriting Recognition (HWR) is the ability of a machine to receive and interpret handwritten input from sources such as paper documents, photographs, touch-screens, and other input devices. Various performance functions are examined in this paper to determine which function would be better to use in the network to produce an efficient and effective system. The backpropagation neural network is trained on the task of offline handwritten character recognition.
- Conference Article
- 10.1109/icnn.1996.548967
- Jun 3, 1996
A new fast learning algorithm for training multilayer feedforward neural networks is proposed, using a variable forgetting factor technique and a U-D factorization-based fading memory extended Kalman filter. In comparison with the backpropagation (BP) and extended Kalman filter (EKF) based learning algorithms, the proposed algorithm can provide much more accurate learning results in fewer iterations with fewer hidden nodes, as well as an improved convergence rate and numerical stability. In addition, it is less sensitive to the choice of initial weights and initial covariance matrix as well as other setup parameters. Simulation results on nonlinear dynamic system modeling and identification show that the new algorithm proposed here is an effective and efficient learning algorithm for feedforward neural networks.
- Conference Article
- 10.2991/icismme-15.2015.432
- Jan 1, 2015
Application Research of BP neural network in face recognition
- Conference Article
- 10.1109/scee.2018.8684098
- Dec 1, 2018
The humanoid robot is a promising application in the medical, assistive, and industrial-limb fields. Controlling the hand fingers by spoken commands is one of the important aspects of these applications. This paper presents the proposed design of a hand-finger control circuit driven by spoken letters. The design is based on controlling a servo motor for each finger with an Arduino controller. The Arduino receives a coded signal from the recognition algorithm and sends signals of different durations to the servo motors so that the hand fingers take the form corresponding to the spoken letter. These codes correspond to the statistical features of that letter, which are extracted from the spoken signals. Using the multilayer feed-forward neural network algorithm as a classifier, the recognition rate obtained in simulation is 96.33%, while the experimental recognition rate is 94.8%. The difference between these results is caused by the rejection of some patterns due to motor errors.
- Book Chapter
- 10.1007/978-1-4471-2063-6_157
- Jan 1, 1993
The backpropagation algorithm for training feedforward neural networks is extensively used to solve pattern recognition, signal processing, and control problems. However, the time required before convergence is long even for medium-sized network problems. The choice of learning rate η and momentum coefficient α also has a significant effect on the rate of convergence. In this abstract, a fast training algorithm for feedforward neural networks is outlined. The algorithm is based on adapting the learning rate by a correlation coefficient. The algorithm has been extensively tested; results show that it can significantly reduce the time required for convergence.
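The abstract does not give the exact correlation formula, so the following is a hedged sketch of one common rule in that family: grow the learning rate while successive gradients are positively correlated, shrink it when they anti-correlate (all names, the toy quadratic objective, and the growth/decay factors are illustrative):

```python
import numpy as np

def train_adaptive_lr(grad, w0, eta=0.1, up=1.05, down=0.7, iters=100):
    """Gradient descent whose step size grows while successive gradients
    are positively correlated and shrinks when they anti-correlate."""
    w = np.asarray(w0, float)
    g_prev = grad(w)
    for _ in range(iters):
        g = grad(w)
        denom = np.linalg.norm(g) * np.linalg.norm(g_prev)
        # Cosine correlation between the current and previous gradient
        corr = float(g @ g_prev / denom) if denom > 0 else 0.0
        eta = eta * (up if corr > 0 else down)
        w = w - eta * g
        g_prev = g
    return w

# Toy quadratic bowl with minimum at [1, -2]
grad = lambda w: 2.0 * (w - np.array([1.0, -2.0]))
w = train_adaptive_lr(grad, [5.0, 5.0])
```

On this quadratic the rate climbs while descent is steady, overshoots, then the anti-correlated gradient triggers a cut, so the step size settles near the stable range automatically.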
- Book Chapter
- 10.1007/11427391_89
- Jan 1, 2005
We present a recursive total least squares (RTLS) algorithm for multilayer feedforward neural networks. So far, recursive least squares (RLS) has been successfully applied to training multilayer feedforward neural networks. If the input data contains additive noise, the results from RLS can be biased. Such biased results can be avoided by using the RTLS algorithm. The RTLS algorithm described in this paper performs better than the RLS algorithm over a wide range of SNRs and involves approximately the same computational complexity of O(N^2) as the RLS algorithm.
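For reference, the plain RLS baseline that the chapter compares against looks like the hedged sketch below (the RTLS bias correction itself is omitted; the linear model, data, and all names are illustrative):

```python
import numpy as np

def rls(X, y, lam=1.0, delta=100.0):
    """Plain recursive least squares, O(N^2) per sample: gain vector,
    a-priori-error weight update, and inverse-correlation update."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                  # initial inverse correlation
    for x, d in zip(X, y):
        k = P @ x / (lam + x @ P @ x)      # gain vector
        w = w + k * (d - x @ w)            # update with a priori error
        P = (P - np.outer(k, x @ P)) / lam # rank-one inverse update
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true                             # noiseless inputs and outputs
w = rls(X, y)                              # recovers w_true here
```

With additive noise on `X` itself (errors-in-variables), this estimate becomes biased, which is exactly the situation the RTLS variant is designed to handle.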