Information Entropy of Biometric Data in a Recurrent Neural Network with Low Connectivity
In this paper, we explore the storage capacity and maximal information content of a random recurrent neural network characterized by very low connectivity. A specific set of patterns is embedded into the network according to the Hebb prescription, a fundamental principle of neural learning. We examine how properties of the network, such as its connectivity and the level of synaptic noise, influence its performance and information retention, which we evaluate through an entropy measure. Our theoretical analyses are complemented by extensive simulations, and the results are validated through comparisons with the retrieval of real biometric patterns, including retinal vessel maps and fingerprints. This approach provides deeper insight into the functionality and limitations of finite-connectivity neural networks and their applicability to the retrieval of complex, structured patterns.
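To make the setup concrete, here is a minimal sketch of Hebbian storage and retrieval in a diluted recurrent network, with retrieval quality summarized by a binary entropy measure. The connectivity level, pattern count, cue noise, and zero-temperature update rule are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, c = 1000, 5, 0.05                    # neurons, stored patterns, connectivity

xi = rng.choice([-1, 1], size=(P, N))      # random binary patterns

# Hebb prescription on a symmetric diluted graph:
# J_ij = (1 / (c N)) * sum_mu xi_i^mu xi_j^mu, kept only on existing connections.
mask = np.triu(rng.random((N, N)) < c, k=1)
mask = mask | mask.T
J = (xi.T @ xi) / (c * N) * mask

# Start from a noisy cue of pattern 0 and run zero-temperature parallel dynamics.
s = xi[0] * rng.choice([1, -1], size=N, p=[0.85, 0.15])
for _ in range(20):
    s = np.where(J @ s >= 0, 1, -1)

m = (s @ xi[0]) / N                         # retrieval overlap in [-1, 1]
p = np.clip((1 + m) / 2, 1e-12, 1 - 1e-12)  # per-neuron agreement probability
h2 = -p*np.log2(p) - (1-p)*np.log2(1-p)     # residual binary entropy (bits/neuron)
print(f"overlap m = {m:.3f}, residual entropy = {h2:.4f} bits/neuron")
```

With this low load (five patterns on roughly fifty connections per neuron) retrieval should be near-perfect; raising the pattern count or the cue noise degrades the overlap and raises the residual entropy.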
- Single Book
7
- 10.1007/978-3-642-01216-7
- Jan 1, 2009
The Sixth International Symposium on Neural Networks (ISNN 2009)
- Conference Article
1
- 10.18653/v1/p18-2002
- Jan 1, 2018
Increasing the capacity of recurrent neural networks (RNNs) usually involves enlarging the hidden layer, at a significant increase in computational cost. Recurrent neural tensor networks (RNTNs) increase capacity by using distinct hidden-layer weights for each word, but at a greater cost in memory usage. In this paper, we introduce restricted recurrent neural tensor networks (r-RNTNs), which reserve distinct hidden-layer weights for frequent vocabulary words while sharing a single set of weights among infrequent words. Perplexity evaluations show that, for fixed hidden-layer sizes, r-RNTNs improve language model performance over RNNs using only a small fraction of the parameters of unrestricted RNTNs. These results hold for r-RNTNs built on Gated Recurrent Units and Long Short-Term Memory.
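The weight-sharing restriction is easy to state in code. Below is a simplified Elman-style sketch, assuming token ids are frequency ranks so that ids below K get dedicated recurrence weights; the actual r-RNTN uses tensor interactions and GRU/LSTM variants, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, H = 10000, 100, 64     # vocab size, words with dedicated weights, hidden size

W_freq = rng.normal(0, 0.1, size=(K, H, H))   # one recurrence matrix per frequent word
W_shared = rng.normal(0, 0.1, size=(H, H))    # single matrix shared by all rare words
E = rng.normal(0, 0.1, size=(V, H))           # input embeddings

def step(h, word_id):
    # Assumes ids are frequency ranks: ids below K get their own weights.
    W = W_freq[word_id] if word_id < K else W_shared
    return np.tanh(W @ h + E[word_id])

h = np.zeros(H)
for w in [3, 42, 7071, 5]:   # toy token-id sequence
    h = step(h, w)
print(h[:4])
```

The memory saving relative to a full RNTN is the point: K per-word matrices instead of V of them, with everything else unchanged.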
- Peer Review Report
- 10.7554/elife.80680.sa2
- Oct 12, 2022
Author response: Neural learning rules for generating flexible predictions and computing the successor representation
- Peer Review Report
- 10.7554/elife.80680.sa0
- Aug 29, 2022
Editor's evaluation: Neural learning rules for generating flexible predictions and computing the successor representation
- Peer Review Report
- 10.7554/elife.80680.sa1
- Aug 29, 2022
Decision letter: Neural learning rules for generating flexible predictions and computing the successor representation
- Peer Review Report
- 10.7554/elife.83035.sa0
- Jan 8, 2023
Editor's evaluation: Neural population dynamics of computing with synaptic modulations
- Peer Review Report
- 10.7554/elife.83035.sa1
- Jan 8, 2023
Decision letter: Neural population dynamics of computing with synaptic modulations
- Conference Article
18
- 10.23919/chicc.2017.8027970
- Jul 1, 2017
Predicting PM2.5 is difficult because the variation of PM2.5 concentration is a nonlinear dynamic process. This paper therefore proposes a recurrent fuzzy neural network method for predicting PM2.5 concentration. First, the partial least squares (PLS) algorithm is used to select key input variables as a preprocessing step. Then, a recurrent fuzzy neural network model is established and trained with a gradient descent algorithm using an adaptive learning rate. Simulation results show that the recurrent fuzzy neural network achieves better prediction performance and higher interpretability than the fuzzy neural network (FNN) and the radial basis function (RBF) feedforward neural network.
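A rough sketch of the two-stage pipeline, with synthetic data standing in for PM2.5 measurements: PLS ranks candidate inputs, then a small recurrent model is trained by gradient descent whose learning rate grows after an improving epoch and shrinks otherwise. The fuzzy membership and rule layers of the paper's RFNN are replaced here by a plain recurrent layer, and gradients are truncated to one step for brevity.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
T, D = 500, 8                              # time steps, candidate input variables
X = rng.normal(size=(T, D))
y = 0.6*X[:, 0] - 0.4*X[:, 2] + 0.1*rng.normal(size=T)   # synthetic target

# Step 1: PLS selects the most informative inputs (top first-component weights).
pls = PLSRegression(n_components=2).fit(X, y)
keep = np.argsort(-np.abs(pls.x_weights_[:, 0]))[:3]
Xs = X[:, keep]

# Step 2: recurrent model, gradient descent with an adaptive learning rate.
H = 6
Win = rng.normal(0, 0.3, (H, 3))
Wrec = rng.normal(0, 0.3, (H, H))
Wout = rng.normal(0, 0.3, (1, H))
lr, prev = 0.01, np.inf
for epoch in range(50):
    h = np.zeros(H)
    loss = 0.0
    for t in range(T):
        h_new = np.tanh(Win @ Xs[t] + Wrec @ h)
        err = float(Wout @ h_new) - y[t]
        loss += err**2
        # One-step (truncated) gradients; full BPTT is omitted for brevity.
        dh = 2*err * Wout.ravel() * (1 - h_new**2)
        Wout -= lr * 2*err * h_new[None, :]
        Win -= lr * np.outer(dh, Xs[t])
        Wrec -= lr * np.outer(dh, h)
        h = h_new
    # Adaptive learning rate: expand after an improving epoch, contract otherwise.
    lr = lr*1.05 if loss < prev else lr*0.7
    prev = loss
print(f"final epoch SSE: {loss:.2f}")
```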
- Research Article
31
- 10.1214/22-aap1806
- Feb 1, 2023
- The Annals of Applied Probability
This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights. These methods, in which only the last layer of weights and a few hyperparameters are optimized, have been successfully applied in a wide range of static and dynamic learning problems. Despite the popularity of this approach in empirical tasks, important theoretical questions regarding the relation between the unknown function, the weight distribution, and the approximation rate have remained open. In this work it is proved that, as long as the unknown function, functional, or dynamical system is sufficiently regular, it is possible to draw the internal weights of the random (recurrent) neural network from a generic distribution (not depending on the unknown object) and quantify the error in terms of the number of neurons and the hyperparameters. In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well and thus provides the first mathematical explanation for their empirically observed success at learning dynamical systems.
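The object the paper analyzes is, in its simplest form, the standard echo state network recipe: draw the internal weights at random, rescale them for the echo state property, and fit only the linear readout. A minimal sketch, with an arbitrary nonlinear target system chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000                           # reservoir size, sequence length

# Random internal weights, rescaled to spectral radius < 1 (echo state property).
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
Win = rng.uniform(-0.5, 0.5, size=N)

# Illustrative target: a simple nonlinear dynamical system driven by input u.
u = rng.uniform(-1, 1, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = np.tanh(0.7*y[t-1] + u[t-1])

# Drive the reservoir; only the linear readout is fitted (ridge regression).
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + Win * u[t])
    X[t] = x
Wout = np.linalg.solve(X.T @ X + 1e-6*np.eye(N), X.T @ y)
print("train RMSE:", np.sqrt(np.mean((X @ Wout - y)**2)))
```

The paper's contribution is the theory behind this recipe: for sufficiently regular targets, a generic weight distribution suffices and the approximation error can be quantified in the number of neurons and the hyperparameters.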
- Conference Article
3
- 10.1109/yac53711.2021.9486432
- May 28, 2021
The most important step in analyzing medical records in Traditional Chinese Medicine (TCM) is their classification. The central challenge of medical record classification is perceiving the correlations among context words, finding the keywords, and making judgments based on that keyword information. In this article, we propose a TCM medical record analysis algorithm based on a recurrent convolutional neural network, which introduces a max-pooling layer into the recurrent neural network and uses it to identify the words that play an important role in text classification, thereby capturing the key components of the text. Experimental results show that the recurrent convolutional neural network achieves better results than an attention-based recurrent neural network and a traditional recurrent neural network. In addition, the recurrent convolutional neural network trains more than twice as fast as either.
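For context, a minimal numpy sketch of the recurrent convolutional architecture the abstract refers to: bidirectional recurrent context around each word embedding, followed by max pooling over time. Weights are random and untrained, sizes are illustrative, and the paper's TCM-specific details may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
V, E, H, C = 5000, 32, 24, 4               # vocab, embedding, context, classes
Emb = rng.normal(0, 0.1, (V, E))
Wl = rng.normal(0, 0.1, (H, H)); Wsl = rng.normal(0, 0.1, (H, E))
Wr = rng.normal(0, 0.1, (H, H)); Wsr = rng.normal(0, 0.1, (H, E))
Wo = rng.normal(0, 0.1, (C, 2*H + E))

def classify(token_ids):
    x = Emb[token_ids]                     # (T, E) word embeddings
    n = len(token_ids)
    cl = np.zeros((n, H)); cr = np.zeros((n, H))
    for t in range(1, n):                  # left context, scanned forward
        cl[t] = np.tanh(Wl @ cl[t-1] + Wsl @ x[t-1])
    for t in range(n-2, -1, -1):           # right context, scanned backward
        cr[t] = np.tanh(Wr @ cr[t+1] + Wsr @ x[t+1])
    rep = np.concatenate([cl, x, cr], axis=1)   # word plus both contexts
    pooled = rep.max(axis=0)               # max pooling keeps the strongest feature
                                           # per dimension, i.e. the key words
    return int(np.argmax(Wo @ pooled))

print(classify(np.array([5, 17, 203, 9])))
```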
- Research Article
34
- 10.1016/j.asoc.2022.108836
- Apr 18, 2022
- Applied Soft Computing
A Lyapunov-stability-based context-layered recurrent pi-sigma neural network for the identification of nonlinear systems
- Research Article
84
- 10.1109/78.650099
- Jan 1, 1997
- IEEE Transactions on Signal Processing
Neural networks (NNs) have been extensively applied to many signal processing problems. In particular, due to their capacity to form complex decision regions, NNs have been successfully used in adaptive equalization of digital communication channels. The mean square error (MSE) criterion, which is usually adopted in neural learning, is not directly related to the minimization of the classification error, i.e., the bit error rate (BER), which is of interest in channel equalization. Moreover, common gradient-based learning techniques are often characterized by slow convergence and numerical ill conditioning. In this paper, we introduce a novel approach to learning in recurrent neural networks (RNNs) that exploits the principle of discriminative learning, minimizing an error functional that is a direct measure of the classification error. The proposed method extends to RNNs a technique successfully applied to fast learning of feedforward NNs and is based on descent of the error functional in the space of the linear combinations of the neurons (the neuron space); its main features are faster convergence and better numerical conditioning than gradient-based approaches, while numerical stability is assured by the use of robust least squares solvers. Experiments on the equalization of PAM signals over different transmission channels demonstrate the effectiveness of the proposed approach.
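The paper's neuron-space least-squares algorithm is involved, so the sketch below illustrates only the core contrast it draws: replacing the MSE criterion with a smooth classification-oriented (BER-like) loss for a toy recurrent equalizer of PAM-2 symbols over an ISI channel. The channel coefficients, the reservoir-style recurrent state, and the logistic margin loss are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
s = rng.choice([-1.0, 1.0], size=T)        # PAM-2 symbols
r = 0.8*s + 0.4*np.roll(s, 1) + 0.1*rng.normal(size=T)   # ISI channel + noise

N = 16                                     # small random recurrent state
W = rng.normal(0, 0.4, (N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))
win = rng.normal(0, 1.0, N)
X = np.zeros((T, N)); x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + win * r[t])
    X[t] = x

w = np.zeros(N)
for _ in range(200):
    # Smooth BER-like criterion: logistic loss on the decision margin s_t * w.x_t,
    # instead of the MSE between w.x_t and s_t.
    m = s * (X @ w)
    g = -(s / (1 + np.exp(m)))[:, None] * X
    w -= 0.5 * g.mean(axis=0)
print("training error rate:", np.mean(np.sign(X @ w) != s))
```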
- Book Chapter
3
- 10.5772/6506
- Jan 1, 2009
Neural networks have good nonlinear function approximation ability and can be widely used to identify models of controlled plants. This chapter introduces theories for modeling uncertain plants using two kinds of neural networks, feedforward and recurrent, and develops two adaptive control strategies for robotic tracking control: recurrent fuzzy neural network based adaptive control (RFNNBAC) and neural network based adaptive robust control (NNBARC). In RFNNBAC, a recurrent fuzzy neural network (RFNN) is constructed by using a recurrent neural network to realize fuzzy inference; temporal relations are embedded in the network by adding feedback connections on its first layer. Two RFNNs are used to identify and control the plant, respectively, and the convergence of the proposed RFNN is analyzed based on the Lyapunov stability approach. In NNBARC, a robust controller and a neural network are combined into an adaptive robust robotic tracking control scheme. The neural network approximates the modeling uncertainties in the robotic system, and the disadvantageous effects on tracking performance, due to the approximation error of the neural network and non-measurable external disturbances, are attenuated to a prescribed level by the robust controller. The robust controller and the adaptation law of the neural network are designed based on the Hamilton-Jacobi-Isaacs (HJI) inequality theorem. The weights of the neural network are easily tuned online by a simple adaptation law, with no need for a tedious and lengthy off-line training phase. The chapter is organized as follows. First, a robust robotic tracking controller based on a neural network is designed, and its effectiveness is demonstrated by applying it to control the trajectories of a two-link robot. Second, a recurrent fuzzy neural network based adaptive control scheme is proposed and confirmed through simulation experiments on the robotic tracking control problem. Finally, conclusions are drawn.
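To ground the NNBARC idea, here is a minimal sketch of online neural compensation of unknown dynamics for a hypothetical one-degree-of-freedom plant (not the chapter's two-link robot, and without the HJI-based robust term): an RBF network approximates the model uncertainty, and its weights adapt online from the filtered tracking error, with no off-line training phase.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.01, 2000

# Unknown dynamics to be compensated (hypothetical, for illustration only).
f_true = lambda q, qd: 0.5*np.sin(q) + 0.3*qd*abs(qd)

# RBF network approximating the uncertainty f(q, qd).
centers = rng.uniform(-2, 2, size=(25, 2))
phi = lambda z: np.exp(-np.sum((centers - z)**2, axis=1) / 0.5)
W = np.zeros(25)

q, qd = 0.0, 0.0
Kd, lam, gamma = 8.0, 4.0, 2.0            # controller and adaptation gains
for k in range(steps):
    t = k*dt
    qr, qrd, qrdd = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
    e, ed = q - qr, qd - qrd
    s = ed + lam*e                         # filtered tracking error
    z = np.array([q, qd])
    u = qrdd - lam*ed - Kd*s - W @ phi(z)  # feedforward + PD + NN compensation
    qdd = u + f_true(q, qd)                # unit-mass plant with unknown term
    qd += qdd*dt
    q += qd*dt
    W += gamma * phi(z) * s * dt           # Lyapunov-motivated online adaptation law
print(f"final tracking error: {abs(q - np.sin(steps*dt)):.4f}")
```

The adaptation law follows from the standard Lyapunov argument: with the candidate function combining the squared filtered error and the weight estimation error, the update above makes its derivative negative semidefinite.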
- Single Book
12
- 10.1007/3-540-45720-8
- Jan 1, 2001
Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence
- Conference Article
5
- 10.1109/icnc.2013.6817951
- Jul 1, 2013
The Random Neural Network (RNN) is a recurrent neural network in which neurons interact by exchanging excitatory and inhibitory spiking signals. The stochastic excitatory and inhibitory interactions in the network make the RNN an excellent modeling tool for various interacting entities. It has been applied in a number of areas, such as optimization, image processing, communication systems, simulation, pattern recognition, and classification. In this paper, we briefly describe the RNN model and some learning algorithms for it. We discuss how the RNN with reinforcement learning was successfully applied to the Cognitive Packet Network (CPN) architecture to offer users QoS-driven packet delivery services. Experiments conducted on a 26-node testbed clearly demonstrate the learning capability of RNNs in CPN.
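The RNN here is Gelenbe's spiking random neural network, whose steady state solves a coupled set of signal-flow equations. A small fixed-point sketch with random illustrative rates and routing probabilities (the learning algorithms the paper discusses are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
r = rng.uniform(1, 2, n)                   # neuron firing rates
# Routing: p_plus[j, i] excitatory, p_minus[j, i] inhibitory; each row of the
# combined matrix sums to at most 1, the remainder leaving the network.
P = rng.dirichlet(np.ones(2*n + 1), size=n)
p_plus, p_minus = P[:, :n], P[:, n:2*n]
Lam = rng.uniform(0.1, 0.5, n)             # external excitatory arrival rates
lam = rng.uniform(0.0, 0.2, n)             # external inhibitory arrival rates

# Gelenbe's steady-state equations, q_i = lambda+_i / (r_i + lambda-_i),
# solved by fixed-point iteration over the coupled arrival rates.
q = np.zeros(n)
for _ in range(200):
    lp = Lam + (q * r) @ p_plus            # total excitatory arrivals at each neuron
    lm = lam + (q * r) @ p_minus           # total inhibitory arrivals at each neuron
    q = np.minimum(lp / (r + lm), 1.0)
print("steady-state excitation probabilities:", np.round(q, 3))
```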