Information Entropy of Biometric Data in a Recurrent Neural Network with Low Connectivity.

Abstract

In this paper, we explore the storage capacity and maximal information content of a random recurrent neural network characterized by very low connectivity. A specific set of patterns is embedded into the network according to the Hebb prescription, a fundamental principle of neural learning. We thoroughly examine how various properties of the network, such as its connectivity and the level of synaptic noise, influence its performance and information retention capabilities, which are evaluated through an entropy measure. Our theoretical analyses are complemented by extensive simulations, and the results are validated through comparisons with the retrieval of real biometric patterns, including retinal vessel maps and fingerprints. This comprehensive approach provides deeper insights into the functionality and limitations of finite-connectivity neural networks and their applicability to the retrieval of complex, structured patterns.
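As a rough illustration of this setup (not the paper's exact model, normalization, or parameters), the sketch below embeds random ±1 patterns into a randomly diluted network with the Hebb rule, runs noisy retrieval dynamics, and converts the final overlap into bits per neuron via the binary entropy function. The connectivity c, noise level T, and the 1/(cN) normalization are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500      # neurons (binary, +/-1)
P = 10       # number of stored patterns
c = 0.10     # connection probability: low connectivity
T = 0.1      # synaptic noise level ("temperature")

# Random binary patterns to embed (rows are patterns).
xi = rng.choice([-1, 1], size=(P, N))

# Sparse random connectivity mask with no self-connections.
mask = (rng.random((N, N)) < c) & ~np.eye(N, dtype=bool)

# Hebb prescription restricted to the existing connections
# (normalization by c*N is one common convention for diluted networks).
J = (xi.T @ xi) / (c * N) * mask

def noisy_dynamics(s, steps=30):
    """Parallel stochastic updates: each unit follows its local field up to noise T."""
    for _ in range(steps):
        h = J @ s
        p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))
        s = np.where(rng.random(N) < p_up, 1, -1)
    return s

# Retrieval test: start from a corrupted copy of pattern 0, measure the final overlap.
s = xi[0] * np.where(rng.random(N) < 0.2, -1, 1)   # flip roughly 20% of the bits
m = noisy_dynamics(s) @ xi[0] / N

# One common way to turn the overlap into retrieved information (bits per neuron).
p_err = (1.0 - m) / 2.0
H2 = 0.0 if p_err in (0.0, 1.0) else -p_err * np.log2(p_err) - (1.0 - p_err) * np.log2(1.0 - p_err)
print(f"overlap m = {m:.3f}, retrieved information = {1.0 - H2:.3f} bits per neuron")
```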

Similar Papers
  • Single Book
  • Cited by 7
  • DOI: 10.1007/978-3-642-01216-7
The Sixth International Symposium on Neural Networks (ISNN 2009)
  • Jan 1, 2009

  • Conference Article
  • Cited by 1
  • DOI: 10.18653/v1/p18-2002
Restricted Recurrent Neural Tensor Networks: Exploiting Word Frequency and Compositionality
  • Jan 1, 2018
  • Alexandre Salle + 1 more

Increasing the capacity of recurrent neural networks (RNN) usually involves augmenting the size of the hidden layer, with a significant increase in computational cost. Recurrent neural tensor networks (RNTN) increase capacity using distinct hidden layer weights for each word, but at a greater cost in memory usage. In this paper, we introduce restricted recurrent neural tensor networks (r-RNTN), which reserve distinct hidden layer weights for frequent vocabulary words while sharing a single set of weights for infrequent words. Perplexity evaluations show that for fixed hidden layer sizes, r-RNTNs improve language model performance over RNNs using only a small fraction of the parameters of unrestricted RNTNs. These results hold for r-RNTNs using Gated Recurrent Units and Long Short-Term Memory.
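A minimal sketch of the frequency-based weight sharing idea, not the authors' implementation: the full tensor (multiplicative) interaction of an RNTN is omitted, and only the selection of a private versus shared recurrent matrix by word-frequency rank is shown. All sizes and names (K, W_rec, W_in) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, H = 10_000, 100, 64    # vocab size, words given private weights, hidden size

# Hypothetical parameter layout: K private recurrent matrices plus one shared matrix
# (slot K) for all infrequent words; word ids are assumed sorted by frequency.
W_rec = rng.normal(scale=0.1, size=(K + 1, H, H))
W_in = rng.normal(scale=0.1, size=(V, H))
b = np.zeros(H)

def step(h, word_id):
    """One r-RNTN-style step: pick a private or shared recurrent matrix by frequency rank."""
    slot = word_id if word_id < K else K
    return np.tanh(W_rec[slot] @ h + W_in[word_id] + b)

h = np.zeros(H)
for w in [3, 57, 9800, 12]:     # toy word ids; 9800 falls back to the shared matrix
    h = step(h, w)
print(h[:5])
```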

  • Peer Review Report
  • DOI: 10.7554/elife.80680.sa2
Author response: Neural learning rules for generating flexible predictions and computing the successor representation
  • Oct 12, 2022
  • Ching Fang + 3 more

  • Peer Review Report
  • DOI: 10.7554/elife.80680.sa0
Editor's evaluation: Neural learning rules for generating flexible predictions and computing the successor representation
  • Aug 29, 2022
  • Srdjan Ostojic

  • Peer Review Report
  • DOI: 10.7554/elife.80680.sa1
Decision letter: Neural learning rules for generating flexible predictions and computing the successor representation
  • Aug 29, 2022
  • Stefano Recanatesi + 1 more

  • Peer Review Report
  • DOI: 10.7554/elife.83035.sa0
Editor's evaluation: Neural population dynamics of computing with synaptic modulations
  • Jan 8, 2023
  • Gianluigi Mongillo

  • Peer Review Report
  • DOI: 10.7554/elife.83035.sa1
Decision letter: Neural population dynamics of computing with synaptic modulations
  • Jan 8, 2023
  • Omri Barak + 1 more

  • Conference Article
  • Cited by 18
  • DOI: 10.23919/chicc.2017.8027970
Prediction of PM2.5 concentration based on recurrent fuzzy neural network
  • Jul 1, 2017
  • Shanshan Zhou + 2 more

The prediction of PM2.5 is difficult because the variation of PM2.5 concentration is a nonlinear dynamic process. Therefore, a recurrent fuzzy neural network prediction method is proposed in this paper to predict the PM2.5 concentration. Firstly, the partial least squares (PLS) algorithm is used to select key input variables as a preprocessing step. Then, a recurrent fuzzy neural network model is established and trained with a gradient descent algorithm that uses an adaptive learning rate. Simulation results show that the recurrent fuzzy neural network has better prediction performance and higher interpretability than the fuzzy neural network (FNN) and the radial basis function (RBF) feedforward neural network.
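The preprocessing step is the easiest part of this pipeline to illustrate. The sketch below ranks candidate inputs with scikit-learn's PLSRegression on synthetic data and keeps the most informative ones; the recurrent fuzzy neural network and its adaptive-learning-rate training are not reproduced here, and all data and sizes are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Toy data: 8 candidate meteorological / pollutant variables, PM2.5 as the target.
X = rng.normal(size=(500, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 6] + 0.1 * rng.normal(size=500)

# Fit PLS and rank the inputs by the magnitude of their regression coefficients.
pls = PLSRegression(n_components=3).fit(X, y)
importance = np.abs(pls.coef_).ravel()
key_vars = np.argsort(importance)[::-1][:3]   # keep the 3 most informative variables
print("selected input variables:", key_vars)
```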

  • Research Article
  • Cited by 31
  • DOI: 10.1214/22-aap1806
Approximation bounds for random neural networks and reservoir systems
  • Feb 1, 2023
  • The Annals of Applied Probability
  • Lukas Gonon + 2 more

This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights. These methods, in which only the last layer of weights and a few hyperparameters are optimized, have been successfully applied in a wide range of static and dynamic learning problems. Despite the popularity of this approach in empirical tasks, important theoretical questions regarding the relation between the unknown function, the weight distribution, and the approximation rate have remained open. In this work it is proved that, as long as the unknown function, functional, or dynamical system is sufficiently regular, it is possible to draw the internal weights of the random (recurrent) neural network from a generic distribution (not depending on the unknown object) and quantify the error in terms of the number of neurons and the hyperparameters. In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well and thus provides the first mathematical explanation for their empirically observed success at learning dynamical systems.
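The echo state network setting analyzed here can be sketched in a few lines: internal weights are drawn at random and left fixed, and only a linear readout is fitted by ridge regression. The task, reservoir size, spectral radius, and regularization below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a sine wave.
T, H = 2000, 200
u = np.sin(0.1 * np.arange(T))[:, None]
y = np.sin(0.1 * (np.arange(T) + 1))[:, None]

# Randomly generated, fixed internal weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, size=(H, 1))
W = rng.normal(size=(H, H))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale the spectral radius below 1

# Drive the reservoir and collect its states.
x = np.zeros(H)
states = np.zeros((T, H))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Ridge-regression readout on the collected states (a warm-up period is discarded).
warm, ridge = 100, 1e-6
A = states[warm:]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(H), A.T @ y[warm:])
print("train MSE:", float(np.mean((A @ W_out - y[warm:]) ** 2)))
```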

  • Conference Article
  • Cited by 3
  • DOI: 10.1109/yac53711.2021.9486432
TCM Medical Record Analysis Algorithm Based on Recurrent Convolutional Neural Network
  • May 28, 2021
  • Yu Zhang + 4 more

The most important step in the process of medical record analysis in TCM is the classification of medical records. The biggest challenge of medical record classification is to perceive the correlation between context words, find keywords, and make judgments based on the keyword information. In this article, we propose a TCM medical record analysis algorithm based on a recurrent convolutional neural network, which introduces a max-pooling layer into the recurrent neural network and uses it to determine the words that play an important role in text classification, capturing the key components of the text. Experimental results show that the recurrent convolutional neural network achieves better results than the attention recurrent neural network and the traditional recurrent neural network, and is more than twice as fast to train.
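A minimal sketch of the max-pooling-over-time idea described above, with untrained toy weights and hypothetical sizes: the recurrent layer produces one hidden vector per token, and an element-wise maximum over time lets strongly activated "keyword" positions dominate the classification.

```python
import numpy as np

rng = np.random.default_rng(0)

V, E, H, C = 5000, 32, 64, 4    # vocab, embedding, hidden, number of record classes
emb = rng.normal(scale=0.1, size=(V, E))
W_xh = rng.normal(scale=0.1, size=(H, E))
W_hh = rng.normal(scale=0.1, size=(H, H))
W_out = rng.normal(scale=0.1, size=(C, H))

def classify(token_ids):
    """Run the recurrent layer over a record, then max-pool the hidden states over time.

    The element-wise maximum keeps, for every hidden unit, the time step at which it
    was most active, so strongly activated keyword positions dominate the decision."""
    h = np.zeros(H)
    hs = []
    for t in token_ids:
        h = np.tanh(W_xh @ emb[t] + W_hh @ h)
        hs.append(h)
    pooled = np.max(np.stack(hs), axis=0)
    return int(np.argmax(W_out @ pooled))

print(classify([12, 873, 44, 1901, 7]))   # toy token ids for one medical record
```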

  • Research Article
  • Cited by 34
  • DOI: 10.1016/j.asoc.2022.108836
A Lyapunov-stability-based context-layered recurrent pi-sigma neural network for the identification of nonlinear systems
  • Apr 18, 2022
  • Applied Soft Computing
  • Rajesh Kumar

  • Research Article
  • Cited by 84
  • DOI: 10.1109/78.650099
Fast adaptive digital equalization by recurrent neural networks
  • Jan 1, 1997
  • IEEE Transactions on Signal Processing
  • R Parisi + 3 more

Neural networks (NNs) have been extensively applied to many signal processing problems. In particular, due to their capacity to form complex decision regions, NNs have been successfully used in adaptive equalization of digital communication channels. The mean square error (MSE) criterion, which is usually adopted in neural learning, is not directly related to the minimization of the classification error, i.e., bit error rate (BER), which is of interest in channel equalization. Moreover, common gradient-based learning techniques are often characterized by slow speed of convergence and numerical ill conditioning. In this paper, we introduce a novel approach to learning in recurrent neural networks (RNNs) that exploits the principle of discriminative learning, minimizing an error functional that is a direct measure of the classification error. The proposed method extends to RNNs a technique applied with success to fast learning of feedforward NNs and is based on the descent of the error functional in the space of the linear combinations of the neurons (the neuron space); its main features are higher speed of convergence and better numerical conditioning w.r.t. gradient-based approaches, whereas numerical stability is assured by the use of robust least squares solvers. Experiments regarding the equalization of PAM signals in different transmission channels are described, which demonstrate the effectiveness of the proposed approach.
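The sketch below reproduces only the problem setup, not the proposed neuron-space discriminative method: PAM symbols are passed through a dispersive test channel, a plain linear equalizer is fitted by least squares (i.e., the MSE criterion the paper argues is only indirectly related to classification error), and the bit error rate is measured as the actual figure of merit. The channel taps, noise level, and equalizer length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary PAM symbols through a dispersive test channel with additive noise.
n = 20_000
symbols = rng.choice([-1.0, 1.0], size=n)
channel = np.array([0.3482, 0.8704, 0.3482])   # illustrative channel impulse response
received = np.convolve(symbols, channel, mode="same") + 0.1 * rng.normal(size=n)

# Baseline: a linear transversal equalizer fitted by least squares (MSE criterion).
taps = 7
delay = taps // 2
X = np.lib.stride_tricks.sliding_window_view(received, taps)
d = symbols[delay : n - delay]
w = np.linalg.lstsq(X, d, rcond=None)[0]

# The figure of merit in equalization is the bit error rate, not the MSE itself.
ber = np.mean(np.sign(X @ w) != d)
print(f"BER of the MSE-trained linear equalizer: {ber:.4f}")
```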

  • Book Chapter
  • Cited by 3
  • DOI: 10.5772/6506
Adaptive Control Based On Neural Network
  • Jan 1, 2009
  • Sun Wei + 3 more

Neural networks have good nonlinear function approximation ability and can be widely used to identify models of controlled plants. This chapter introduces the theory of modeling uncertain plants with two kinds of neural networks, feed-forward and recurrent, and develops two adaptive control strategies for robotic tracking control: recurrent fuzzy neural network based adaptive control (RFNNBAC) and neural network based adaptive robust control (NNBARC). In RFNNBAC, a recurrent fuzzy neural network (RFNN) is constructed by using a recurrent neural network to realize fuzzy inference; temporal relations are embedded in the network by adding feedback connections to its first layer. Two RFNNs are used to identify and control the plant, respectively, and the convergence of the proposed RFNN is analyzed based on the Lyapunov stability approach. In NNBARC, a robust controller and a neural network are combined into an adaptive robust robotic tracking control scheme. The neural network approximates the modeling uncertainties in the robotic system, and the adverse effects on tracking performance, due to the network's approximation error and non-measurable external disturbances, are attenuated to a prescribed level by the robust controller. The robust controller and the adaptation law of the neural network are designed based on the Hamilton-Jacobi-Isaacs (HJI) inequality theorem. The network weights are easily tuned online by a simple adaptation law, with no need for a tedious and lengthy off-line training phase. The chapter is organized as follows: first, a robust robotic tracking controller based on a neural network is designed and its effectiveness is demonstrated by controlling the trajectories of a two-link robot; second, the recurrent fuzzy neural network based adaptive control is proposed and confirmed through simulation experiments on a robotic tracking control problem; finally, conclusions are drawn.
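As a much-simplified illustration of the general idea that a neural network can approximate modeling uncertainty online inside a tracking controller (a scalar plant with a Gaussian-RBF approximator, not the chapter's two-link robot, RFNN, or HJI-based design; gains and rates are invented), one might write:

```python
import numpy as np

# Scalar toy plant:  x_dot = f(x) + u,  with f unknown to the controller.
f = lambda x: -x**3 + 0.5 * np.sin(3.0 * x)

# Linear-in-parameters approximator f_hat(x) = W @ phi(x), Gaussian RBF features.
centers = np.linspace(-2.0, 2.0, 15)
phi = lambda x: np.exp(-(x - centers) ** 2 / 0.5)
W = np.zeros_like(centers)

k, gamma, dt = 5.0, 20.0, 1e-3      # feedback gain, adaptation gain, time step
x = 0.5
errs = []
for i in range(20_000):
    t = i * dt
    xd, xd_dot = np.sin(t), np.cos(t)          # reference trajectory
    e = x - xd
    u = xd_dot - W @ phi(x) - k * e            # certainty-equivalence control law
    W += dt * gamma * e * phi(x)               # adaptation law from a Lyapunov argument
    x += dt * (f(x) + u)                       # simulate the plant (Euler step)
    errs.append(abs(e))

print(f"mean |tracking error|, last 2000 steps: {np.mean(errs[-2000:]):.4f}")
```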

  • Single Book
  • Cited by 12
  • DOI: 10.1007/3-540-45720-8
Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence
  • Jan 1, 2001

  • Conference Article
  • Cited by 5
  • DOI: 10.1109/icnc.2013.6817951
The Random Neural Network and its learning process in Cognitive Packet Networks
  • Jul 1, 2013
  • Peixiang Liu

The Random Neural Network (RNN) is a recurrent neural network in which neurons interact with each other by exchanging excitatory and inhibitory spiking signals. The stochastic excitatory and inhibitory interactions in the network make the RNN an excellent modeling tool for various interacting entities. It has been applied in a number of areas such as optimization, image processing, communication systems, simulation, pattern recognition, and classification. In this paper, we briefly describe the RNN model and some learning algorithms for it. We discuss how the RNN with reinforcement learning was successfully applied to the Cognitive Packet Network (CPN) architecture so as to offer users QoS-driven packet delivery services. Experiments conducted on a 26-node testbed clearly demonstrate the learning capability of RNNs in CPN.
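The RNN referred to here is Gelenbe's spiking random neural network, whose steady-state excitation probabilities satisfy q_i = λ⁺_i / (r_i + λ⁻_i). A minimal fixed-point iteration on toy parameters (all rates and routing probabilities below are made up for illustration) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 6                                       # number of neurons
r = rng.uniform(1.0, 2.0, size=n)           # firing rates
Lam_plus = rng.uniform(0.0, 0.5, size=n)    # external excitatory arrival rates
Lam_minus = rng.uniform(0.0, 0.2, size=n)   # external inhibitory arrival rates

# Routing probabilities: p_plus[i, j] (p_minus[i, j]) is the probability that a spike
# fired by neuron i reaches neuron j as an excitatory (inhibitory) signal.
routing = rng.dirichlet(np.ones(2 * n), size=n)
p_plus, p_minus = routing[:, :n], routing[:, n:]

# Fixed-point iteration for the steady-state excitation probabilities:
#   q_i = lambda_plus_i / (r_i + lambda_minus_i)
q = np.full(n, 0.5)
for _ in range(200):
    lam_plus = Lam_plus + (q * r) @ p_plus
    lam_minus = Lam_minus + (q * r) @ p_minus
    q = np.minimum(lam_plus / (r + lam_minus), 1.0)

print("steady-state firing probabilities:", np.round(q, 3))
```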
