Abstract

The random neural network is a biologically inspired neural model in which neurons interact by probabilistically exchanging positive and negative unit-amplitude signals, and it offers superior learning capabilities compared to other artificial neural networks. This paper considers non-negative least squares supervised learning in this context and develops an approach that achieves fast execution and excellent learning capacity. The speedup results from significant enhancements in the solution of the non-negative least-squares problem, namely (a) the development of analytical expressions for evaluating the gradient and objective functions and (b) a novel limited-memory quasi-Newton solution algorithm. Simulation results in the context of optimizing the performance of a disaster management problem using supervised learning verify the efficiency of the approach, achieving a two-orders-of-magnitude execution speedup and improved solution quality compared to state-of-the-art algorithms.
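For orientation only (the notation below is generic and not taken from the paper), the core computational subproblem is a standard non-negative least-squares program,

\[
\min_{w \ge 0} \; f(w) = \tfrac{1}{2}\,\lVert A w - b \rVert_2^2,
\qquad
\nabla f(w) = A^{\top} (A w - b),
\]

where evaluating \(f\) and its gradient analytically, and feeding them to the limited-memory quasi-Newton solver, is the source of the reported speedup.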

Highlights

  • The random neural network (RNN) is a neural network model inspired by the spiking behavior of biophysical neurons [16,17]

  • The most important application area of the RNN is the solution of supervised learning problems such as laser-intensity vehicle classification [35], wafer surface reconstruction [27], mine detection [1], and denial-of-service attack detection [41]

  • If the outputs of two consecutive layers are known, the optimal weights connecting the two layers can be derived by minimizing the mean square error (MSE) between the actual and the desired input to the second layer [6]


Summary

INTRODUCTION

The random neural network (RNN) is a neural network model inspired by the spiking behavior of biophysical neurons [16,17]. Linear least-squares techniques for learning have been utilized in feedforward connectionist neural networks and shown to be very efficient, obtaining smaller training errors and faster training times than backpropagation techniques. These methods are based on the observation that the input to the neurons of a given layer is a linear function of the outputs of the preceding layer. If the outputs of two consecutive layers are known, the optimal weights connecting the two layers can be derived by minimizing the mean square error (MSE) between the actual and the desired input to the second layer [6]. One problem with this approach is that it does not take into consideration the scaling effect of the non-linear activation function. U(a, b) and U_int(a, b) denote the uniform distributions over the interval [a, b] generating real and integer numbers, respectively.
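The sketch below is a minimal, generic illustration of this layer-wise least-squares idea; it is not the PGNNLS algorithm developed in the paper. It fits the weights between two layers by non-negative least squares with a plain projected-gradient loop, and the matrices A and b are random placeholder data standing in for the actual layer outputs and desired inputs.

```python
import numpy as np

def nnls_projected_gradient(A, b, num_iters=2000):
    """Minimize ||A w - b||^2 subject to w >= 0 by projected gradient descent."""
    w = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(num_iters):
        grad = A.T @ (A @ w - b)                # analytical gradient of the objective
        w = np.maximum(w - step * grad, 0.0)    # gradient step, then project onto w >= 0
    return w

# Toy usage with random placeholder data (not RNN quantities from the paper).
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(50, 10))        # outputs of the preceding layer
b = A @ rng.uniform(0.0, 1.0, size=10)          # desired inputs to the next layer
w = nnls_projected_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ w - b))
```

For small dense instances, the same subproblem can also be solved directly with an off-the-shelf routine such as scipy.optimize.nnls.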

The RNN Model
Non-negative Least Squares
NON-NEGATIVE LEAST-SQUARES RNN LEARNING FORMULATION
RNN–NNLS LEARNING ALGORITHM
PROJECTED GRADIENT NON-NEGATIVE LEAST-SQUARES ALGORITHM
EFFICIENT COMPUTATION OF NNLS COSTLY FUNCTIONS
The Structure of Matrix B
SIMULATION RESULTS
Problem Description
Supervised Learning Solution Approach
Training Architecture
Performance Evaluation of PGNNLS
Solving the AEUI Problem
CONCLUSIONS
