Abstract

This paper proposes a learning algorithm for a three-layered neural network built from novel non-linear quaternionic-valued multiplicative (QVM) neurons. The development of non-linear neuron models is inspired by the computing capability of non-linear aggregation in the cell body of biological neurons. However, unlike linear neuron models, most non-linear neuron models rely on higher-order aggregation, which is mathematically complex and difficult to train. Designing non-linear neuron models with a simple structure therefore remains a challenging problem in neurocomputing. The QVM neuron model is motivated by this goal: it combines a simple structure with strong computational ability. In the proposed neuron, the linear part is determined by the weight and bias associated with each quaternionic-valued input; non-linearity is introduced through the non-commutative multiplication of all the linearly transformed quaternionic input-weight terms. Three-layered networks of QVM neurons are trained with the standard quaternionic-gradient-based backpropagation (QBP) algorithm. The computational and generalization capabilities of the QVM neuron are assessed through training and testing in the quaternionic domain on benchmark problems, including 3D and 4D chaotic time-series prediction, 3D geometrical transformations, and 3D face recognition. The training and testing results are compared with those of conventional and root-power mean (RPM) neurons in the quaternionic domain using training and testing mean squared errors (MSEs), network topology (number of parameters), variance, and the Akaike information criterion (AIC) as statistical measures. These findings indicate that networks with QVM neurons have greater computational and generalization capabilities than networks with conventional and RPM neurons in the quaternionic domain.
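To make the aggregation concrete, the description above suggests an output of the form y = f(∏ᵢ (wᵢ ⊗ xᵢ + bᵢ)), where ⊗ is the non-commutative Hamilton product. The sketch below is a minimal illustration under that reading; the helper names `qmul` and `qvm_neuron`, the left-to-right multiplication order, and the split (component-wise) tanh activation are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as arrays [w, x, y, z].
    Note: qmul(p, q) != qmul(q, p) in general (non-commutative)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def qvm_neuron(xs, ws, bs):
    """QVM-style aggregation (illustrative sketch): form the linear
    term w_i * x_i + b_i for each quaternionic input, then combine all
    terms by Hamilton multiplication, left to right."""
    acc = qmul(ws[0], xs[0]) + bs[0]
    for w, x, b in zip(ws[1:], xs[1:], bs[1:]):
        acc = qmul(acc, qmul(w, x) + b)
    # Split-type activation applied component-wise, a common choice in
    # quaternionic networks; the paper's activation may differ.
    return np.tanh(acc)
```

Because the Hamilton product is non-commutative, the ordering of the input-weight terms matters, which is one way the multiplicative aggregation yields a richer mapping than a linear weighted sum at no increase in parameter count per input.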
