Abstract

In this paper, we investigate the effect of different activation functions (AFs) on the convergence performance of a gradient-based neural network (GNN) for solving the linear matrix equation AXB + X = C. It is observed that, by employing different AFs, i.e., the linear, power-sigmoid, sign-power, and general sign-bi-power functions, the presented GNN model achieves different convergence performance. More specifically, if the linear function is employed, the GNN model achieves exponential convergence; if the power-sigmoid function is employed, superior convergence is achieved compared with the linear case; and if the sign-power or general sign-bi-power function is employed, the GNN model achieves finite-time or fixed-time convergence, respectively. Detailed theoretical proofs are provided to establish these facts. In addition, the exponential convergence rate and the upper bounds of the finite and fixed convergence times are estimated theoretically. Finally, two illustrative examples are presented to further substantiate the aforementioned theoretical results and the effectiveness of the presented GNN model for solving the linear matrix equation.
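As a minimal numerical sketch of how such a GNN can be simulated, the snippet below integrates the gradient-design dynamic dX/dt = -γ(Aᵀ F(E) Bᵀ + F(E)) with error E = AXB + X - C, where F is an element-wise activation function. This dynamic, the activation definitions, and all parameter values (gamma, dt, steps) are assumptions based on standard GNN constructions for matrix equations, not necessarily the paper's exact model.

```python
import numpy as np

def linear_af(e):
    """Linear activation: associated with exponential convergence."""
    return e

def sign_power_af(e, r=0.5):
    """Sign-power activation sign(e)*|e|^r: associated with finite-time convergence."""
    return np.sign(e) * np.abs(e) ** r

def gnn_solve(A, B, C, af=linear_af, gamma=10.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the assumed GNN dynamic
    dX/dt = -gamma * (A.T @ F(E) @ B.T + F(E)), E = A X B + X - C."""
    X = np.zeros_like(C)
    for _ in range(steps):
        E = A @ X @ B + X - C                              # residual error
        X = X - gamma * dt * (A.T @ af(E) @ B.T + af(E))   # gradient step
    return X

# Usage: a small well-conditioned instance of AXB + X = C
rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((3, 3))
B = 0.3 * rng.standard_normal((3, 3))
X_true = rng.standard_normal((3, 3))
C = A @ X_true @ B + X_true
X = gnn_solve(A, B, C, af=linear_af)
print(np.linalg.norm(A @ X @ B + X - C))  # residual norm should be near zero
```

Swapping `af=sign_power_af` into the same call changes only the activation, which is how the paper's comparison across AFs would be reproduced in such a simulation.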
