Abstract

Matrix-variable optimization is a generalization of vector-variable optimization and has many important applications. To reduce computation time and storage requirements, this article presents two matrix-form recurrent neural networks (RNNs), one a continuous-time model and the other a discrete-time model, for solving matrix-variable optimization problems with linear constraints. The two proposed matrix-form RNNs have low complexity and are suitable for parallel implementation in terms of matrix state space. The proposed continuous-time matrix-form RNN significantly generalizes existing continuous-time vector-form RNNs. The proposed discrete-time matrix-form RNN can be effectively applied to blind image restoration, where it greatly reduces the storage requirement and computational cost. Theoretically, the two proposed matrix-form RNNs are guaranteed to be globally convergent to the optimal solution under mild conditions. Numerical results show that the proposed matrix-form RNN-based algorithm is superior to related vector-form and matrix-form RNN-based algorithms in terms of computation time.
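To make the problem class concrete, the following is a minimal sketch of a linearly constrained matrix-variable optimization instance, solved by a generic discrete-time projection dynamic of the form X(k+1) = P_Ω(X(k) − α∇f(X(k))). The instance (matrices A, B, the nonnegativity constraint, and the step size rule) is a hypothetical illustration of the problem class only, not the specific RNN models proposed in the article.

```python
import numpy as np

# Hypothetical instance: minimize f(X) = 0.5 * ||A X - B||_F^2
# subject to the linear (box) constraint X >= 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
X_true = np.abs(rng.standard_normal((4, 3)))   # nonnegative ground truth
B = A @ X_true                                  # consistent right-hand side

def project(X):
    """Projection onto the feasible set {X : X >= 0}."""
    return np.maximum(X, 0.0)

def grad(X):
    """Gradient of 0.5 * ||A X - B||_F^2 with respect to the matrix X."""
    return A.T @ (A @ X - B)

# Generic discrete-time projection iteration (not the paper's model):
# the matrix state is updated as a whole, with no vectorization of X.
X = np.zeros((4, 3))
alpha = 1.0 / np.linalg.norm(A.T @ A, 2)        # step size from the Lipschitz constant
for _ in range(2000):
    X = project(X - alpha * grad(X))

print(np.linalg.norm(A @ X - B))  # residual should be near zero
```

Note that the state X stays a matrix throughout; avoiding the usual vectorization of X into a long vector is what yields the storage and computation savings the abstract refers to.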
