Abstract

The k-winners-take-all (k-WTA) problem refers to selecting the k winners, i.e., the neurons with the k largest inputs, from a group of n neurons, where each neuron receives one input. In existing k-WTA neural network models, the positive integer k is given explicitly in the corresponding mathematical models. In this article, we consider another case in which the number k of the k-WTA problem is specified implicitly by the initial states of the neurons. Based on a constraint conversion for a classical optimization formulation of the k-WTA problem, and by modifying traditional gradient descent, we propose an initialization-based k-WTA neural network model with only n neurons for n-dimensional inputs, whose dynamics are described by a parameterized gradient descent. Theoretical results show that the state vector of the proposed k-WTA neural network model converges globally and asymptotically to the theoretical k-WTA solution under mild conditions. Simulation examples demonstrate the effectiveness of the proposed model and indicate that its convergence can be accelerated by simply tuning two design parameters.
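The sketch below is only an illustration of the ideas named in the abstract, not the authors' model: `kwta_reference` encodes the problem definition (output 1 for the k largest inputs, 0 otherwise), and `kwta_dynamics` is a generic penalized gradient-descent scheme in which k is encoded implicitly by the initial state (the dynamics preserve the sum of the state vector, so sum(x0) = k). The function names, the penalty form, the projection step, and the parameters `alpha` and `beta` are assumptions made for illustration.

```python
import numpy as np

def kwta_reference(u, k):
    """Reference k-WTA output: 1 for the k largest entries of u, 0 otherwise."""
    x = np.zeros_like(u)
    x[np.argsort(u)[-k:]] = 1.0
    return x

def kwta_dynamics(u, x0, alpha=10.0, beta=10.0, dt=1e-3, steps=20000):
    """Illustrative initialization-based k-WTA via penalized gradient descent.

    The number of winners k is given implicitly by the initial state x0:
    every step preserves sum(x), so initializing with sum(x0) = k selects
    (approximately) the k largest entries of u. alpha and beta act as
    design parameters scaling the descent rate and the box penalty.
    """
    x = x0.astype(float).copy()
    for _ in range(steps):
        # Gradient of -u^T x plus a quadratic penalty keeping x inside [0, 1]^n.
        grad = -u + beta * (np.maximum(x - 1.0, 0.0) + np.minimum(x, 0.0))
        # Project the gradient onto the hyperplane sum(x) = const,
        # so the implicit k = sum(x0) is preserved throughout.
        grad -= grad.mean()
        x -= alpha * dt * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.normal(size=8)        # n = 8 neuron inputs
    k = 3
    x0 = np.zeros(8)
    x0[:k] = 1.0                  # initial state encodes k = sum(x0) = 3
    print("inputs   :", np.round(u, 2))
    print("reference:", kwta_reference(u, k))
    print("dynamics :", np.round(kwta_dynamics(u, x0), 2))
```

With the soft penalty, the state settles near (not exactly at) the binary k-WTA solution; larger values of `beta` tighten the box constraint, and larger `alpha` (with a suitably small step `dt`) speeds up convergence, loosely mirroring the role of the two design parameters mentioned in the abstract.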
