Abstract

Distributed k-winners-take-all (k-WTA) neural network (k-WTANN) models have better scalability than centralized ones. In this work, a distributed k-WTANN model with a simple structure is designed for the efficient selection of k winners among a group of more than k agents via competition based on their inputs. Unlike an existing distributed k-WTANN model, the proposed model does not rely on consensus filters and has only one state variable. We prove that, under mild conditions, the proposed distributed k-WTANN model is globally asymptotically convergent. The theoretical conclusions are validated via numerical examples, which also show that our model converges faster than the existing distributed k-WTANN model.
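To illustrate the k-WTA operation itself (not the paper's distributed model), the following is a minimal centralized sketch using a single scalar state variable z acting as a threshold. It assumes a common dual-variable k-WTA formulation in which z evolves by ż = α(Σᵢ σ(uᵢ − z) − k), with σ a steep sigmoid; at equilibrium, exactly k outputs sit near 1. The function name `k_wta` and all parameter values are illustrative choices, not from the paper.

```python
import numpy as np

def k_wta(u, k, alpha=10.0, dt=1e-3, steps=20000):
    """Select the k largest entries of u via a single-state threshold
    dynamic: z' = alpha * (sum(sigmoid(u_i - z)) - k).
    At equilibrium, about k outputs are near 1 (winners), the rest near 0.
    This is a centralized illustrative sketch, not the distributed model."""
    u = np.asarray(u, dtype=float)
    z = u.mean()   # initialize the threshold inside the input range
    gain = 100.0   # steep sigmoid slope, approximating a step function
    for _ in range(steps):
        o = 1.0 / (1.0 + np.exp(-gain * (u - z)))  # soft winner indicators
        z += dt * alpha * (o.sum() - k)            # raise/lower threshold
    return (o > 0.5).astype(int)

# Example: pick the 2 largest of 5 inputs
print(k_wta([0.3, 0.9, 0.1, 0.7, 0.5], 2))  # winners at indices 1 and 3
```

The threshold z rises while more than k outputs are active and falls while fewer are, so it settles between the k-th and (k+1)-th largest inputs; the distributed model in the paper achieves an analogous selection through local interactions among agents.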
