Abstract

In this paper, we present a method of entropy minimization for competitive learning with the winner-take-all activation rule. In competitive learning, only one unit is turned on as the winner, while all the other units are turned off as losers. Thus, the learning can be viewed mainly as a process of entropy minimization: if the entropy in the competitive layer is minimized, only one unit is on and all the other units are off; if the entropy is maximized, all the units are equally activated. We applied this entropy minimization method to two problems: an autoencoder used as a feature detector, and the organization of internal representation in estimating the well-formedness of English sentences. For the autoencoder, we observed that networks trained with the entropy method could clearly classify four input patterns into two categories. For the sentence well-formedness problem, a feature of the input patterns was explicitly visible in the competitive hidden layer; in other words, an explicit internal representation could be obtained. In both cases, multiple inhibitory connections were observed to be produced. Thus, the entropy minimization method is completely equivalent to competitive learning approaches based on mutual inhibition, while being simpler and easier to compute. In the formulation and experiments, supervised learning (the autoencoder) was used; however, the entropy method can be extended to fully unsupervised learning, which may replace ordinary competitive learning with the winner-take-all activation rule.
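As a rough illustration of the core idea (a minimal sketch, not the formulation used in the paper), the following Python snippet minimizes the entropy of normalized competitive-unit activations by gradient descent; the function names, learning rate, number of units, and update loop are assumptions chosen for illustration only. Starting from nearly uniform activations (near-maximal entropy), the descent drives the layer toward a one-hot pattern in which a single winner is on and all other units are off.

```python
# Illustrative sketch of entropy minimization over a competitive layer
# (assumed setup; not the authors' original formulation).
import numpy as np

rng = np.random.default_rng(0)

def layer_entropy(v):
    """Entropy of the normalized activations p_j = v_j / sum_k v_k.
    H = -sum_j p_j log p_j is maximal when all units are equally
    active and zero when a single winner carries all the activation."""
    p = v / v.sum()
    return -np.sum(p * np.log(p + 1e-12))

def entropy_grad(v):
    """Analytic gradient of H with respect to the raw activations:
    dH/dv_j = (-log p_j - H) / sum_k v_k."""
    s = v.sum()
    p = v / s
    H = -np.sum(p * np.log(p + 1e-12))
    return (-np.log(p + 1e-12) - H) / s

# Start from nearly uniform (high-entropy) activations of four units.
v = np.abs(rng.normal(1.0, 0.05, size=4))

for _ in range(2000):
    v -= 0.05 * entropy_grad(v)   # gradient descent on the entropy
    v = np.clip(v, 1e-6, None)    # keep activations positive

print(np.round(v / v.sum(), 3))   # approaches a one-hot (winner-take-all) pattern
```

Run repeatedly with different random seeds and the winning unit changes, but the layer always settles into the same low-entropy, winner-take-all state, which is the behavior the method exploits in place of explicit mutual inhibition.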
