Abstract

In this paper, we propose a method to extend information-theoretic competitive learning to supervised competitive learning. We have shown that information maximization corresponds to competition among neurons. However, information maximization alone cannot specify which neurons should be winners, so teacher information cannot be incorporated directly. To deal with teacher information, we use a weighted distance between input patterns and connection weights: even when the distance between an input pattern and a connection weight is not small, it is made smaller by a parameter that takes the teacher information into account. With this weighted distance, we can naturally incorporate teacher information and extend unsupervised competitive learning to supervised information-theoretic competitive learning.
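
The abstract does not give the exact form of the weighted distance. As a minimal illustrative sketch, one might assume each competitive neuron carries a class label and a scalar parameter (here called `beta`, a hypothetical name) shrinks the distance of neurons whose label agrees with the teacher signal, so that such neurons are more likely to win the competition:

```python
import numpy as np

def weighted_distances(x, W, teacher_label, neuron_labels, beta=0.5):
    """Distances between an input pattern and all connection weights,
    shrunk for neurons whose label matches the teacher signal.

    This is an illustrative sketch, not the paper's exact formulation.
    x             : input pattern, shape (d,)
    W             : connection weights, shape (n_neurons, d)
    teacher_label : class label supplied by the teacher
    neuron_labels : label assumed to be attached to each neuron, shape (n_neurons,)
    beta          : assumed weighting parameter in (0, 1]; smaller values
                    favor teacher-consistent neurons more strongly
    """
    d = np.linalg.norm(W - x, axis=1)          # ordinary Euclidean distances
    match = (neuron_labels == teacher_label)   # neurons consistent with the teacher
    return np.where(match, beta * d, d)        # shrink distances of matching neurons

# Usage: the winner is the neuron with the smallest weighted distance.
x = np.array([0.2, 0.8])
W = np.array([[0.1, 0.9], [0.9, 0.1], [0.5, 0.5]])
neuron_labels = np.array([0, 1, 0])
winner = np.argmin(weighted_distances(x, W, teacher_label=1,
                                      neuron_labels=neuron_labels))
```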
