Abstract

In this paper, we propose a new information-theoretic method that simplifies and unifies learning methods within a single framework. The method, called maximum information, produces humanly comprehensible internal representations by supposing that information is already maximized before learning. Learning proceeds in three stages. First, a competitive network is trained without any information on the input variables. Second, supposing maximum information on the input variables, the importance of each variable is estimated by measuring the mutual information between competitive units and input patterns. Finally, the competitive network is retrained with the inputs weighted by this estimated importance. We apply the method to self-organizing maps rather than to pure competitive learning, because SOMs make it easy to demonstrate intuitively how explicit the resulting internal representations are. We applied the method to the well-known SPECT heart data from the machine learning repository. Our method produced more comprehensible class boundaries on the U-matrices than did the conventional SOM, and its quantization and topographic errors were no larger than those of the conventional SOM.
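The three stages described in the abstract could be sketched as follows. This is a minimal illustration, not the authors' implementation: the SOM trainer, the grid size, the learning schedule, and the synthetic binary data (standing in for the SPECT features) are all assumptions, and variable importance is estimated as the mutual information between the winning unit and each binary input variable.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(4, 4), epochs=30, lr=0.5, sigma=1.5):
    """Plain SOM training (sketch); returns weights of shape (units, features)."""
    n_units = grid[0] * grid[1]
    W = rng.random((n_units, X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        decay = np.exp(-t / epochs)  # shrink neighborhood and learning rate
        for x in X:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distances
            h = np.exp(-d2 / (2 * (sigma * decay) ** 2))       # neighborhood function
            W += lr * decay * h[:, None] * (x - W)
    return W

def variable_importance(X, W):
    """Stage 2 (sketch): mutual information I(winner; variable k) for each
    binary input variable, normalized to [0, 1]."""
    winners = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
    p_j = np.bincount(winners, minlength=len(W)) / len(X)      # p(unit j wins)
    imp = np.zeros(X.shape[1])
    for k in range(X.shape[1]):
        mi = 0.0
        for v in (0, 1):
            mask = X[:, k] == v
            p_v = mask.mean()
            if p_v == 0:
                continue
            p_jv = np.bincount(winners[mask], minlength=len(W)) / mask.sum()
            nz = p_jv > 0
            mi += p_v * np.sum(p_jv[nz] * np.log(p_jv[nz] / p_j[nz]))
        imp[k] = mi
    return imp / imp.max() if imp.max() > 0 else imp

# Synthetic binary data standing in for the SPECT features (an assumption):
X = rng.integers(0, 2, size=(100, 8)).astype(float)
W1 = train_som(X)                 # stage 1: train without variable information
imp = variable_importance(X, W1)  # stage 2: estimate variable importance
W2 = train_som(X * imp)           # stage 3: retrain on importance-weighted inputs
```

The only design choice specific to the paper's framework is that the importance estimate feeds back into training by rescaling the inputs before the second SOM run; everything else is standard competitive learning.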
