Abstract

This paper proposes a new learning method based on destructive computing, in contrast to conventional progressive computing or steady-step learning. Although inputs contain a large amount of biased or distorted information, conventional learning methods fundamentally aim to acquire, gradually, information that is as faithful as possible to the inputs; this has prevented us from reaching the intrinsic information hidden at the deepest level of the inputs. We therefore suppose a leap to that level, changing the information at hand not gradually but drastically. In particular, to realize such a drastic change, we introduce winner-lose-all (WLA), which drastically destroys the supposedly most important information so as to reach, or leap to, the intrinsic information hidden in complicated inputs. The method was applied to a target-marketing problem. The experimental results show that, with the new method, multi-layered neural networks were able to disentangle complicated network configurations into the simplest ones, with simple and independent correlation coefficients between inputs and targets. This was realized by drastically changing the information content in the course of learning and, correspondingly, by mixing regular and irregular properties over the connection weights.
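The core WLA operation described above can be sketched as follows. This is a minimal illustration under the assumption that "destroying the supposedly most important information" means zeroing the connection weights with the largest magnitudes (the "winners"); the function name, the magnitude-based selection rule, and the `lose_ratio` parameter are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def winner_lose_all(weights: np.ndarray, lose_ratio: float = 0.1) -> np.ndarray:
    """Zero the fraction `lose_ratio` of weights with the largest absolute
    values, forcing the network to rely on the remaining, previously less
    important connections (a sketch of the WLA idea, not the paper's code)."""
    flat = np.abs(weights).ravel()
    k = max(1, int(lose_ratio * flat.size))
    # Indices of the k largest-magnitude weights: the supposed "winners".
    winner_idx = np.argpartition(flat, -k)[-k:]
    out = weights.copy().ravel()
    out[winner_idx] = 0.0  # the winners lose: destroy the strongest connections
    return out.reshape(weights.shape)
```

In a training loop, such a step would be applied between ordinary weight updates, so that learning alternates between gradual acquisition and the drastic destruction of dominant information.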
