Abstract

A major challenge in tackling real-world problems is scalability; indeed, scalability partially explains the success of deep learning over other learning paradigms. Here, we address the scalability of a novel learning paradigm, proposed in 2021, based solely on self-organizing principles. This paradigm consists only of dynamical equations that self-organize with the input to create attractor-repeller points related to the patterns found in the data. To make such a system scalable, we propose the Magnum algorithm, which employs many self-organizing subsystems (Inertia-SyncMaps), each operating on a subset of the problem's variables. The key idea is that, by merging Inertia-SyncMaps, Magnum builds over time a variable correlation by consensus, capable of accurately predicting the structure of large groups of variables. Experiments show that Magnum surpasses or ties with other unsupervised algorithms on all of the high-dimensional chunking problems tested, each with distinct shapes and structural features. Moreover, Inertia-SyncMap alone outperforms or ties with other unsupervised algorithms on six out of seven basic chunking problems. This work thus sheds light on how self-organizing learning paradigms can be scaled up to handle high-dimensional structures and compete with current learning paradigms.
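To make the abstract's description more concrete, the following is a minimal, hypothetical Python sketch of how such a pipeline might look. The function names (inertia_syncmap, magnum_consensus), the specific attract-repel update, the momentum term, and all parameter values are illustrative assumptions, not the authors' actual formulation.

    import numpy as np

    def inertia_syncmap(seq, n_vars, dim=3, lr=0.1, momentum=0.9, seed=0):
        """Attractor-repeller dynamics with inertia over a binary sequence.

        seq: (T, n_vars) array; seq[t, i] == 1 means variable i is active
             at time t. Returns an (n_vars, dim) map in which variables
             that tend to co-activate end up close together.
        """
        rng = np.random.default_rng(seed)
        w = rng.uniform(-1.0, 1.0, (n_vars, dim))   # map positions
        v = np.zeros_like(w)                        # inertia (velocity) term
        for act in seq.astype(bool):
            # need both an active and an inactive group to form centroids
            if act.sum() < 2 or (~act).sum() < 2:
                continue
            c_pos, c_neg = w[act].mean(0), w[~act].mean(0)
            grad = np.zeros_like(w)
            # active points attract each other and repel the inactive centroid
            grad[act] = (c_pos - w[act]) - (c_neg - w[act])
            grad[~act] = (c_neg - w[~act]) - (c_pos - w[~act])
            v = momentum * v + lr * grad            # momentum = "inertia"
            w += v
            # rescale only if the map grows beyond the unit ball
            w /= max(1.0, np.linalg.norm(w, axis=1).max())
        return w

    def magnum_consensus(seq, n_vars, n_maps=10, subset_size=20, seed=0):
        """Run many Inertia-SyncMaps on random variable subsets and merge
        their pairwise map distances into one consensus distance matrix."""
        rng = np.random.default_rng(seed)
        dist_sum = np.zeros((n_vars, n_vars))
        counts = np.zeros((n_vars, n_vars))
        for k in range(n_maps):
            idx = rng.choice(n_vars, size=min(subset_size, n_vars),
                             replace=False)
            w = inertia_syncmap(seq[:, idx], len(idx), seed=seed + k)
            d = np.linalg.norm(w[:, None] - w[None, :], axis=-1)
            dist_sum[np.ix_(idx, idx)] += d
            counts[np.ix_(idx, idx)] += 1
        # average over the subsystems that saw each pair; pairs never
        # co-sampled stay 0, so enough maps are needed to cover all pairs
        return dist_sum / np.maximum(counts, 1)

Under these assumptions, the chunk structure could then be read off the consensus matrix with any standard clustering step, for example hierarchical clustering on the averaged distances.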
