Growing neural gas (GNG) has been widely used for topological mapping, clustering, and other unsupervised tasks. It starts from two random nodes and grows until it forms a topological network covering all the data. The time required for growth depends on the total amount of data and the number of nodes currently in the network. To accelerate growth, we introduce a novel distributed batch processing method that extracts the rough data distribution, called Distributed Batch Learning Growing Neural Gas (DBL-GNG). First, instead of the per-sample loop of standard GNG, we adopt a batch learning approach to accelerate learning, replacing most of the standard update equations with matrix calculations. Next, instead of starting with two random nodes, we start with multiple nodes placed in different regions of the distribution. Furthermore, we propose adding multiple nodes to the network at once rather than one at a time. Finally, we introduce an edge-cutting method that removes unimportant links between nodes to obtain a better cluster network. We evaluate DBL-GNG on multiple benchmark datasets, where it runs at least 10 times faster than other GNG methods. We also demonstrate the scalability of DBL-GNG by implementing a multi-scale batch learning process, named MS-DBL-GNG, which achieves fast convergence. In addition, we demonstrate the dynamic data adaptation of DBL-GNG on 3D point cloud data, where it processes and maps topological nodes onto point cloud objects in real time.
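To make the batch-learning idea concrete, the sketch below illustrates (in NumPy) how a per-sample GNG loop can be replaced by matrix operations: winner nodes for a whole batch are found from one pairwise distance matrix, and node positions are updated per batch. This is a minimal, hypothetical illustration of the general technique, not the authors' DBL-GNG implementation; the function names, the simple mean-based update, and the omission of edge handling are assumptions made here for brevity.

```python
# Minimal sketch of batch (matrix-based) winner search and node update,
# contrasted with the per-sample loop of standard GNG. Illustrative only.
import numpy as np

def find_winners_batch(X, W):
    """Return, for every sample in X, the indices of its two nearest nodes.

    X : (n_samples, dim) data batch
    W : (n_nodes, dim) node positions
    """
    # Pairwise squared distances computed in one matrix expression,
    # instead of looping over samples one at a time.
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(d2, axis=1)
    return order[:, 0], order[:, 1]  # winner and runner-up per sample

def batch_update(X, W, lr=0.1):
    """Move each node toward the mean of the samples it won (one batch step)."""
    winners, _ = find_winners_batch(X, W)
    for k in range(W.shape[0]):
        mask = winners == k
        if mask.any():
            W[k] += lr * (X[mask].mean(axis=0) - W[k])
    return W

# Toy usage: 1000 2-D samples, 8 nodes initialised at random data points.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
W = X[rng.choice(len(X), size=8, replace=False)].copy()
W = batch_update(X, W)
```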