Abstract

The Self-Organizing Map (SOM) is a popular algorithm widely used for clustering and data exploration. SOM training involves complex calculations whose cost depends on the map and data configuration. Many researchers have improved the processing speed of online SOM using discrete Graphics Processing Units (GPUs). Despite the excellent performance obtained with GPUs, the hardware can be underutilized when the online SOM variant is executed on a GPU architecture; specifically, this occurs when the number of cores exceeds the number of neurons on the map. Moreover, the complexity of the SOM training steps also increases memory usage, which leads to a high rate of memory transfers. Recently, Heterogeneous System Architecture (HSA), which integrates the Central Processing Unit (CPU) and the GPU on a single chip, has rapidly become an attractive design paradigm for modern platforms because of its remarkable parallel processing abilities. Therefore, the main goal of this study is to reduce the computation time of SOM training by adopting the HSA platform and combining two SOM training processes. This study attempts to enhance the processing of the SOM algorithm using a multiple-stimuli approach. The data used in this study are benchmark datasets from the UCI Machine Learning Repository. As a result, the enhanced parallel SOM algorithm executed on the HSA platform achieves a promising speedup for different parameter sizes compared to the standard parallel SOM on the HSA platform.

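To make the "online SOM" terminology concrete, the sketch below shows the standard sequential online training loop: a best-matching-unit (BMU) search followed by a neighborhood weight update for each input stimulus. This is a minimal illustrative Python/NumPy sketch under assumed defaults, not the authors' HSA or multiple-stimuli implementation; the function name, parameters, and decay schedules are placeholders. The multiple-stimuli approach described above would instead dispatch several such stimuli to the device at once so that more cores stay busy when the map has few neurons.

```python
import numpy as np

def train_online_som(data, map_rows=10, map_cols=10, epochs=20,
                     lr0=0.5, sigma0=3.0, seed=0):
    """Minimal online SOM training loop (one stimulus at a time); illustrative only."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # Neuron weight vectors, one per map unit
    weights = rng.random((map_rows * map_cols, n_features))
    # Fixed 2-D grid coordinates of each neuron, used for the neighborhood function
    grid = np.array([(r, c) for r in range(map_rows) for c in range(map_cols)],
                    dtype=float)

    total_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Linearly decay the learning rate and neighborhood radius over time
            frac = step / total_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # 1) BMU search: neuron whose weight vector is closest to the stimulus
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # 2) Gaussian neighborhood on the map grid, centered at the BMU
            d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            # 3) Pull neighboring neurons toward the stimulus
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights.reshape(map_rows, map_cols, n_features)
```

As a usage example, the function could be called on one of the UCI benchmark datasets mentioned above (e.g. a NumPy array of Iris feature vectors) to produce a trained map of weight vectors.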