The proliferation of edge devices and advances in Internet of Things technology have created a vast array of distributed data sources, necessitating machine learning models that can operate closer to the point of data generation. Traditional centralized machine learning approaches often struggle with real-time big data applications, such as climate prediction and traffic simulation, owing to high communication costs. In this study, we introduce random node entropy pairing, a novel distributed learning method for artificial neural networks tailored to distributed computing environments. This method reduces communication overhead and mitigates data imbalance by having nodes exchange only the weights of their local models with randomly selected peers during each communication round, rather than sharing their entire datasets. Our findings indicate that this approach significantly reduces communication costs while maintaining accuracy, even when learning from non-IID local data. Furthermore, we explore additional learning strategies that leverage the characteristics of this method to further improve system accuracy. The results demonstrate that random node entropy pairing is an effective and efficient solution for distributed learning in environments where communication costs and data distribution pose significant challenges.
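The abstract does not detail how paired nodes merge their exchanged weights; the sketch below is only an illustrative assumption of the general idea of random peer pairing with weight-only exchange, using simple pairwise averaging and hypothetical names (`local_update`, `NUM_NODES`, `ROUNDS`) that do not appear in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each node holds one weight vector as its "local model".
NUM_NODES, DIM, ROUNDS = 8, 16, 5
weights = [rng.normal(size=DIM) for _ in range(NUM_NODES)]

def local_update(w, rng):
    # Placeholder for one round of training on the node's local (non-IID) data.
    return w - 0.01 * rng.normal(size=w.shape)

for _ in range(ROUNDS):
    # Each node first trains on its own data, then is paired with a
    # randomly selected peer for this communication round.
    weights = [local_update(w, rng) for w in weights]
    order = rng.permutation(NUM_NODES)
    for a, b in zip(order[0::2], order[1::2]):
        # Only model weights are exchanged (never the raw local data);
        # here each pair simply averages its parameters.
        merged = 0.5 * (weights[a] + weights[b])
        weights[a] = merged.copy()
        weights[b] = merged.copy()
```

Because each node communicates with a single random peer per round rather than uploading data to a central server, the per-round traffic scales with the model size, not the dataset size, which is the communication-cost advantage the abstract highlights.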