Abstract

With the application of mobile computing in the security field, security-monitoring big data has begun to emerge, providing favorable support for smart city construction and for city-scale investment and expansion. Mobile computing takes full advantage of the computing and communication capabilities of various sensing devices and organizes these devices into a computing cluster. When such clusters are used to train distributed machine learning models, load imbalance and network transmission delay result in low training efficiency. This paper therefore proposes a parameter communication consistency model for distributed machine learning based on the parameter server idea, called the limited synchronous parallel model. Exploiting the fault tolerance of machine learning algorithms, the model dynamically limits the size of the parameter server's synchronization barrier, reducing synchronization communication overhead while preserving the accuracy of model training; worker nodes thus perform bounded asynchronous computation, and the overall performance of the cluster is fully exploited. Cluster dynamic load-balancing experiments show that, during the training of distributed machine learning models, the model fully utilizes cluster performance, preserves model accuracy, and improves training speed.

Highlights

  • The smart city is an inevitable trend of urban modernization and informatization development

  • In large-scale security monitoring scenarios, because training is gated by the worst-performing nodes in the cluster, much of the cluster's overall computing performance is wasted as the performance differences among worker nodes increase

  • This paper draws on the advantages of the bulk synchronous parallel (BSP) and asynchronous parallel (ASP) models and proposes a limited synchronous parallel (LSP) model, which implements a limited synchronization barrier: in each iteration, only the fastest subset of worker processes is synchronized, ensuring frequent synchronization while reducing communication overhead
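The limited synchronization barrier described in the highlights above can be sketched as a minimal simulation. This is an illustrative assumption, not the authors' implementation; the function name `lsp_round` and its inputs are hypothetical:

```python
def lsp_round(worker_updates, barrier_size):
    """One parameter-server round under a limited synchronization barrier.

    worker_updates: list of (finish_time, gradient) pairs, one per worker.
    barrier_size:   how many of the fastest workers the barrier waits for.
    Returns the averaged gradient of the synchronized workers and their ids;
    straggler updates would be folded in on a later round.
    """
    # Rank workers by how quickly they finished this iteration.
    order = sorted(range(len(worker_updates)), key=lambda i: worker_updates[i][0])
    synced = order[:barrier_size]  # only the fastest subset synchronizes
    avg = sum(worker_updates[i][1] for i in synced) / barrier_size
    return avg, synced

# Three workers report (finish_time, gradient); the barrier waits for two,
# so the slowest worker (index 0) does not delay this round.
updates = [(0.9, 2.0), (0.1, 4.0), (0.5, 6.0)]
avg_grad, synced_ids = lsp_round(updates, 2)
print(avg_grad, synced_ids)  # 5.0 [1, 2]
```

Setting `barrier_size` equal to the worker count recovers BSP behavior; setting it to 1 approaches fully asynchronous execution, which is the tuning knob the LSP idea exploits.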


Summary

INTRODUCTION

The smart city is an inevitable trend of urban modernization and informatization. Distributed implementations of iteratively convergent algorithms usually depend on the Bulk Synchronous Parallel (BSP) model [1]. In this model, each compute process performs the same iteration on the local model replica produced by the previous iteration. To address the straggler problem of the BSP model, Dean et al. proposed an asynchronous iterative scheme for distributed machine learning [4], in which each compute process computes fully asynchronously and synchronizes with the parameter server immediately after completing an iteration, greatly exploiting the performance of each compute node; however, because the model and the updated parameters exhibit uncontrollable delays, there is no guarantee on the model's convergence speed.

The remainder of this article is organized as follows. Section 2 describes the components of parameter communication in distributed machine learning. Section 3 describes the distributed machine learning parameter communication consistency model in detail, provides the corresponding theoretical proof, and presents a distributed machine learning framework for security monitoring based on the mainstream parameter communication consistency models. Section 4 presents an experimental analysis of distributed machine learning with the limited synchronous model to verify the corresponding theory and functions. Section 5 presents this study's conclusions.
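The trade-off between BSP (every round gated by the slowest worker) and the fully asynchronous scheme (maximum throughput, but unbounded staleness) can be illustrated with a back-of-the-envelope model. The functions below are illustrative assumptions for this summary, not from the paper:

```python
def bsp_wall_clock(iter_times, rounds):
    """Under BSP, every round ends only when the slowest worker finishes,
    so wall-clock time is gated by the straggler."""
    return rounds * max(iter_times)

def asp_total_iterations(iter_times, wall_clock):
    """Under fully asynchronous execution, each worker iterates at its own
    pace, so total work grows with aggregate (not worst-case) speed."""
    return sum(int(wall_clock // t) for t in iter_times)

# Three workers needing 1, 2, and 4 time units per iteration.
times = [1.0, 2.0, 4.0]
t = bsp_wall_clock(times, 3)              # 12.0: three rounds gated by the 4-unit worker
print(t, asp_total_iterations(times, t))  # 12.0 21  (vs 3 * 3 = 9 iterations under BSP)
```

The asynchronous scheme completes far more iterations in the same wall-clock time, but the fast workers' parameters grow increasingly stale relative to the slow ones, which is why its convergence speed cannot be guaranteed.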

RELATED WORK
THEORETICAL ANALYSIS
Findings
CONCLUSION