Abstract

Federated learning (FL), a novel machine learning framework, has been widely applied in edge computing (EC). Most existing FL frameworks are built on gradient descent (GD) and assume that training resources are sufficient. In practice, however, not every optimization problem can be solved well by GD under the FL framework in an edge computing environment, and edge resources are limited. In this paper, given limited training resources and the FL framework, we study a class of non-convex optimization problems, namely the Gaussian mixture model (GMM), which is not well handled by gradient descent, and propose an adaptive distributed expectation maximization (DEM) algorithm for GMM. In particular, we analyze the adaptive DEM algorithm and derive its convergence bound. Based on this bound, we propose a local update control algorithm that achieves the best trade-off between local updates and global parameter aggregation in FL, so as to maximize the utilization of a given resource budget. The experimental results demonstrate that adaptive DEM can solve this class of non-convex optimization problems well under the FL framework in an edge resource-constrained environment.
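To make the distributed-EM idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of how EM for a 1-D GMM can be split across FL clients: each client runs a local E-step and reports only sufficient statistics, which the server aggregates into a global M-step. All function names and the quantile-based initialization are illustrative assumptions; the paper's adaptive local-update control is not shown.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """1-D Gaussian density."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def local_sufficient_stats(x, pis, mus, vars_):
    """E-step on one client's data; only sufficient statistics leave the client."""
    # responsibilities: shape (n_samples, K)
    dens = np.stack([pi * gaussian_pdf(x, mu, v)
                     for pi, mu, v in zip(pis, mus, vars_)], axis=1)
    resp = dens / dens.sum(axis=1, keepdims=True)
    n_k = resp.sum(axis=0)        # effective sample count per component
    s_x = resp.T @ x              # responsibility-weighted sums
    s_xx = resp.T @ (x ** 2)      # responsibility-weighted sums of squares
    return n_k, s_x, s_xx

def global_m_step(stats):
    """Server-side M-step: aggregate client statistics, update global parameters."""
    n_k = sum(s[0] for s in stats)
    s_x = sum(s[1] for s in stats)
    s_xx = sum(s[2] for s in stats)
    pis = n_k / n_k.sum()
    mus = s_x / n_k
    vars_ = s_xx / n_k - mus ** 2
    return pis, mus, np.maximum(vars_, 1e-6)  # floor variances for stability

def federated_em(client_data, K=2, rounds=50):
    """Run synchronous federated EM: one global aggregation per round."""
    all_x = np.concatenate(client_data)
    pis = np.full(K, 1.0 / K)
    # simple deterministic initialization from global quantiles (an assumption)
    mus = np.quantile(all_x, (np.arange(K) + 1) / (K + 1))
    vars_ = np.full(K, all_x.var())
    for _ in range(rounds):
        stats = [local_sufficient_stats(x, pis, mus, vars_) for x in client_data]
        pis, mus, vars_ = global_m_step(stats)
    return pis, mus, vars_
```

Because the per-component statistics are additive across clients, aggregating them yields exactly the centralized M-step update; the resource trade-off studied in the paper arises when clients instead run several local EM iterations between aggregations.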

