Federated learning (FL), a novel machine learning framework, has been widely applied in edge computing (EC). Most existing FL frameworks are built on gradient descent (GD) and assume that training resources are sufficient. In practice, however, not every optimization problem can be effectively solved by GD under the FL framework in an edge computing environment, where resources are also limited. In this paper, under limited training resources and the FL framework, we study a class of non-convex optimization problems, namely the Gaussian mixture model (GMM), which is not well handled by gradient descent, and propose an adaptive distributed expectation maximization (DEM) algorithm for GMM. In particular, we analyze the adaptive DEM algorithm and derive its convergence bound. Based on this bound, we propose a local update control algorithm that achieves the best trade-off between local updates and global parameter aggregation in FL, thereby maximizing the utilization of a given resource budget. Experimental results demonstrate that adaptive DEM can effectively solve this class of non-convex optimization problems under the FL framework in a resource-constrained edge environment.
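To make the setting concrete, below is a minimal illustrative sketch of distributed EM for a one-dimensional GMM in an FL-style loop: each client runs a fixed number of local EM steps between global aggregations, and the server averages client parameters weighted by data size. All names, the averaging rule, and the fixed local-step count `tau` are our assumptions for illustration, not the paper's adaptive DEM, which instead selects the number of local updates from the derived convergence bound.

```python
import numpy as np

def local_em_step(X, weights, means, variances):
    """One EM step for a 1-D GMM on a single client's data (illustrative)."""
    K = len(weights)
    resp = np.zeros((len(X), K))
    # E-step: posterior responsibility of each component for each point
    for k in range(K):
        var = variances[k]
        resp[:, k] = weights[k] * np.exp(-(X - means[k])**2 / (2 * var)) \
                     / np.sqrt(2 * np.pi * var)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, means, and variances
    Nk = resp.sum(axis=0)
    new_weights = Nk / len(X)
    new_means = (resp * X[:, None]).sum(axis=0) / Nk
    new_vars = (resp * (X[:, None] - new_means)**2).sum(axis=0) / Nk
    return new_weights, new_means, new_vars

def federated_em(client_data, K=2, rounds=5, tau=3, seed=0):
    """FL-style distributed EM: tau local EM steps per client, then a
    size-weighted parameter average at the server. Averaging components
    position-wise assumes they stay aligned across clients, which holds
    here only because every round starts from the same global parameters."""
    rng = np.random.default_rng(seed)
    all_x = np.concatenate(client_data)
    weights = np.full(K, 1.0 / K)
    means = rng.choice(all_x, K, replace=False)
    variances = np.full(K, all_x.var())
    sizes = np.array([len(x) for x in client_data], dtype=float)
    p = sizes / sizes.sum()
    for _ in range(rounds):
        local_params = []
        for X in client_data:
            w, m, v = weights.copy(), means.copy(), variances.copy()
            for _ in range(tau):  # local updates between aggregations
                w, m, v = local_em_step(X, w, m, v)
            local_params.append((w, m, v))
        # Global aggregation, weighted by client data size
        weights = sum(pi * w for pi, (w, _, _) in zip(p, local_params))
        means = sum(pi * m for pi, (_, m, _) in zip(p, local_params))
        variances = sum(pi * v for pi, (_, _, v) in zip(p, local_params))
    return weights, means, variances
```

In this sketch the communication cost is one aggregation per round, so increasing `tau` trades extra local computation for fewer aggregations; the paper's local update control algorithm tunes exactly this trade-off against a resource budget.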