This paper is concerned with distributed online constrained nonconvex optimization problems in which a global cost function, given as the sum of local smooth (possibly nonconvex) cost functions, is minimized. Such problems form a significant component of online learning in dynamic environments, which commonly involve time-varying (TV) digraphs. Moreover, the network topology considered here is more general in that the associated weight matrices are only required to be row stochastic. To tackle these challenges, we adopt a primal–dual framework that decomposes the coupled constraints into individual node-related constraints. Additionally, by integrating a compensation-error scheme, a novel primal–dual mirror descent (PDMD) algorithm is proposed that employs two sequences of variables serving as compensation error terms for the bidirectional mirror mappings between the primal and dual spaces. Under some mild conditions, we prove that the proposed method reaches a stationary point at a sublinear rate. In numerical simulations, four examples illustrate the validity and superiority of the proposed algorithm relative to comparison algorithms.
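To fix ideas, the mirror mapping underlying mirror descent methods can be sketched as follows. This is a generic, hypothetical illustration of a single entropic mirror descent step on the probability simplex, not the paper's PDMD algorithm (which additionally maintains compensation-error sequences and operates over a TV digraph); the function names and the toy objective are assumptions for illustration only.

```python
import numpy as np

def entropic_mirror_descent_step(x, grad, eta):
    """One mirror descent step on the probability simplex.

    With the negative-entropy mirror map, the primal point is mapped to
    the dual space, shifted by the gradient, and mapped back, which
    reduces to a multiplicative-weights update plus renormalization.
    """
    y = x * np.exp(-eta * grad)   # gradient step in the dual space
    return y / y.sum()            # mirror back onto the simplex

# Toy example (assumed): minimize f(x) = ||x - c||^2 over the simplex,
# where c itself lies in the simplex, so the minimizer is x = c.
c = np.array([0.7, 0.2, 0.1])
x = np.ones(3) / 3               # uniform initialization
for _ in range(500):
    grad = 2.0 * (x - c)         # gradient of the toy objective
    x = entropic_mirror_descent_step(x, grad, eta=0.5)
```

In the distributed primal–dual setting, each node would run such mirror mapping steps in both the primal and dual spaces, which is where the paper's bidirectional compensation error terms enter.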