Abstract

In distributed optimization schemes that consist of a group of agents coordinated by a coordinator, the optimization algorithm often requires the agents to solve private local proximal minimization subproblems and to exchange data frequently with the coordinator. Such schemes usually incur excessive communication cost, motivating the need for communication reduction in distributed optimization. Gaussian Processes (GPs) have been shown to be effective for learning the agents' proximal operators and hence for reducing the communication of the Alternating Direction Method of Multipliers (ADMM). We combine this learning-based approach with an adaptive uniform quantization approach to achieve even higher communication reduction. Our approach exploits the probabilistic predictions of the GPs to adapt and refine the quantizers as the ADMM algorithm progresses. Moreover, following a linear minimum mean square error (LMMSE) estimation approach, we improve the GP regression and hyperparameter tuning by taking into account the statistics of the resulting quantization errors. The proposed approach can achieve significant communication reduction for ADMM without sacrificing convergence or optimality, even with small numbers of quantization levels, as demonstrated in simulations of a distributed optimal power dispatch application.
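To make the idea of GP-guided adaptive quantization concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes the quantizer range is centered on the GP predictive mean and scaled by the predictive standard deviation (the function name, the mean ± width·σ range rule, and the reconstruction step are assumptions made for illustration).

```python
import numpy as np

def adaptive_uniform_quantize(value, pred_mean, pred_std, n_levels=8, width=3.0):
    """Quantize a scalar with a uniform quantizer whose range is derived from a
    GP prediction: centered on the predictive mean and spanning +/- width standard
    deviations. As the GP grows more confident (smaller pred_std), the range shrinks
    and the same number of levels gives finer resolution.

    Illustrative sketch only; the range rule and reconstruction are assumptions.
    """
    lo = pred_mean - width * pred_std              # lower edge of quantizer range
    hi = pred_mean + width * pred_std              # upper edge of quantizer range
    step = (hi - lo) / (n_levels - 1)              # uniform step size
    idx = int(np.clip(round((value - lo) / step), 0, n_levels - 1))
    reconstructed = lo + idx * step                # value the coordinator recovers
    # For an LMMSE-style treatment, the quantization error of a uniform quantizer
    # is commonly modeled with variance step**2 / 12 (standard uniform-noise model).
    err_var = step ** 2 / 12.0
    return idx, reconstructed, err_var
```

In such a scheme, an agent would transmit only the integer index (log2 of the number of levels in bits) rather than a full-precision value, while the coordinator, holding the same GP prediction, reconstructs the value locally; the error-variance term illustrates the kind of quantization-error statistic that an LMMSE-based GP update could take into account.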
