Abstract

We propose a distributed optimization framework that generates accurate sparse estimates while admitting an algorithmic solution with guaranteed convergence to a global minimizer. To this end, the proposed problem formulation combines the minimax concave penalty with an additional penalty, called the consensus-promoting penalty (CPP), which induces convexity in the resulting optimization problem. This problem is solved with an exact first-order proximal gradient algorithm that employs a pair of proximity operators; we refer to it as the distributed proximal and debiasing-gradient (DPD) method. Numerical examples show that the CPP not only convexifies the overall cost function but also accelerates convergence in terms of system mismatch.
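As a rough illustration only (not the paper's DPD method, whose details are not given in this abstract), the sketch below shows a centralized proximal gradient iteration for a least-squares data-fit term with the minimax concave (MC) penalty. The MC penalty's proximity operator is the standard firm-shrinkage operator, which, unlike soft thresholding, leaves large coefficients unbiased. All function names and parameter values (firm_shrinkage, prox_grad_step, lam, gamma) are illustrative assumptions; the paper's formulation additionally involves the CPP and a pair of proximity operators.

    import numpy as np

    def firm_shrinkage(x, lam, gamma):
        """Proximity operator of the MC penalty (requires gamma > 1)."""
        return np.where(
            np.abs(x) <= lam,
            0.0,  # zero out small entries (promotes sparsity)
            np.where(
                np.abs(x) <= gamma * lam,
                gamma * np.sign(x) * (np.abs(x) - lam) / (gamma - 1.0),
                x,  # pass large entries through unchanged (no bias)
            ),
        )

    def prox_grad_step(w, A, b, lam, gamma, step):
        """One proximal gradient iteration for (1/2)||Aw - b||^2 + MC penalty."""
        grad = A.T @ (A @ w - b)  # gradient of the smooth data-fit term
        # prox of step * MCP(lam, gamma) is firm shrinkage with the
        # rescaled parameters (step * lam, gamma / step).
        return firm_shrinkage(w - step * grad, step * lam, gamma / step)

    # Toy usage: recover a sparse vector from noisy linear measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    w_true = np.zeros(100)
    w_true[:5] = 1.0
    b = A @ w_true + 0.01 * rng.standard_normal(50)

    w = np.zeros(100)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(500):
        w = prox_grad_step(w, A, b, lam=0.1, gamma=2.0, step=step)

Because the MC penalty is nonconvex, such a plain iteration is only guaranteed to reach a stationary point in general; the role of the CPP in the paper is precisely to convexify the overall cost so that convergence to a global minimizer can be guaranteed.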
