Abstract

In this paper, we develop a variant of the well-known Gauss-Newton (GN) method to solve a class of nonconvex optimization problems involving low-rank matrix variables. In contrast to the standard GN method, our algorithm can handle general smooth convex objective functions. We show, under mild conditions, that the proposed algorithm converges both globally and locally to a stationary point of the original problem, and we demonstrate empirically that it achieves more accurate solutions than the alternating minimization algorithm (AMA). We then specialize our GN scheme to the symmetric case, where AMA is not applicable, and prove its convergence. Next, we incorporate our GN scheme into the alternating direction method of multipliers (ADMM) to develop an ADMM-GN algorithm. We prove that, under mild conditions and a proper choice of the penalty parameter, ADMM-GN converges globally to a stationary point of the original problem. Finally, we provide several numerical experiments illustrating that the proposed algorithms perform encouragingly compared to existing methods.
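For a concrete picture of the idea the paper builds on, the sketch below shows one classical Gauss-Newton step for the special least-squares case f(X) = (1/2)||U V^T - M||_F^2 under the low-rank parameterization X = U V^T: the bilinear residual r(U, V) = U V^T - M is linearized in a joint direction (dU, dV), and the resulting linear least-squares problem is solved. This is only a minimal illustration under that quadratic assumption, not the paper's algorithm (which handles general smooth convex objectives and carries the convergence guarantees summarized above); the function name gn_step and the dense column-by-column Jacobian construction are our own expository choices.

```python
import numpy as np

def gn_step(U, V, M):
    """One classical Gauss-Newton step for min_{U,V} 0.5*||U V^T - M||_F^2.

    Linearizes the bilinear residual r(U, V) = U V^T - M in the joint
    direction (dU, dV) and solves the resulting linear least-squares
    problem. np.linalg.lstsq returns the minimum-norm solution, which
    matters here: the factorization U V^T is not unique, so the Jacobian
    is rank-deficient.
    """
    m, r = U.shape
    n = V.shape[0]
    E = (U @ V.T - M).ravel()            # current residual, vectorized

    # Build the Jacobian of vec(r) column by column from the exact
    # directional derivative of the bilinear map: Dr[dU, dV] = dU V^T + U dV^T.
    cols = []
    for k in range(r):                   # columns for the entries of dU
        for i in range(m):
            dU = np.zeros((m, r)); dU[i, k] = 1.0
            cols.append((dU @ V.T).ravel())
    for k in range(r):                   # columns for the entries of dV
        for j in range(n):
            dV = np.zeros((n, r)); dV[j, k] = 1.0
            cols.append((U @ dV.T).ravel())
    J = np.column_stack(cols)

    d, *_ = np.linalg.lstsq(J, -E, rcond=None)
    dU = d[: m * r].reshape((m, r), order="F")
    dV = d[m * r:].reshape((n, r), order="F")
    return U + dU, V + dV

# Toy usage: refine a slightly perturbed factorization of a rank-2 matrix.
# Starting near a zero-residual solution, GN converges locally and rapidly.
rng = np.random.default_rng(0)
Ut, Vt = rng.standard_normal((6, 2)), rng.standard_normal((5, 2))
M = Ut @ Vt.T
U = Ut + 0.1 * rng.standard_normal(Ut.shape)
V = Vt + 0.1 * rng.standard_normal(Vt.shape)
for _ in range(10):
    U, V = gn_step(U, V, M)
print(np.linalg.norm(U @ V.T - M))       # near machine precision
```

The dense Jacobian is only practical for small dimensions; it is written this way to make the linearization explicit, not for efficiency.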
