Abstract

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.

Highlights

  • Modern information processing and machine learning often have to deal with low-rank matrix factorization

  • Several problems provably enjoy a benign optimization landscape when the sample size is sufficiently large, in the sense that there are no spurious local minima (i.e., all local minima are global minima) and the only undesired stationary points are strict saddle points [28]–[32]. These important messages have inspired a recent flurry of activity in the design of two contrasting algorithmic approaches. Motivated by the existence of a basin of attraction, a large number of works follow a two-stage paradigm: (1) initialization, which locates an initial guess within the basin; (2) iterative refinement, which successively refines the estimate without leaving the basin (see the illustrative sketch following these highlights)

  • This problem stems from interpreting principal component analysis (PCA) from an optimization perspective, which has a long history in the literature of neural networks and unsupervised learning; see for example [36]–[41]
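To make the two-stage paradigm concrete, the following is a minimal sketch, not the paper's exact algorithm, applied to symmetric matrix sensing (one of the canonical problems formulated below). Stage 1 forms a spectral initialization from the measurements; stage 2 refines it with vanilla gradient descent on the nonconvex least-squares objective. The problem sizes, step size `eta`, and iteration count are illustrative assumptions.

```python
import numpy as np

# Two-stage paradigm on symmetric matrix sensing: observe linear
# measurements y_i = <A_i, X* X*^T> and recover the rank-r factor X*.
rng = np.random.default_rng(0)
n, r, m = 30, 2, 1500
X_star = rng.standard_normal((n, r))
M_star = X_star @ X_star.T

A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2            # symmetric sensing matrices
y = np.einsum('kij,ij->k', A, M_star)         # linear measurements

# Stage 1: spectral initialization. The top-r eigenpairs of the surrogate
# Y = (1/m) sum_i y_i A_i land the initial guess in a basin of attraction.
Y = np.einsum('k,kij->ij', y, A) / m
eigvals, eigvecs = np.linalg.eigh(Y)          # eigenvalues in ascending order
X = eigvecs[:, -r:] * np.sqrt(np.maximum(eigvals[-r:], 0))

# Stage 2: iterative refinement by gradient descent on
# f(X) = (1/4m) sum_i (<A_i, X X^T> - y_i)^2, without leaving the basin.
eta = 0.2 / np.linalg.norm(Y, 2)              # illustrative step size
for _ in range(300):
    resid = np.einsum('kij,ij->k', A, X @ X.T) - y
    X -= eta * (np.einsum('k,kij->ij', resid, A) @ X) / m

# Relative estimation error of the recovered low-rank matrix
print(np.linalg.norm(X @ X.T - M_star) / np.linalg.norm(M_star))
```

In the noiseless warm-up case where the matrix M itself is observed, stage 1 already returns a global minimizer by the Eckart–Young theorem and stage 2 merely preserves it; the refinement stage becomes essential once the observations are indirect, noisy, or incomplete.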

Summary

INTRODUCTION

Modern information processing and machine learning often have to deal with (structured) low-rank matrix factorization. A common goal of these problems is to develop reliable, scalable, and robust algorithms to estimate a low-rank matrix of interest from potentially noisy, nonlinear, and highly incomplete observations.
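To illustrate what "noisy, nonlinear, and highly incomplete" can look like, the snippet below (using illustrative notation of our own, not the paper's) generates three kinds of observations of the same low-rank ground truth that recur throughout the article: noisy linear measurements (matrix sensing), magnitude-only quadratic measurements (phase retrieval), and a random subset of entries (matrix completion).

```python
import numpy as np

# Three canonical observation models for a rank-r ground truth M = X* X*^T.
rng = np.random.default_rng(1)
n, r, m = 20, 2, 100
X_star = rng.standard_normal((n, r))
M = X_star @ X_star.T

# 1) Matrix sensing: noisy linear measurements y_i = <A_i, M> + noise.
A = rng.standard_normal((m, n, n))
y_sensing = np.einsum('kij,ij->k', A, M) + 0.01 * rng.standard_normal(m)

# 2) Phase retrieval / quadratic sensing: nonlinear, magnitude-only
#    measurements y_i = |<a_i, x*>|^2 of a rank-1 ground truth x*.
x_star = X_star[:, 0]
a = rng.standard_normal((m, n))
y_phase = (a @ x_star) ** 2

# 3) Matrix completion: highly incomplete -- each entry of M is revealed
#    independently with probability p; unobserved entries are marked NaN.
p = 0.2
mask = rng.random((n, n)) < p
Y_completion = np.where(mask, M, np.nan)
```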

Optimization-Based Methods
Nonconvex Optimization Meets Statistical Models
This Paper
Notations
PRELIMINARIES IN OPTIMIZATION THEORY
Gradient Descent for Locally Strongly Convex Functions
Convergence Under Regularity Conditions
Critical Points
A WARM-UP EXAMPLE
Local Linear Convergence of Gradient Descent
Global Optimization Landscape
FORMULATIONS OF A FEW CANONICAL PROBLEMS
Matrix Sensing
Phase Retrieval and Quadratic Sensing
Matrix Completion
LOCAL REFINEMENT VIA GRADIENT DESCENT
Computational Analysis via Strong Convexity and Smoothness
Improved Computational Guarantees via Restricted Geometry and Regularization
The Phenomenon of Implicit Regularization
VARIANTS OF GRADIENT DESCENT
Projected Gradient Descent
Truncated Gradient Descent
Generalized Gradient Descent
Gradient Descent on Manifolds
BEYOND GRADIENT METHODS
Alternating Minimization
Singular Value Projection
Further Pointers to Other Algorithms
INITIALIZATION VIA SPECTRAL METHODS
Preliminaries
Spectral Methods
Variants of Spectral Methods
Precise Asymptotic Characterization and Phase Transitions for Phase Retrieval
Global Landscape Analysis
Gradient Descent With Random Initialization
Generic Saddle-Escaping Algorithms
CONCLUDING REMARK
