Abstract
In the era of big data, mathematical optimization has experienced a paradigm shift driven by the arrival of vast amounts of data in real time. The need to solve linear optimization problems is central to most quantitative areas, such as Scientific Computing, Machine Learning, Signal Processing, Operations Research, and Computer Vision. Over the last decade, practitioners have increasingly demanded computational frameworks that can handle data at this scale. While most classical optimization methods are deterministic in nature, recent developments suggest that stochastic methods play a significant role in the design of efficient algorithms that outperform existing deterministic methods on high-dimensional problems. The broader goal of this thesis is to design scalable stochastic algorithms for solving large-scale convex optimization problems. Our main focus is on the design, analysis, and implementation of such algorithms; in particular, we develop stochastic iterative methods for solving large-scale linear systems, linear feasibility problems, and linear programs.

In Chapter 2, we present the Stochastic Steepest Descent (SSD) framework, which connects the Sketch & Project method with the Steepest Descent method, and we propose a momentum variant of the SSD method. We then develop two greedy sampling strategies rooted in Kaczmarz-Motzkin sampling and capped sampling, respectively. We provide convergence results for the proposed SSD method as well as the momentum SSD method, establishing global convergence for a wide range of projection and momentum parameters. From these results, one can recover the convergence guarantees of well-known methods such as Steepest Descent, the Kaczmarz method, the Motzkin method, and Coordinate Descent. We also show that, under mild conditions, the Cesàro average of the iterates generated by SSD and momentum SSD enjoys a sub-linear convergence rate. We design computational experiments to demonstrate the performance of the proposed greedy sampling methods as well as the momentum methods.

In Chapter 3, we propose three variants of the Sampling Kaczmarz-Motzkin (SKM) algorithm for solving linear feasibility problems: (1) Generalized SKM (GSKM), (2) Probably Accelerated SKM (PASKM), and (3) Momentum SKM (MSKM). The GSKM method applies an over-relaxation technique to the SKM algorithm, the PASKM method incorporates the well-known Nesterov Accelerated Gradient (NAG) technique into the SKM method, and the MSKM method integrates heavy ball momentum into the SKM algorithm. We prove global, non-asymptotic linear convergence rates for all of these methods, as well as sub-linear rates for the Cesàro average of the iterates. Whenever the system is feasible, we obtain an upper bound on the probability of finding a certificate of feasibility for the PASKM and MSKM algorithms. We then back up the theoretical results with thorough numerical experiments on artificial and real datasets.
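To make the flavor of these methods concrete, the following is a minimal sketch of an SKM-style iteration with heavy ball momentum for a feasibility system Ax <= b. It assumes dense NumPy arrays and nonzero rows of A, and the parameter names (beta for the sample size, delta for the projection parameter, gamma for the momentum weight) are illustrative rather than the thesis's notation; it is not the exact MSKM algorithm analyzed in Chapter 3.

import numpy as np

def mskm(A, b, x0, beta=50, delta=1.0, gamma=0.3, max_iter=10000, tol=1e-8):
    # Hypothetical sketch of an SKM-type iteration with heavy ball momentum
    # for the feasibility system Ax <= b; all parameter names are illustrative.
    m, _ = A.shape
    rng = np.random.default_rng(0)
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(max_iter):
        if np.max(A @ x - b) <= tol:       # all constraints (nearly) satisfied
            break
        S = rng.choice(m, size=min(beta, m), replace=False)  # uniform row sample
        i = S[np.argmax(A[S] @ x - b[S])]  # most violated sampled row (Motzkin rule)
        v = max(A[i] @ x - b[i], 0.0)      # positive residual; zero if satisfied
        # Relaxed projection onto the half-space {z : A[i] z <= b[i]},
        # plus the heavy ball term gamma * (x_k - x_{k-1}).
        x_next = x - delta * (v / np.dot(A[i], A[i])) * A[i] + gamma * (x - x_prev)
        x_prev, x = x, x_next
    return x

In this sketch, gamma = 0 and delta = 1 reduce to a plain SKM update, while sample sizes of 1 and m correspond to randomized Kaczmarz and Motzkin selection, respectively, mirroring the special cases that the convergence theory recovers.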
In Chapter 4, we propose a Sketch & Project (SP) framework for solving linear feasibility problems that unifies existing randomized iterative methods. We propose two greedy sampling techniques that generalize the available sampling strategies and yield efficient algorithmic variants of the SP method for the linear feasibility problem. Furthermore, we introduce the heavy ball momentum scheme into the proposed greedy SP method to improve its efficiency. We establish global linear rates for both methods, and we obtain a so-called certificate of feasibility result for the proposed momentum SP method. To measure the performance of the proposed algorithms, we carry out comprehensive numerical experiments on randomly generated test instances as well as on sparse real-world test instances.

In Chapter 5, we present two primal Affine Scaling (AFS) algorithms that achieve faster convergence in solving Linear Programming problems. In the first algorithm, we integrate the heavy ball momentum strategy into the primal AFS method. We then introduce a second algorithm that further accelerates the convergence of the first by integrating the Shanks non-linear series transformation technique. For both the momentum and accelerated AFS variants, we provide convergence results for the primal and dual sequences without the degeneracy assumption. We carry out computational experiments to corroborate the theoretical findings.
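As a small illustration of the acceleration idea in Chapter 5, the sketch below applies the classical Shanks transformation to a slowly convergent scalar sequence. The formula and the Leibniz-series example are standard textbook material assumed here for illustration; the thesis applies the transformation to the sequences produced by the AFS iteration rather than to this toy series.

import numpy as np

def shanks(seq):
    # Classical Shanks transformation:
    #   S(x_k) = (x_{k+1} * x_{k-1} - x_k^2) / (x_{k+1} - 2*x_k + x_{k-1}),
    # a standard extrapolation that often accelerates linearly convergent sequences.
    x = np.asarray(seq, dtype=float)
    num = x[2:] * x[:-2] - x[1:-1] ** 2
    den = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return num / den

# Toy usage: partial sums of the Leibniz series converge to pi/4 very slowly;
# one Shanks pass gives a markedly better estimate from the same ten terms.
partial = np.cumsum([(-1) ** k / (2 * k + 1) for k in range(10)])
print(4 * partial[-1])          # about 3.0418
print(4 * shanks(partial)[-1])  # about 3.1412, much closer to pi

A single pass of the transformation already removes most of the error of the truncated alternating series, which illustrates the kind of extrapolation effect the accelerated AFS variant builds on.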