Abstract
Many problems in machine learning can be cast as convex optimization problems whose objective is a loss function. This paper examines two weak relaxed greedy algorithms for solving convex optimization problems over convex hulls of atomic sets. Such problems arise as natural convex relaxations of cardinality-constrained problems, many of which are well known to be NP-hard. Both algorithms use one atom from a dictionary per iteration and therefore guarantee a prescribed sparsity of the approximate solutions. Each iteration employs a so-called ‘gradient greedy step’ that maximizes a linear functional built from the gradient at the point obtained in the previous iteration. Both algorithms are ‘weak’ in the sense that they solve the linear subproblem at the gradient greedy step only approximately; in addition, the second algorithm uses an approximate solution at the line-search step. Following the ideas of [5], we introduce the notion of the duality gap, whose values are computed at the gradient greedy step of each iteration and are therefore inherent upper bounds on the primal errors, i.e. the differences between the value of the objective at the current point and at an optimal point. We obtain dual convergence estimates for both weak relaxed greedy algorithms.
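To illustrate the kind of iteration the abstract describes, the sketch below implements a generic weak relaxed greedy (Frank–Wolfe-type) method over the convex hull of a finite atomic dictionary. It is only a minimal illustration under stated assumptions: the function and atom names, the `weakness` parameter used to model the approximate gradient greedy step, and the crude grid line search are all illustrative and are not the paper's algorithms or notation.

```python
import numpy as np

def weak_relaxed_greedy(f, grad_f, atoms, n_iter=200, weakness=0.8, tol=1e-10):
    """Minimal sketch of a weak relaxed greedy method over conv(atoms).

    `atoms` is an (n, d) array whose rows are the dictionary elements.
    `weakness` in (0, 1] models solving the gradient greedy step only
    approximately; all names here are illustrative, not the paper's notation.
    """
    rng = np.random.default_rng(0)
    x = atoms[0].copy()          # start at an atom, so iterates stay in conv(atoms)
    gap = np.inf
    for _ in range(n_iter):
        g = grad_f(x)
        # Gradient greedy step: evaluate the linear functional <g, x - s>
        # over all atoms s; its maximum is the duality gap, an upper bound
        # on the primal error f(x) - min f over conv(atoms).
        scores = (x - atoms) @ g
        gap = float(scores.max())
        if gap <= tol:
            break
        # 'Weak' oracle: accept any atom achieving at least weakness * gap.
        candidates = np.flatnonzero(scores >= weakness * gap)
        s = atoms[rng.choice(candidates)]
        # Relaxed update via a crude line search over the segment [x, s]
        # (the paper's second algorithm allows this step to be approximate too).
        ts = np.linspace(0.0, 1.0, 101)
        t = ts[int(np.argmin([f((1 - t) * x + t * s) for t in ts]))]
        x = (1 - t) * x + t * s
    return x, gap
```

Because each update adds at most one new atom to the convex combination, after k iterations the iterate is supported on at most k + 1 atoms, which is the sparsity guarantee the abstract refers to; the returned `gap` is the computable upper bound on the primal error.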