Abstract

In this manuscript, we consider the problem of minimizing a smooth function with a cardinality constraint, i.e., the constraint requiring that the ℓ0-norm of the vector of variables cannot exceed a given threshold value. A well-known approach in the literature is the class of penalty decomposition (PD) methods, where a sequence of penalty subproblems, depending on the original variables and on new variables, is inexactly solved by a two-block decomposition method. The inner iterates of the decomposition method require exact minimizations with respect to the two blocks of variables. Computing the global minimum with respect to the original variables may be prohibitive when the objective function is nonconvex. To overcome this nontrivial issue, we propose a modified penalty decomposition method, where the exact minimizations with respect to the original variables are replaced by suitable line searches along gradient-related directions. We also present a derivative-free penalty decomposition algorithm for black-box optimization. We state convergence results for the proposed methods, and we report the results of preliminary computational experiments.
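The Armijo-type line search that replaces the exact minimization over the original variables can be illustrated with a generic backtracking sketch. This is not the paper's exact procedure: the function name, the parameters `gamma` and `delta`, and the quadratic example below are illustrative choices of ours.

```python
import numpy as np

def armijo_line_search(f, grad_f, x, d, alpha0=1.0, gamma=1e-4, delta=0.5):
    """Backtracking (Armijo-type) line search along a descent direction d.

    Returns a step size alpha satisfying the sufficient-decrease condition
        f(x + alpha * d) <= f(x) + gamma * alpha * grad_f(x).dot(d).
    Assumes d is gradient-related, i.e. grad_f(x).dot(d) < 0.
    """
    fx = f(x)
    slope = grad_f(x).dot(d)  # negative for a descent direction
    alpha = alpha0
    while f(x + alpha * d) > fx + gamma * alpha * slope:
        alpha *= delta  # halve the step until sufficient decrease holds
    return alpha

# Example: quadratic f(x) = ||x||^2 with the steepest-descent direction
f = lambda x: float(x.dot(x))
grad = lambda x: 2.0 * x
x = np.array([1.0, -2.0])
d = -grad(x)
alpha = armijo_line_search(f, grad, x, d)
```

Because the accepted step only guarantees sufficient decrease rather than exact minimization, each inner iteration stays cheap even when the objective is nonconvex.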

Highlights

  • We consider the problem of minimizing a smooth function with a sparsity constraint

  • Applications of sparse optimization include compressed sensing in signal processing [1,2], best subset selection [3,4,5,6] and sparse inverse covariance estimation [7,8] in statistics, sparse portfolio selection [9] in decision science, and neural network compression in machine learning [10,11]

  • We focus on the approach of the penalty decomposition (PD) methods and we present two contributions: (a) the definition of a PD algorithm performing inexact minimizations by an Armijo-type line search [19] along gradient-related directions; (b) the definition of a derivative-free PD method for sparse black-box optimization


Summary

Introduction

We consider the problem of minimizing a smooth function with a sparsity (cardinality) constraint. Useful algorithms designed to deal with cardinality-constrained optimization problems are the greedy sparse simplex method [17] and the class of penalty decomposition (PD) methods [18]. These methods, based on different approaches, enjoy theoretical convergence properties and are computationally efficient in the solution of cardinality-constrained problems. They require exactly solving suitable subproblems at each iteration (of dimension 1 in the case of the greedy sparse simplex method, and of dimension n for PD methods). The aim of the present work is to tackle cardinality-constrained problems by defining convergent algorithms that do not require computing the exact solution of (possibly nonconvex) subproblems. To this aim, we focus on the approach of the PD methods and we present two contributions: (a) a PD algorithm performing inexact minimizations by an Armijo-type line search [19] along gradient-related directions; (b) a derivative-free PD method for sparse black-box optimization.
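The two-block PD scheme can be sketched as follows, assuming the standard penalty formulation min f(x) + tau/2 ||x - y||^2 subject to ||y||_0 ≤ s. The sketch replaces the exact x-minimization with a single gradient step (a stand-in for the paper's line search along a gradient-related direction), while the y-update uses the exact projection onto the cardinality set, which keeps the s largest-magnitude entries. All parameter names (`lip`, `tau0`, `rho`, the iteration counts) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def project_cardinality(z, s):
    """Projection onto {y : ||y||_0 <= s}: keep the s largest-magnitude entries."""
    y = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    y[idx] = z[idx]
    return y

def inexact_pd(f, grad_f, x0, s, lip=1.0, tau0=1.0, n_outer=15, n_inner=100, rho=1.5):
    """Sketch of an inexact penalty decomposition loop (hypothetical parameters).

    Penalty subproblem: min_{x,y} f(x) + tau/2 ||x - y||^2  s.t.  ||y||_0 <= s.
    x-block: one gradient step (stand-in for a line search along a
    gradient-related direction); y-block: exact projection.
    lip is an estimate of the Lipschitz constant of grad_f.
    """
    x = x0.copy()
    y = project_cardinality(x, s)
    tau = tau0
    for _ in range(n_outer):
        step = 1.0 / (lip + tau)  # safe step size for the smooth penalty term
        for _ in range(n_inner):
            g = grad_f(x) + tau * (x - y)   # gradient of the penalty in x
            x = x - step * g                # inexact x-update
            y = project_cardinality(x, s)   # exact y-update
        tau *= rho                          # increase the penalty parameter
    return project_cardinality(x, s)

# Example: f(x) = ||x - c||^2 with a 1-sparse solution at [3, 0, 0]
c = np.array([3.0, 1.0, 0.5])
f = lambda x: float(np.sum((x - c) ** 2))
grad = lambda x: 2.0 * (x - c)
x_hat = inexact_pd(f, grad, np.zeros(3), s=1, lip=2.0)
```

As tau grows across the outer iterations, x is driven toward the sparse block y, which is the mechanism behind the convergence of PD-type schemes.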

Background
The Projection onto the Feasible Set
The Penalty Decomposition Method
An Inexact Penalty Decomposition Method
A Derivative-Free Penalty Decomposition Method
Preliminary Computational Experiments
Findings
Conclusions
