Abstract

This paper considers a class of sparse optimization problems with $l_1$-norm regularization and general convex constraints, in which all functions involved are differentiable except for the $l_1$ regularization term. Firstly, a necessary and sufficient condition for the subgradients of the $l_1$-norm is discussed, and from it a necessary and sufficient optimality condition for the considered problem is derived. Based on this condition, a simple neural network with a differential-equation structure is proposed. Secondly, positive invariance and exponential convergence of the state trajectory to the set defined by the equality constraints are established; in addition, the intermediate state variable remains non-negative whenever its initial value is non-negative. Moreover, boundedness, global existence, and stability in the sense of Lyapunov of the state solution of the proposed neural network are guaranteed. Thirdly, the proposed network converges globally to an optimal solution of the considered problem from any initial point. Finally, extensive experiments, including two numerical examples as well as signal recovery, data classification, and image restoration problems on real data sets, demonstrate the efficiency of this approach.
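For concreteness, one standard form of such a problem, consistent with the abstract's description, is the following sketch (the smooth objective $f$, regularization weight $\lambda > 0$, linear equality constraint $Ax = b$, and closed convex set $\Omega$ are illustrative assumptions, not notation taken from the paper):
\[
\min_{x \in \mathbb{R}^n} \; f(x) + \lambda \|x\|_1
\quad \text{subject to} \quad Ax = b, \;\; x \in \Omega .
\]
The nonsmooth term is handled through its subdifferential, whose standard characterization (the object the abstract's subgradient condition concerns) is
\[
\partial \|x\|_1 = \left\{ g \in \mathbb{R}^n : \; g_i = \operatorname{sign}(x_i) \text{ if } x_i \neq 0, \;\; g_i \in [-1, 1] \text{ if } x_i = 0 \right\}.
\]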
