Abstract

As is well known, the proximal iterative method can be used to solve the lasso of Tibshirani (J. R. Stat. Soc., Ser. B 58:267-288, 1996). In this paper, we first propose a modified proximal iterative method based on the viscosity approximation method to obtain strong convergence; we then apply this method to solve the lasso.

Highlights

  • The lasso of Tibshirani [ ] is formulated as the minimization problem min_{x∈ℝⁿ} ‖Ax − b‖₂² subject to ‖x‖₁ ≤ t, where A is an m × n matrix, b ∈ ℝᵐ, and t ≥ 0 is a tuning parameter

  • As the ℓ₁ norm promotes the sparsity that occurs in practical problems such as image/signal processing, machine learning, and so on, the lasso has received much attention in recent years

  • It is proved that the algorithm we propose converges strongly


Summary

Introduction

The lasso of Tibshirani [ ] is formulated as the minimization problem min_{x∈ℝⁿ} ‖Ax − b‖₂² subject to ‖x‖₁ ≤ t, where A is an m × n (real) matrix, b ∈ ℝᵐ, and t ≥ 0 is a tuning parameter. Xu [ ] exploited the proximal algorithm x_{n+1} = prox_{λ_n g}((I − λ_n ∇f) x_n) to solve the lasso; iterative algorithms of this kind obtain only weak convergence. The sequence generated by our modified algorithm converges strongly to a fixed point x* of T, which is the unique solution of the variational inequality ⟨(I − f)x*, x − x*⟩ ≥ 0 for all x ∈ Fix(T).
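To make the proximal iteration concrete, the following is a minimal sketch of the standard proximal gradient (ISTA) scheme for the penalized form of the lasso, min ½‖Ax − b‖² + λ‖x‖₁, which is equivalent to the constrained formulation above for a suitable λ. Here prox of the ℓ₁ term is componentwise soft-thresholding. The function names, the fixed step-size choice 1/L, and the stopping rule are illustrative assumptions; this is the weakly convergent baseline iteration, not the paper's viscosity-modified method.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    # Proximal gradient for min 0.5*||Ax - b||^2 + lam*||x||_1.
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    # of the gradient of the smooth part f(x) = 0.5*||Ax - b||^2.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step
    return x
```

For A = I the iteration reduces to a single soft-thresholding of b, which matches the closed-form lasso solution in that special case; this is a quick sanity check on the implementation.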


