Abstract

In this paper, we extend the APG method to solve the matrix l_{2,1}-norm minimization problem arising in multi-task feature learning. We show that the resulting inner subproblem has a closed-form solution, which can be easily determined by exploiting the problem's favorable structure. Under suitable conditions, we establish a comprehensive convergence result for the proposed method. Furthermore, we present three different inexact APG algorithms, using the Lipschitz constant, the eigenvalues of the Hessian matrix, and the Barzilai–Borwein parameter in the inexact model, respectively. Numerical experiments on simulated data and a real data set are reported to show the efficiency of the proposed methods.

Highlights

  • Consider the following matrix l2,1-norm minimization problem
    min_{X ∈ R^{n×t}} (1/2)∥AX − b∥₂² + μ∥X∥_{2,1},  (1)
    where the matrix l2,1-norm ∥X∥_{2,1} is defined as the sum of the l2-norms of the rows of X: ∥X∥_{2,1} = Σᵢ₌₁ⁿ ∥xⁱ∥₂.
    (ISSN 2310-5070, ISSN 2311-004X; Copyright © 2014 International Academic Press)

  • In this paper, we extend the implementable accelerated proximal gradient (APG) method to solve the matrix l2,1-norm minimization problem arising in multi-task feature learning

  • We show that the resulting inner subproblem has a closed-form solution, which can be determined by exploiting the problem's favorable structure
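As an illustration of problem (1) (this sketch is not code from the paper; the function names and the choice of b as a matrix-valued right-hand side are assumptions), the objective combining the least-squares data-fit term and the row-wise l2,1-norm regularizer can be evaluated as:

```python
import numpy as np

def l21_norm(X):
    """Matrix l2,1-norm: the sum of the l2-norms of the rows of X."""
    return np.sum(np.linalg.norm(X, axis=1))

def objective(A, X, b, mu):
    """Objective of problem (1): 0.5 * ||A X - b||_2^2 + mu * ||X||_{2,1}.

    In multi-task feature learning, b may be matrix-valued (one column
    per task), in which case the first term is a squared Frobenius norm.
    """
    residual = A @ X - b
    return 0.5 * np.sum(residual ** 2) + mu * l21_norm(X)
```

For example, with A the identity, b zero, and a single nonzero row [3, 4] in X, the data-fit term is 12.5 and the l2,1-norm term contributes 5 per unit of μ.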


Summary

Introduction

The matrix l2,1-norm ∥X∥2,1 is defined as the sum of the l2-norms of the rows of X. The extension of the APG algorithm to the matrix l2,1-norm minimization problem in multi-task feature learning is interesting from a practical perspective, because it offers a computational advantage over alternative algorithms for solving problem (1). We solve the matrix l2,1-norm minimization problem using inexact APG algorithms, establish their iteration complexities, and present numerical results to demonstrate the efficiency of the proposed algorithms. We present inexact APG versions with three different choices of the self-adjoint positive definite linear operator Hk. In Section 4 we conduct preliminary numerical experiments to evaluate the practical performance of the proposed inexact APG algorithms on matrix l2,1-norm minimization problems arising from simulated data and a real data set, and compare them with the existing IADM-MFL method.
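The closed-form solution of the inner subproblem is, in the standard unweighted case (Hk a multiple of the identity), the well-known proximal operator of the l2,1-norm, which reduces to row-wise soft thresholding. A minimal sketch (the function name is an assumption, not from the paper):

```python
import numpy as np

def prox_l21(Y, tau):
    """Proximal operator of tau * ||.||_{2,1}: row-wise soft thresholding.

    Each row y_i of Y is shrunk toward zero:
        x_i = max(1 - tau / ||y_i||_2, 0) * y_i,
    so rows with l2-norm below tau are set exactly to zero, which is
    what produces row-sparse solutions in multi-task feature learning.
    """
    row_norms = np.linalg.norm(Y, axis=1, keepdims=True)
    # Guard against division by zero for all-zero rows.
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return scale * Y
```

For instance, with tau = 1, a row [3, 4] (norm 5) is scaled by 0.8 to [2.4, 3.2], while a row [0.1, 0] (norm 0.1 < tau) is set to zero.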

An accelerated proximal gradient method
Convergence analysis
Choices of Hk
Numerical results
Simulated data
Real data
Findings
Conclusion

