Abstract
The goal of multi-task feature selection is to learn explanatory features shared across multiple related tasks. In this paper, we develop a weighted feature selection model that enhances the sparsity of the learning variables and propose an online algorithm to solve it. The worst-case time complexity and memory cost of this algorithm at each iteration are both $\mathcal{O}(N \times Q)$, where $N$ is the number of feature dimensions and $Q$ is the number of tasks. At each iteration, the learning variables are solved analytically from a memory of the previous subgradients and the whole weighted regularization term, and the weight coefficients used in the next iteration are updated from the current solution. We present a theoretical analysis of the regret bound of the proposed algorithm, along with experiments on public datasets demonstrating that it yields better performance, e.g., in terms of convergence speed and sparsity.
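The per-iteration update described above resembles a dual-averaging scheme with a reweighted group-sparse regularizer, where each row of the $N \times Q$ variable matrix (one row per feature, one column per task) admits a closed-form soft-thresholding solution. The following is a minimal sketch of such an update, not the paper's exact method: the step function `rda_weighted_mtfs_step`, the weight rule `1/(||row|| + eps)`, and the parameters `lam` and `gamma` are illustrative assumptions, since the abstract does not specify the concrete formulas.

```python
import numpy as np

def rda_weighted_mtfs_step(G_bar, c, t, lam=0.1, gamma=1.0):
    """One dual-averaging-style update for weighted multi-task feature selection.

    Illustrative sketch (not the paper's exact update rule):
      G_bar : (N, Q) running average of the subgradients seen so far
      c     : (N,) per-feature regularization weights
      t     : iteration counter (>= 1)
    Each row is shrunk toward zero by a weighted group soft-threshold,
    so the cost of the step is O(N * Q), matching the stated bound.
    """
    row_norms = np.linalg.norm(G_bar, axis=1, keepdims=True)       # (N, 1)
    shrink = np.maximum(0.0, 1.0 - lam * c[:, None]
                        / np.maximum(row_norms, 1e-12))            # rowwise factor
    return -(np.sqrt(t) / gamma) * shrink * G_bar                  # closed form

def update_weights(W, eps=1e-6):
    """Reweighting heuristic (assumed): small rows get larger weights,
    so features that look irrelevant are penalized more next iteration."""
    return 1.0 / (np.linalg.norm(W, axis=1) + eps)
```

A typical loop would average the subgradients of the per-task losses into `G_bar`, call `rda_weighted_mtfs_step` to obtain the new solution analytically, and then call `update_weights` to produce the coefficients for the next iteration. Rows whose averaged subgradient norm falls below `lam * c[i]` are set exactly to zero, which is the mechanism behind the enhanced sparsity.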