Metafeature Selection via Multivariate Sparse-Group Lasso Learning for Automatic Hyperparameter Configuration Recommendation.

The performance of classification algorithms is largely governed by their hyperparameter settings, and the search for desirable hyperparameter configurations is usually quite challenging due to the complexity of datasets. Metafeatures are a group of measures that characterize the underlying dataset from various aspects, and the corresponding recommendation algorithm relies heavily on an appropriate selection of metafeatures. Metalearning (MtL), which aims to improve the learning algorithm itself, requires the integration of features, models, and algorithm learning to accomplish this goal. In this article, we develop a multivariate sparse-group Lasso (SGLasso) model embedded with MtL capacity that learns to recommend suitable configurations. The main idea is to select the principal metafeatures by removing redundant or irregular ones, promoting both efficiency and performance in hyperparameter configuration recommendation. Specifically, we first extract the metafeatures and the classification performance of a set of configurations from a collection of historical datasets; a metaregression task is then established through SGLasso to capture the main characteristics of the relationship between metafeatures and historical performance. For a new dataset, the classification performance of each configuration can be estimated from the selected metafeatures, so that the configuration with the highest predicted performance on the new dataset can be recommended. Furthermore, a general MtL architecture combined with our model is developed. Extensive experiments on 136 UCI datasets demonstrate the effectiveness of the proposed approach. Empirical results on the well-known SVM show that our model effectively recommends suitable configurations and outperforms existing MtL-based methods as well as well-known search-based algorithms such as random search, Bayesian optimization, and Hyperband.
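A minimal sketch of the metaregression-based recommendation workflow described in the abstract, under assumed data shapes and synthetic data. scikit-learn's MultiTaskLasso is used here as a stand-in for the paper's multivariate sparse-group Lasso (the actual SGLasso additionally imposes a group penalty over metafeature groups); all variable names and sizes are illustrative.

    # Sketch: predict per-configuration performance from dataset metafeatures,
    # select informative metafeatures via a sparse penalty, and recommend the
    # configuration with the highest predicted performance on a new dataset.
    import numpy as np
    from sklearn.linear_model import MultiTaskLasso

    rng = np.random.default_rng(0)
    n_hist, n_meta, n_cfg = 100, 40, 25          # historical datasets, metafeatures, configurations

    M_hist = rng.normal(size=(n_hist, n_meta))   # metafeature matrix of historical datasets
    P_hist = rng.uniform(size=(n_hist, n_cfg))   # observed accuracy of each configuration

    # One multivariate sparse regression: metafeatures -> per-configuration performance.
    # The sparsity-inducing penalty zeroes out uninformative metafeatures
    # (the "metafeature selection" step in the abstract).
    model = MultiTaskLasso(alpha=0.05).fit(M_hist, P_hist)
    selected = np.flatnonzero(np.linalg.norm(model.coef_, axis=0) > 1e-8)
    print("selected metafeatures:", selected)

    # For a new dataset, estimate the performance of every configuration
    # from its metafeatures and recommend the argmax.
    m_new = rng.normal(size=(1, n_meta))
    p_hat = model.predict(m_new).ravel()
    print("recommended configuration index:", int(np.argmax(p_hat)))

In practice the metafeatures would be computed from the new dataset (e.g., statistical and information-theoretic measures) and the performance matrix would come from evaluating a fixed grid of hyperparameter configurations on the historical datasets.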

Bayesian Dictionary Learning on Robust Tubal Transformed Tensor Factorization.

Recent work on the tensor singular value decomposition (t-SVD), which applies the Fourier transform along the tubes of a third-order tensor, has shown promising performance on multidimensional data recovery problems. However, such a fixed transformation, e.g., the discrete Fourier transform or the discrete cosine transform, cannot adapt to different datasets and is therefore not flexible enough to exploit the low-rank and sparse properties of diverse multidimensional data. In this article, we treat a tube as an atom of a third-order tensor and construct a data-driven dictionary learned from the observed noisy data along the tubes of the given tensor. A Bayesian dictionary learning (DL) model with tensor tubal transformed factorization, which aims to identify the underlying low-tubal-rank structure of the tensor effectively via the data-adaptive dictionary, is then developed to solve the tensor robust principal component analysis (TRPCA) problem. With the defined pagewise tensor operators, a variational Bayesian DL algorithm is established that updates the posterior distributions along the third dimension to solve the TRPCA problem. Extensive experiments on real-world applications, such as color and hyperspectral image denoising and background/foreground separation, demonstrate both the effectiveness and the efficiency of the proposed approach in terms of various standard metrics.
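A minimal sketch of the core idea behind the transform choice: instead of a fixed DFT along the tubes of a third-order tensor (as in the classical t-SVD), learn a data-adaptive transform from the tubes themselves. Here an orthogonal basis from an SVD of the mode-3 unfolding is used as a stand-in for the paper's Bayesian dictionary learning; all names and shapes are illustrative, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n1, n2, n3 = 30, 30, 16

    # Synthetic low-tubal-rank tensor plus sparse corruption (the TRPCA setting).
    L = np.einsum('ikt,kjt->ijt', rng.normal(size=(n1, 3, n3)), rng.normal(size=(3, n2, n3)))
    S = (rng.uniform(size=L.shape) < 0.05) * rng.normal(scale=5.0, size=L.shape)
    X = L + S

    # Mode-3 unfolding: each row is one tube (an "atom" of length n3).
    tubes = X.reshape(-1, n3)

    # Fixed transform: DFT along the tubes, as in the classical t-SVD.
    X_fft = np.fft.fft(X, axis=2)

    # Data-adaptive transform: an orthogonal basis of the tube space,
    # standing in for the learned dictionary.
    _, _, Vt = np.linalg.svd(tubes, full_matrices=False)   # Vt: (n3, n3)
    X_adapt = np.einsum('ijt,st->ijs', X, Vt)              # apply transform along mode 3

    # Frontal-slice singular values in each transformed domain; the decay of
    # these spectra shows how well a transform exposes the low-tubal-rank structure.
    def slice_spectrum(T):
        vals = [np.linalg.svd(T[:, :, k], compute_uv=False) for k in range(T.shape[2])]
        return np.sort(np.concatenate(vals))[::-1]

    print("top singular values (DFT):     ", np.round(slice_spectrum(X_fft)[:5], 1))
    print("top singular values (adaptive):", np.round(slice_spectrum(X_adapt)[:5], 1))

The paper goes further by learning the dictionary with a variational Bayesian model that is robust to the sparse corruption, rather than computing it directly from the noisy tubes as this sketch does.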
