Abstract

This paper introduces a new procedure to train Minimal Learning Machines (MLMs) for regression tasks, together with a new prediction process for the MLM. A well-known drawback of the original MLM formulation is its lack of sparseness. The most recent efforts on this problem rely strongly on selecting reference points before the training and prediction steps, all based on some assumption about the data. In the opposite direction, we explore another MLM formulation that does not rely on any assumption about the data for prior selection. Instead, our proposal, named the Lightweight Minimal Learning Machine (LW-MLM), builds a regularized system that imposes sparseness. We achieve this sparsity criterion not through selection but by incorporating weighted information into the model. We validate the contributions of this paper through four types of experiments that evaluate different aspects of our proposal: prediction error, goodness-of-fit of estimated vs. measured values, norm values (which relate to sparsity), and prediction error in high-dimensional settings. Based on the results, we show that LW-MLM is a valid alternative, since it achieved accuracy rates similar to or higher than those of other variants, with all methods being statistically equivalent.
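The abstract describes training a regularized MLM system without prior reference-point selection. As a rough illustration of the general idea, the sketch below fits an MLM-style distance-regression step with a weighted ridge penalty; the function name `train_lw_mlm`, the parameter `lam`, and the specific diagonal weighting are hypothetical choices for this sketch, not the paper's actual formulation.

```python
import numpy as np

def pairwise_dist(A, B):
    """Euclidean distance matrix between the rows of A and the rows of B."""
    return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def train_lw_mlm(X, Y, lam=1e-2, weights=None):
    """Illustrative weighted-ridge MLM training step (assumed form, not
    the paper's exact LW-MLM system).

    All training points serve as references, reflecting the abstract's
    claim that no prior reference-point selection is performed.
    """
    Dx = pairwise_dist(X, X)          # input-space distance matrix
    Dy = pairwise_dist(Y, Y)          # output-space distance matrix
    n = Dx.shape[1]
    if weights is None:
        weights = np.ones(n)
    # Regularized least squares: (Dx^T Dx + lam * diag(w)) B = Dx^T Dy.
    # The diagonal weighting is where per-reference information could
    # shrink coefficients and encourage sparseness.
    B = np.linalg.solve(Dx.T @ Dx + lam * np.diag(weights), Dx.T @ Dy)
    return B
```

With `lam > 0` the system is well-posed even when the distance matrix is ill-conditioned, and larger per-reference weights push the corresponding rows of `B` toward zero, which is one simple way a weighting scheme can trade accuracy for sparsity.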
