Abstract

In the machine learning literature, and especially in the literature on artificial neural networks, most methods are iterative and operate in batch mode. However, many of these standard algorithms cannot efficiently manage the large-scale data sets arising from new real-world applications. Most proposals to address this challenge are iterative approaches based on incremental or distributed learning. Non-iterative methods, despite offering certain advantages for handling these challenges efficiently, remain scarce in the state of the art. We have developed a non-iterative, incremental, and hyperparameter-free learning method for one-layer feedforward neural networks without hidden layers. The method efficiently obtains the optimal parameters of the network regardless of whether the data set contains more samples than variables or vice versa. It uses a square loss function that measures errors before the output activation functions and scales them by the slope of these functions at each data point. The outcome is a system of linear equations that yields the network's weights and is solved using Singular Value Decomposition. We analyze the behavior of the algorithm, comparing its performance and scaling properties with those of other state-of-the-art approaches. Experimental results show that the proposed method appropriately solves a wide range of classification problems and deals efficiently with large-scale tasks.
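
Because the loss is measured before the output nonlinearity and weighted by the activation's slope, training reduces to a weighted linear least-squares problem that can be solved directly via an SVD-based pseudoinverse. The following is a minimal NumPy sketch of this idea, assuming a logistic output activation; the function name `fit_one_layer`, the target clipping, and the toy data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_one_layer(X, d, eps=1e-7):
    """Fit the weights of a one-layer network with logistic outputs.

    The squared error is measured *before* the output activation:
    each sample's pre-activation target is dbar = logit(d), and the
    residual (X @ w - dbar) is scaled by the activation's slope at
    that point, which for the logistic function equals d * (1 - d).
    This yields a weighted linear least-squares problem, solved here
    with NumPy's SVD-based pseudoinverse.
    """
    X = np.hstack([X, np.ones((X.shape[0], 1))])     # append bias column
    d = np.clip(d, eps, 1.0 - eps)                   # keep the logit finite
    dbar = np.log(d / (1.0 - d))                     # invert the sigmoid
    s = d * (1.0 - d)                                # slope at each target
    # Weighted system: diag(s) @ X @ w = diag(s) @ dbar
    w = np.linalg.pinv(s[:, None] * X) @ (s * dbar)  # pinv computed via SVD
    return w

# Toy usage: a linearly separable 2-class problem with targets pushed
# toward (but not onto) 0 and 1 to avoid saturating the sigmoid.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
d = np.where(y > 0.5, 0.95, 0.05)
w = fit_one_layer(X, d)
pred = sigmoid(np.hstack([X, np.ones((200, 1))]) @ w) > 0.5
print("training accuracy:", (pred == (y > 0.5)).mean())
```

Since the weighted normal equations only accumulate per-sample terms, a sketch like this extends naturally to the incremental setting the abstract describes: new samples update the accumulated system without revisiting earlier data.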
