Abstract

Several regression algorithms are applied to predict the sublimation rate of naphthalene under various working conditions: time, temperature, entrainer flow rate, and shape of the sample. The original Large Margin Nearest Neighbor Regression (LMNNR) algorithm is applied and its performance is compared with that of other well-established regression algorithms, such as support vector regression, multilayer perceptron neural networks, classical k-nearest neighbor, random forest, and others. The experimental results show that the LMNNR algorithm outperforms the other regression algorithms.
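
The comparison described above can be sketched, under assumptions, as a cross-validated benchmark of the baseline regressors on a table of working conditions. The file name and column names below are hypothetical, the "shape" feature is assumed to be numerically encoded, and LMNNR itself is the authors' original algorithm and is not part of scikit-learn, so it is not shown here.

```python
# Hypothetical sketch: cross-validated comparison of baseline regressors on a
# dataset with inputs (time, temperature, entrainer_rate, shape) and output
# sublimation_rate. File and column names are assumptions for illustration only.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor

data = pd.read_csv("naphthalene_sublimation.csv")             # assumed file
X = data[["time", "temperature", "entrainer_rate", "shape"]]  # assumed columns
y = data["sublimation_rate"]

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000)),
    "kNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=3)),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
}

# Report the mean cross-validated R^2 of each candidate regressor.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```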

Highlights

  • Many problems in science and social science can be expressed as classification or regression problems, where one does not know an analytical model of some underlying phenomenon, but sampled data is available through experiments or observations, and the aim is to define a predictive model based on those samples.

  • We investigate the performance of several well-established algorithms in comparison with an original regression algorithm, the Large Margin Nearest Neighbor Regression (LMNNR), which combines the idea of nearest neighbors with that of a large separation margin, typical of support vector machines (see the sketch after this list).

  • Some of the authors' recent research has addressed a performance comparison of different regression methods for a polymerization process with adaptive sampling [11], a comparison between simulation and experiments for phase equilibrium and physical properties of aqueous mixtures [12], an experimental analysis and mathematical prediction of cadmium removal by biosorption [13], and the prediction of corrosion resistance of some dental metallic materials with an original adaptive regression model based on the k-nearest-neighbor regression technique [14].
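
As a minimal illustration of the idea behind LMNNR (not the authors' implementation), the prediction step amounts to k-nearest-neighbor regression under a Mahalanobis metric. In LMNNR the metric matrix is learned so that neighbors with similar targets are pulled together while points with dissimilar targets are pushed outside a large margin; in the sketch below the matrix M is simply assumed to be given.

```python
# Illustration of kNN regression under a Mahalanobis metric
# d(x, x') = sqrt((x - x')^T M (x - x')), where M is a given PSD matrix.
# LMNNR would learn M by optimizing a margin-based objective; that step is omitted.
import numpy as np

def mahalanobis_knn_predict(X_train, y_train, x_query, M, k=3):
    """Predict y for x_query as the mean target of its k nearest neighbors
    measured with the Mahalanobis metric induced by M."""
    diffs = X_train - x_query                              # (n, d) differences
    dists = np.sqrt(np.einsum("ij,jk,ik->i", diffs, M, diffs))  # quadratic forms
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# Toy usage with M = identity, which reduces to plain Euclidean kNN regression.
rng = np.random.default_rng(0)
X_train = rng.random((50, 4))
y_train = X_train @ np.array([1.0, 0.5, -0.3, 2.0])
M = np.eye(4)
print(mahalanobis_knn_predict(X_train, y_train, X_train[0], M, k=3))
```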


Introduction

Machine learning is a subdomain of artificial intelligence whose popularity and success are constantly growing [1, 2]. Many problems in science and social science can be expressed as classification or regression problems, where one does not know an analytical model of some underlying phenomenon, but sampled data is available through experiments or observations, and the aim is to define a predictive model based on those samples. To date, many such algorithms have been proposed, belonging to different paradigms, e.g., neural networks, nearest neighbor, decision trees, support vector machines, Bayesian approaches, etc. One must make several choices when dealing with such a problem: first, establishing the most appropriate learning method and, second, controlling the complexity of the model generated with that learning method by changing its specific parameters.
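
The second choice, controlling model complexity through algorithm-specific parameters, is commonly handled with cross-validated parameter search. The following is a hedged sketch of that idea; the regressor, the synthetic data, and the parameter grid are arbitrary choices used only for illustration and are not taken from the paper.

```python
# Sketch: selecting complexity-controlling parameters by cross-validated grid search.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

# Synthetic data standing in for experimental samples (4 input variables).
X, y = make_regression(n_samples=200, n_features=4, noise=0.1, random_state=0)

# Candidate parameter values controlling the complexity of a kNN regressor.
param_grid = {"n_neighbors": [1, 3, 5, 7], "weights": ["uniform", "distance"]}
search = GridSearchCV(KNeighborsRegressor(), param_grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```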
