Abstract

Linear least squares is a common method for fitting a two-dimensional data set with a linear relation. In the non-linear case, the model must first be linearized before fitting. Parameters extracted from the fitted line can then be used to predict new dependent-variable values at new independent-variable points. In this report we investigate the uncertainty that linear least-squares fitting introduces into a non-linear model prediction. As an example, we fit a data set with the relation D = a/h, which is the physical model of the measurement we are interested in. The fit goes well, and we obtain a curve with R-squared very close to one. However, when we apply the fitted parameters to actual measurements, the accuracy is poor, especially at large y. At first we expected this to be due to the functional form of the model. Comparing with high-order polynomial fits, we realized this is not the case: the sixth-degree polynomial, for example, gives less than one percent error, about ten times less than the linear-fit prediction. Rather, it is the linearization and the inverse transformation back to the original space in the linear least-squares method that give rise to the uncertainty. Our analysis can be generalized to any non-linear model prediction. We expect our results to serve as a caution to anyone applying linear fitting to a non-linear model prediction.
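The mechanism described above can be sketched in a few lines. This is an illustrative example only: the synthetic data, the noise level, and the choice of linearization (taking reciprocals so that 1/D = h/a is linear in h) are our assumptions, not details taken from the paper.

```python
import numpy as np

# Synthetic data from the model D = a/h with additive measurement noise
# (assumed values; the paper's actual data set is not reproduced here).
rng = np.random.default_rng(0)
a_true = 5.0
h = np.linspace(0.1, 2.0, 50)
D = a_true / h + rng.normal(0.0, 0.05, h.size)

# Linearize: y = 1/D is (approximately) linear in h, with slope 1/a.
# Note the noise is transformed non-linearly by the reciprocal.
y = 1.0 / D
slope = np.sum(h * y) / np.sum(h * h)  # least squares through the origin
a_fit = 1.0 / slope

# Transform back to the original space and inspect the prediction error;
# the reciprocal step distorts the error distribution across the range.
D_pred = a_fit / h
rel_err = np.abs(D_pred - D) / np.abs(D)
```

Because the reciprocal transform reweights the residuals, a fit that looks excellent in the linearized space (high R-squared) can still predict poorly once mapped back to the original variables.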
