Abstract

Linear regression aims to learn a linear model that predicts the target from the features as accurately as possible. The underlying assumption is that the target is linearly related to the features, i.e., the regression function E(y|x) is linear in x, where E(⋅) denotes expectation. When this assumption (approximately) holds, linear regression can be comparable to, or even outperform, fancier non-linear models. The linear regression model is one of the most classic models for prediction tasks, and it remains widely used in the computer and big data era, in particular thanks to its intuitiveness and interpretability. In the following sections, we first introduce simple linear regression models (with a single feature) and the least squares method, which finds the optimal parameters of a linear regression model by minimizing the sum of squared residuals. Then, we discuss multiple linear regression (with more than one feature) and its extensions. Finally, we introduce shrinkage methods for linear regression.
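To make the least squares idea concrete, the following is a minimal sketch (not from the paper itself) of simple linear regression on toy data: the slope and intercept minimizing the sum of squared residuals have the well-known closed form based on sample means.

```python
import numpy as np

# Toy data: the target y is approximately linear in the single feature x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Closed-form least squares solution for simple linear regression:
#   slope = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
#   intercept = mean(y) - slope * mean(x)
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()

# The fitted line minimizes the residual sum of squares among all lines.
residuals = y - (intercept + slope * x)
rss = np.sum(residuals ** 2)
print(f"slope={slope:.4f}, intercept={intercept:.4f}, RSS={rss:.4f}")
```

The same fit can be obtained with `np.polyfit(x, y, 1)` or, for the multiple-feature case, `np.linalg.lstsq`.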


