Abstract

This work compares Autometrics with dual penalization techniques, namely the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD), under asymmetric error distributions such as the exponential, gamma, and Fréchet, with varying sample sizes and numbers of predictors. Comprehensive simulations, based on a wide variety of scenarios, reveal that all of the methods considered improve as the sample size increases. Under low multicollinearity, these methods perform well in terms of potency, but in terms of gauge the shrinkage methods collapse, and a higher gauge leads to overspecified models. High levels of multicollinearity adversely affect the performance of Autometrics. In contrast, the shrinkage methods are robust to high multicollinearity in terms of potency, but they tend to select a massive set of irrelevant variables. Moreover, we find that expanding the data rapidly mitigates the adverse impact of high multicollinearity on Autometrics and gradually corrects the gauge of the shrinkage methods. For the empirical application, we take gold prices data spanning 1981 to 2020. To compare the forecasting performance of the selected methods, we divide the data into two parts: observations over 1981–2010 serve as training data, and those over 2011–2020 as testing data. All methods are fitted on the training data and then assessed on the testing data. Based on root-mean-square error and mean absolute error, Autometrics remains the best at capturing the trend in gold prices and produces better forecasts than MCP and SCAD.

Highlights

  • In regression analysis, it is a core concern of researchers to discover the key predictors for achieving better prediction of the response variable. Therefore, identifying the potential predictors for knowledge discovery and boosting the predictive power of the model is very beneficial [1]

  • Our study aims to compare Autometrics with improved penalization techniques including smoothly clipped absolute deviation and minimax concave penalty under several asymmetric error distributions such as exponential, gamma, and Frechet through Monte Carlo simulations

  • We provide a brief discussion of the following methods. Least Absolute Shrinkage and Selection Operator: the L1-norm penalty, defined as π_k(|θ|) = π|θ|, yields the least absolute shrinkage and selection operator (Lasso) estimator, where π refers to the tuning parameter and is selected through cross-validation [29]
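The Lasso described in the last highlight can be sketched in a few lines. The snippet below is a minimal illustration, assuming scikit-learn is available; it simulates a sparse linear model and lets `LassoCV` pick the tuning parameter (scikit-learn's `alpha`, the π above) by cross-validation, then reports which predictors survive the shrinkage.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Simulate a sparse linear model: only the first 3 of 10 predictors matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.array([3.0, -2.0, 1.5] + [0.0] * 7)
y = X @ beta + rng.normal(size=200)

# Lasso with the tuning parameter chosen by 5-fold cross-validation,
# as described in the highlight above.
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)
print(selected)  # indices of the predictors the Lasso keeps
```

With strong signals and a moderate sample, the three relevant predictors are retained, although the Lasso may also keep a few small spurious coefficients, which is exactly the gauge (overselection) issue the abstract discusses for shrinkage methods.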


Introduction

It is a core concern of researchers to discover the key predictors for achieving better prediction of the response variable. Therefore, identifying the potential predictors for knowledge discovery and boosting the predictive power of the model is very beneficial [1]. Variable selection is one of the most vital steps in constructing a linear regression model. Incorporating too many predictors may cause high variation in the least-squares fit, which in turn overfits the model and yields poor forecasts for the future [2]. On the other hand, omitting a single important predictor may lead to model mis-specification, and conclusions drawn on the basis of such a model could be misleading [6].
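The overfitting point above can be made concrete with a small numerical sketch. The example below is illustrative only (the data-generating process and sample sizes are assumptions, not taken from the paper): a least-squares fit that includes many irrelevant predictors tracks the training noise and produces a worse out-of-sample error than the correctly specified one-predictor model.

```python
import numpy as np

# Toy illustration: adding irrelevant predictors inflates the
# out-of-sample error of the least-squares fit.
rng = np.random.default_rng(1)
n_train, n_test = 30, 200
X = rng.normal(size=(n_train + n_test, 25))
y = 2.0 * X[:, 0] + rng.normal(size=n_train + n_test)  # only the 1st predictor matters

def ols_test_rmse(k):
    """Fit OLS on the first k columns of the training data; return test RMSE."""
    Xk_tr, Xk_te = X[:n_train, :k], X[n_train:, :k]
    beta, *_ = np.linalg.lstsq(Xk_tr, y[:n_train], rcond=None)
    resid = y[n_train:] - Xk_te @ beta
    return float(np.sqrt(np.mean(resid ** 2)))

print(ols_test_rmse(1), ols_test_rmse(25))  # correct model vs. overfit model
```

The overfit model with 25 predictors on only 30 training observations yields a markedly larger test RMSE, which is the motivation for variable-selection methods such as Autometrics, MCP, and SCAD.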
