Abstract

In model building, a model with an appropriate number of parameters needs to be identified. Accordingly, a variety of information criteria have been developed, each with a different background, to handle this challenge. The most commonly used information criteria are the Akaike Information Criterion (AIC), the Schwarz Information Criterion (SIC), and the Hannan and Quinn Information Criterion (HQ). However, the available literature and preliminary analysis by the authors indicated that these information criteria usually lack uniformity when selecting the appropriate model. Thus, in this study, an information criterion that serves as a unifier of the three commonly used criteria (AIC, SIC and HQ) is proposed. The penalties of these three information criteria are combined as a linear function. Simulations were conducted on the performance of the proposed information criterion (PIC) together with the three conventional information criteria, using nine models and seven different sample sizes. The results revealed that the PIC performed better than the AIC, SIC and HQ with respect to overall performance in choosing the true model. The performance of the PIC increased as sample size increased. However, the PIC tends to underfit when the true model is not selected. When the sample size is large, the PIC is asymptotically robust with respect to single processes, Autoregressive (AR) and Moving Average (MA). Thus, the proposed information criterion is recommended when selecting the order of a univariate time series.

Tropical Agricultural Research Vol. 26 (2): 303 – 316 (2015)
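The abstract describes the PIC penalty as a linear function of the AIC, SIC and HQ penalties, without giving the exact coefficients. The sketch below uses the standard penalty terms of the three conventional criteria and an illustrative equal-weight combination; the weights `w` and the function names are assumptions for illustration, not the paper's actual PIC specification.

```python
import math

def penalties(k, n):
    """Penalty terms of the three conventional criteria,
    for k estimated parameters and n observations."""
    aic = 2 * k                          # AIC penalty
    sic = k * math.log(n)                # SIC (BIC) penalty
    hq = 2 * k * math.log(math.log(n))   # HQ penalty
    return aic, sic, hq

def pic_penalty(k, n, w=(1/3, 1/3, 1/3)):
    """Hypothetical unifying penalty: a linear combination of the
    three conventional penalties. Equal weights are an assumption."""
    aic, sic, hq = penalties(k, n)
    return w[0] * aic + w[1] * sic + w[2] * hq

def criterion(loglik, k, n, penalty_fn):
    """Generic information criterion: -2 ln L plus a penalty."""
    return -2 * loglik + penalty_fn

# Example: compare penalty magnitudes for k = 3 parameters, n = 100 observations.
print(penalties(3, 100))
print(pic_penalty(3, 100))
```

For fixed k, the SIC penalty grows fastest in n and the AIC penalty not at all, which is why the three criteria can disagree; any convex combination of them sits between these extremes.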

Highlights

  • In model building, the focus is that there is information in the observed data, and we want to express this information in a compact form through a "model" (Burnham and Anderson, 2002).

  • The results of the simulation study, indicating the performance of the information criteria (IC) across the models and the different sample sizes, are reported in the respective tables.

  • We have investigated four information criteria, including the proposed information criterion, using nine different models and seven different sample sizes.



Introduction

The focus is that there is information in the observed data, and we want to express this information in a compact form through a "model" (Burnham and Anderson, 2002). The goal of model selection is to attain a perfect 1-to-1 translation such that no information is lost in going from the data to a model of the information in the data. Such models (true models) do not exist in the real world. We can, however, attempt to find a model for the data that is "best", i.e. close to the true model, so that the model loses as little information as possible. This thinking leads directly to Kullback–Leibler (K-L) information. We wish to select the model that minimizes the K-L information loss as the best model for inference.
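The idea of "losing as little information as possible" can be made concrete with the K-L information itself. The minimal sketch below computes the K-L information between a true distribution and two candidate model distributions (the distributions are illustrative, not taken from the paper) and picks the candidate with the smaller loss, which is exactly the selection principle the criteria approximate.

```python
import math

def kl_information(p, q):
    """Kullback-Leibler information I(p, q) = sum_i p_i * ln(p_i / q_i):
    the information lost when q is used to approximate the true p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Illustrative true distribution and two candidate "models".
truth = [0.5, 0.3, 0.2]
model_a = [0.4, 0.4, 0.2]
model_b = [0.1, 0.1, 0.8]

# The K-L-best model is the one that loses the least information.
losses = {"a": kl_information(truth, model_a),
          "b": kl_information(truth, model_b)}
print(min(losses, key=losses.get))  # prints "a"
```

In practice the true distribution is unknown, so criteria such as AIC are used as estimators of relative (expected) K-L information, trading data fit against a penalty on the number of parameters.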

