Abstract

The development of empirical probabilistic discrete-choice models frequently entails comparing two non-nested models (i.e., models with the property that neither can be obtained as a parametric special case of the other) to determine which is most likely to provide a correct explanation of a particular choice situation. Conventional statistical procedures, such as the likelihood ratio test, do not apply to comparisons of non-nested models. This paper describes three procedures for carrying out such comparisons and explores the ability of each to distinguish between correct and incorrect models. The procedures are: tests against a composite model, the Cox test of separate families of hypotheses, and comparisons based on the likelihood ratio index goodness-of-fit statistic. A modification of the likelihood ratio index is proposed that corrects for the effects of differences in the numbers of estimated parameters in the compared models. The abilities of the various procedures to reject incorrect models and accept correct ones are explored analytically and through numerical experiments. It is shown analytically that, in large samples, the modified likelihood ratio index has greater ability to distinguish between correct and incorrect models than do composite-model procedures. The results of the numerical experiments suggest that the modified likelihood ratio index also has greater ability to distinguish between correct and incorrect models than does the Cox test. The numerical results give encouraging indications of the ability of the modified likelihood ratio index to choose the correct model in comparisons of models whose choice probabilities differ by at least 10–15%.
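For context, the likelihood ratio index referred to in the abstract is commonly defined from the log-likelihood of the fitted model, L(β̂), and the log-likelihood of the null (no-information) model, L(0); a parameter-corrected variant of the kind described above subtracts the number of estimated parameters K. The following is a sketch of the usual forms, given as an assumption; the exact definitions used in the paper may differ:

\rho^2 = 1 - \frac{L(\hat{\beta})}{L(0)}, \qquad
\bar{\rho}^2 = 1 - \frac{L(\hat{\beta}) - K}{L(0)}

Under this convention, comparing two non-nested models amounts to comparing their values of the corrected index \bar{\rho}^2, which penalizes the model with more estimated parameters.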
