Abstract
This paper focuses on the application of the kernel logit formulation to model dynamic discrete choice data. A dynamic kernel logit (DKL) formulation with normal errors is presented to model unordered discrete choice panel data. An investigation of the theoretical foundations of the kernel logit model demonstrates that the mixed logit error structure converges in distribution, asymptotically, to a suitable multivariate normal error structure. This result provides support for both cross-sectional kernel logit (CKL) and DKL models with normal errors. The calibration, identification, and specification issues associated with the latter model are also discussed. The performance of the proposed DKL model is assessed in terms of computational efficiency and estimation accuracy relative to the multinomial probit (MNP) model using a series of numerical experiments. Complexity analysis reveals that the DKL has a lower computational complexity than the MNP frequency simulator, whose complexity is exponential. Thus, for choice situations with a large number of alternatives (J) in each time period and/or a large number of time periods (T), the DKL model is faster than the corresponding MNP by more than an order of magnitude. This is also confirmed by computational experiments conducted using 32 synthetic data sets. The computational performance of the DKL relative to the MNP appears to be the result of a trade-off between the number of Monte Carlo draws required and the computational cost of each draw. With fewer than 25 total alternatives (JT < 25), the results suggest that the probit model (MNP) is more advantageous than the DKL. There appears to be little advantage in applying the kernel logit formulation relative to the MNP to cross-sectional data with few alternatives.
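The draw-count versus cost-per-draw trade-off described above can be illustrated with a minimal sketch. The snippet below (a simplified cross-sectional illustration, not the paper's actual DKL estimator; all function names and the unit-scale Gumbel assumption are ours) contrasts a kernel-logit-style smooth simulator, which averages logit probabilities over normal error draws, with a crude MNP frequency simulator, which counts how often each alternative attains the maximum simulated utility:

```python
import numpy as np

def kernel_logit_prob(V, L, n_draws=1000, rng=None):
    """Kernel logit (mixed logit) simulator: for each draw of the normal
    error component, evaluate the closed-form logit kernel and average.
    V: (J,) systematic utilities; L: (J, J) Cholesky factor of the
    normal error covariance. Returns smooth simulated probabilities."""
    rng = rng or np.random.default_rng(0)
    J = len(V)
    probs = np.zeros(J)
    for _ in range(n_draws):
        u = V + L @ rng.standard_normal(J)   # one multivariate normal draw
        e = np.exp(u - u.max())              # numerically stable logit kernel
        probs += e / e.sum()
    return probs / n_draws

def mnp_frequency_prob(V, L, n_draws=1000, rng=None):
    """MNP frequency simulator: fraction of draws in which each
    alternative attains the maximum utility. Cheap per draw but
    nonsmooth, so it typically needs many more draws."""
    rng = rng or np.random.default_rng(0)
    J = len(V)
    counts = np.zeros(J)
    for _ in range(n_draws):
        u = V + L @ rng.standard_normal(J)
        counts[np.argmax(u)] += 1
    return counts / n_draws
```

Both simulators return probabilities that sum to one; the kernel-logit version yields a smooth function of the parameters (convenient for gradient-based maximum likelihood), while the frequency simulator produces a step function, which is one source of the identification and flat-likelihood difficulties noted below.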
Regarding computational accuracy, the numerical results suggest that the parameter estimates of both models (MNP and DKL) are comparable and close to the true values from which the data sets were generated. However, both the DKL and MNP formulations may lead to the maximization of a nonconcave objective function, resulting in flat log-likelihood functions and identification problems.