Abstract

Poor predictive performance and misspecification arising from hand-crafted utility functions are common issues in theory-driven discrete choice models (DCMs). Data-driven DCMs improve predictability through flexible utility specifications, but they do not resolve misspecification and can yield untrustworthy behavioral interpretations (e.g., biased willingness-to-pay estimates). Improving interpretability with minimal loss of flexibility and predictability is the main challenge for data-driven DCMs. To this end, this study proposes a flexible and partially monotonic DCM that specifies the systematic utility using lattice networks (DCM-LN). DCM-LN ensures monotonicity of the utility function with respect to selected attributes while learning attribute-specific non-linear effects through piecewise-linear functions and interaction effects through multilinear interpolation in a data-driven manner. Partial monotonicity can be viewed as domain-knowledge-based regularization that prevents overfitting and thereby avoids incorrect signs of attribute effects. Its lightweight architecture and an automated process for writing monotonicity constraints make DCM-LN scalable and translatable to practice. The proposed DCM-LN is benchmarked against a deep neural network-based DCM (DCM-DNN) and a DCM with a hand-crafted utility in a simulation study. While DCM-DNN marginally outperforms DCM-LN in predictability, DCM-LN substantially outperforms all benchmark models in interpretability, i.e., recovering willingness-to-pay estimates at the individual and population levels. The empirical study confirms the balanced interpretability and predictability of DCM-LN. With superior interpretability and high predictability, DCM-LN lays out new pathways to harmonize the theory-driven and data-driven paradigms.
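The paper's own implementation is not reproduced here, but the two building blocks the abstract names can be illustrated with a minimal NumPy sketch: a piecewise-linear calibrator made monotone by parameterizing it with nonnegative increments, and multilinear (here bilinear) interpolation over a small lattice. The keypoints, increments, and 2x2 lattice below are hypothetical values chosen purely for illustration, not the paper's estimates.

```python
import numpy as np

def monotone_pwl(x, keypoints, deltas):
    """Piecewise-linear calibrator of a single attribute.

    Monotonicity is enforced by construction: the keypoint values are the
    cumulative sum of nonnegative increments `deltas`, so the interpolated
    function is non-decreasing regardless of the learned parameter values.
    """
    values = np.cumsum(np.concatenate([[0.0], np.maximum(deltas, 0.0)]))
    return np.interp(x, keypoints, values)

def bilinear_lattice(x1, x2, params):
    """Multilinear interpolation over a 2x2 lattice.

    Inputs are assumed already calibrated to [0, 1]; `params` holds the
    utility values at the four lattice vertices (p00, p01, p10, p11).
    If p10 >= p00 and p11 >= p01, the output is non-decreasing in x1,
    which is how vertex-ordering constraints encode partial monotonicity.
    """
    p00, p01, p10, p11 = params
    return ((1 - x1) * (1 - x2) * p00 + (1 - x1) * x2 * p01
            + x1 * (1 - x2) * p10 + x1 * x2 * p11)

# Illustrative use: calibrate an attribute, then combine two calibrated
# attributes through the lattice to get a systematic utility component.
keypoints = np.array([0.0, 1.0, 2.0])
deltas = np.array([0.5, 0.3])          # nonnegative -> monotone calibrator
calibrated = monotone_pwl(1.0, keypoints, deltas)
utility = bilinear_lattice(0.5, 0.5, (0.0, 1.0, 2.0, 3.0))
```

In a full DCM-LN, one such calibrator per attribute feeds a higher-dimensional lattice, and monotonicity constraints (nonnegative increments, ordered vertices) are imposed only on the attributes where domain knowledge dictates a sign, e.g., utility decreasing in cost.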
