Abstract

We study the classical newsvendor problem in which the decision maker must trade off underage and overage costs. In contrast to the typical setting, we assume that the decision maker does not know the underlying distribution driving uncertainty and has access only to historical data. In turn, the key questions are how to map existing data to a decision and what type of performance to expect as a function of the data size. We analyze the classical setting with access to past samples drawn from the distribution (e.g., past demand), focusing not only on asymptotic performance but also on what we call the transient regime of learning, that is, performance for arbitrary data sizes. We evaluate the performance of any algorithm through its worst-case relative expected regret, compared with an oracle with knowledge of the distribution. We provide the first finite-sample exact analysis of the classical sample average approximation (SAA) algorithm for this class of problems across all data sizes. This allows us to uncover novel fundamental insights into the value of data: it reveals that tens of samples are sufficient to perform very efficiently but also that more data can lead to worse out-of-sample performance for SAA. We then focus on the general class of mappings from data to decisions without any restriction on the set of policies, derive an optimal algorithm (in the minimax sense), and characterize its associated performance. This leads to significant improvements for limited data sizes and allows us to quantify exactly the value of historical information. This paper was accepted by David Simchi-Levi, data science. Supplemental Material: The data files and online appendix are available at https://doi.org/10.1287/mnsc.2023.4725.
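
For readers unfamiliar with SAA in this setting, the following minimal sketch illustrates its standard empirical-quantile form for the newsvendor: order the quantile of the observed demand samples at the critical ratio b/(b + h). Variable names, cost values, and the demand distribution in the example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def saa_newsvendor_order(demand_samples, underage_cost, overage_cost):
    # SAA minimizes the empirical newsvendor cost over past samples; the
    # minimizer is the empirical quantile of demand at the critical ratio
    # b / (b + h), i.e., the smallest order q with empirical CDF(q) >= ratio.
    samples = np.sort(np.asarray(demand_samples, dtype=float))
    n = samples.size
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    idx = int(np.ceil(n * critical_ratio)) - 1
    return samples[min(max(idx, 0), n - 1)]

# Hypothetical usage: 20 past demand observations drawn for illustration only.
rng = np.random.default_rng(0)
past_demand = rng.exponential(scale=100.0, size=20)
q = saa_newsvendor_order(past_demand, underage_cost=3.0, overage_cost=1.0)
print(f"SAA order quantity: {q:.1f}")
```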
