Abstract

Storage assets are critical for the temporal trading of commodities under volatile prices. State-of-the-art methods for managing storage, such as the reoptimization heuristic (RH), which are part of commercial software, approximate a Markov decision process (MDP) assuming full information regarding the state and the stochastic commodity price process; they hence suffer from informational inconsistencies with observed price data and structural inconsistencies with the true optimal policy, both of which are components of generalization error. Based on extensive backtests, we find that this error can lead to significantly suboptimal RH policies and qualitatively different performance compared to the known near-optimality and behavior of RH in the full-information setting. We develop a forward-looking data-driven approach (DDA) to learn policies and overcome generalization error. This approach extends standard (backward-looking) DDA in two ways: (i) it uses financial-market features and estimates of future profits as part of the training objective, which typically includes past profits alone; and (ii) it enforces structural properties of the optimal policy. To elaborate, DDA trains the parameters of bang-bang and base-stock policies by solving linear and mixed-integer programs, respectively, thereby extending known DDAs that parameterize decisions as functions of features without enforcing policy structure. We backtest the performance of DDA and RH on six major commodities from 2000 to 2017 with features constructed using Thomson Reuters and Bloomberg data. DDA significantly outperforms RH on real data, with the base-stock structure needed to realize this improvement. Our research advances the state of the art for storage operations and suggests modifications to commercial software to handle generalization error.
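
To illustrate the kind of policy class the abstract refers to, the sketch below shows a base-stock rule whose target inventory level is an affine function of observable features, with the trade decision clipped by injection and withdrawal capacities. This is a minimal, hypothetical illustration assuming a linear feature parameterization; the names (beta, inject_cap, withdraw_cap) and the specific functional form are our assumptions, not the paper's implementation, and the coefficients would in practice be fitted by the linear or mixed-integer programs mentioned above.

```python
# Hypothetical sketch of a feature-parameterized base-stock policy.
# All names and the affine target form are illustrative assumptions.
import numpy as np

def base_stock_action(inventory, features, beta, inject_cap, withdraw_cap,
                      min_inv=0.0, max_inv=1.0):
    """Move inventory toward a feature-dependent base-stock target.

    inventory    : current storage level
    features     : vector of financial-market features (e.g., futures prices)
    beta         : coefficients (intercept first), e.g., learned via an LP/MIP fit
    inject_cap   : maximum injection this period
    withdraw_cap : maximum withdrawal this period
    Returns the trade quantity (positive = inject/buy, negative = withdraw/sell).
    """
    target = beta[0] + np.dot(beta[1:], features)   # base-stock target as a function of features
    target = np.clip(target, min_inv, max_inv)      # keep the target within storage bounds
    desired_move = target - inventory               # move inventory toward the target
    return float(np.clip(desired_move, -withdraw_cap, inject_cap))

# Example with two illustrative features (spot price, front-month futures price):
beta = np.array([0.4, -0.02, 0.03])
action = base_stock_action(inventory=0.2,
                           features=np.array([3.1, 3.4]),
                           beta=beta, inject_cap=0.25, withdraw_cap=0.25)
print(action)
```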
