Abstract

Generating more revenue is crucial to cloud providers. Evidence from the Amazon cloud system indicates that "dynamic pricing" can be more profitable than "static pricing." The challenges are: how to set the price in real time so as to maximize revenue, and how to estimate the price-dependent demand so as to optimize the pricing decision? We first design a discrete-time dynamic pricing scheme and formulate a Markov decision process to characterize the evolving dynamics of the price-dependent demand. We then formulate a revenue maximization framework to determine the optimal price and theoretically characterize the "structure" of the optimal revenue and the optimal price. We apply Q-learning to infer the optimal price from historical transaction data and derive sufficient conditions on the model that guarantee convergence to the optimal price, but the convergence is slow. To speed it up, we incorporate the structure of the optimal revenue obtained earlier, leading to the VpQ-learning (Q-learning with value projection) algorithm. We derive sufficient conditions under which the VpQ-learning algorithm converges to the optimal policy. Experiments on a real-world dataset show that VpQ-learning outperforms a variety of baselines: it improves revenue by as much as 50% over Q-learning, speedy Q-learning, and adaptive real-time dynamic programming (ARTDP), and by as much as 20% over a fixed pricing scheme.
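
To make the value-projection idea concrete, the following is a minimal tabular sketch of a Q-learning update augmented with a projection step that clips each estimate into structural bounds on the optimal value. All names and quantities here (the state/price grids, the bound arrays, the `simulate_step` environment) are illustrative assumptions for exposition, not the paper's actual model or notation.

```python
import numpy as np

# Hypothetical tabular MDP for dynamic pricing: states are demand levels,
# actions are prices on a discrete grid.
n_states, n_prices = 10, 5
gamma, alpha, epsilon = 0.95, 0.1, 0.1   # discount, learning rate, exploration
rng = np.random.default_rng(0)

# Assumed structural bounds on the optimal action value (e.g., derived from
# the characterized structure of the optimal revenue); placeholders here.
q_lower = np.zeros((n_states, n_prices))
q_upper = np.full((n_states, n_prices), 100.0)

Q = np.zeros((n_states, n_prices))

def simulate_step(state, price):
    """Hypothetical environment: returns (revenue, next demand state)."""
    demand = max(0, state - price + int(rng.integers(0, 3)))
    revenue = price * min(demand, state)
    return revenue, min(demand, n_states - 1)

state = 0
for t in range(10_000):
    # Epsilon-greedy price selection.
    if rng.random() < epsilon:
        price = int(rng.integers(n_prices))
    else:
        price = int(np.argmax(Q[state]))

    revenue, next_state = simulate_step(state, price)

    # Standard Q-learning temporal-difference update.
    target = revenue + gamma * Q[next_state].max()
    Q[state, price] += alpha * (target - Q[state, price])

    # Value projection: clip the updated estimate into the structural
    # bounds, which is what accelerates convergence over plain Q-learning.
    Q[state, price] = np.clip(Q[state, price],
                              q_lower[state, price], q_upper[state, price])
    state = next_state
```

The projection step is the only difference from plain Q-learning in this sketch; tighter bounds shrink the set of admissible value estimates and hence reduce the number of samples needed to converge.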
