Abstract

In this paper, we present an optimization method for analyzing a monopolistic retailer's simultaneous decisions on dynamic pricing and ordering quantities for seasonal products. Customers are assumed to be strategic and may postpone their purchases to obtain a lower price in the future. The problem is investigated in the context of multiple substitute products. We develop a model based on deep neural networks to estimate customer demand. Because the problem is too complex to solve with classical optimization methods, we develop a reinforcement learning algorithm, deep Q-learning (DQL), to solve it. The proposed algorithm combines Q-learning with two deep neural networks, one for the primary sales period and one for the discount sales period, using the networks to estimate Q-values over a large space of states and actions. The performance of the demand model and the proposed optimization algorithm is tested on a real-world dataset from the clothing industry. The results of our experiments demonstrate that the proposed demand model outperforms both a fully connected neural network-based model and a latent class model tested in this paper. Furthermore, the DQL algorithm performs significantly better than simulated annealing and genetic algorithms. In addition, a comparison between the DQL algorithm and another reinforcement learning algorithm, State-Action-Reward-State-Action (SARSA), indicates that the proposed algorithm yields higher revenues and converges faster. Consequently, the proposed algorithm has high potential for solving such large-scale integrated pricing and ordering optimization problems.
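The core DQL idea described above (a Q-learning update whose Q-values come from a neural network rather than a lookup table) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy reward rule, state-action sizes, network architecture, and hyperparameters are all assumptions made for the example.

```python
import numpy as np

# Minimal sketch of deep Q-learning: a one-hidden-layer network approximates
# Q(s, a) over the state-action space and is trained toward the Q-learning
# target r + gamma * max_a' Q(s', a'). All sizes and the toy environment
# below are illustrative assumptions.
rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, HIDDEN = 8, 4, 16   # toy problem sizes (assumed)
GAMMA, LR = 0.95, 0.1

# Network weights: one-hot state -> hidden (tanh) -> Q-value per action.
W1 = rng.normal(0.0, 0.1, (N_STATES, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(s):
    """Forward pass: return Q(s, .), hidden activations, and the input."""
    x = np.eye(N_STATES)[s]
    h = np.tanh(x @ W1)
    return h @ W2, h, x

def dql_update(s, a, r, s_next, done):
    """One gradient step moving Q(s, a) toward r + gamma * max Q(s', .)."""
    global W1, W2
    q, h, x = q_values(s)
    target = r if done else r + GAMMA * np.max(q_values(s_next)[0])
    td_error = target - q[a]                 # temporal-difference error
    grad_q = np.zeros(N_ACTIONS)
    grad_q[a] = -td_error                    # dLoss/dq for 0.5 * td_error**2
    dW2 = np.outer(h, grad_q)                # backprop through output layer
    dW1 = np.outer(x, (W2 @ grad_q) * (1.0 - h ** 2))  # through tanh layer
    W2 -= LR * dW2
    W1 -= LR * dW1
    return td_error

# Toy training loop: action 0 pays reward 1 in every state (assumed rule);
# transitions are treated as terminal for simplicity.
for _ in range(5000):
    s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
    r = 1.0 if a == 0 else 0.0
    dql_update(s, a, r, rng.integers(N_STATES), done=True)

print(int(np.argmax(q_values(0)[0])))  # greedy action in state 0 after training
```

In the paper's setting, the state would encode inventory and remaining season length, the actions would be price and order-quantity decisions, and two such networks would serve the primary and discount sales periods; the sketch only shows the shared mechanism of network-based Q-value estimation.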
