Solving the capacity expansion problem requires accurately measuring the existing demand-supply mismatch and controlling emissions output while accounting for multiple objectives, specific constraints, resource diversity, and resource allocation. This article proposes a reinforcement learning (RL) framework embedded with data envelopment analysis (DEA) to generate an optimal policy and guide productivity improvement. The proposed framework uses DEA to evaluate efficiency and effectiveness for reward estimation in RL, and assesses conditional value-at-risk (CVaR) to characterize risk-averse capacity decisions. Instead of focusing on short-term fluctuations in demand, RL optimizes the expected future reward over sequential capacity decisions. An empirical study of U.S. power generation validates the proposed framework and provides managerial implications for policy makers. The results show that the RL agent can learn the optimal policy by observing interactions between the agent and the environment, and suggest capacity adjustments that can improve efficiency by 8.3% and effectiveness by 0.9%. We conclude that RL complements productivity analysis and shifts the emphasis from ex-post evaluation to ex-ante planning.
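To make the idea concrete, the sketch below shows how a DEA-style efficiency reward penalized by a CVaR term could drive a sequential capacity decision in a simple tabular Q-learning loop. This is a minimal illustrative sketch only: the state and action grids, the reward weights, and the demand distribution are all assumptions for exposition and are not the authors' implementation or data.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): tabular Q-learning over
# discrete capacity levels, with a reward built from a crude efficiency ratio
# (DEA-flavored) minus a CVaR penalty on demand shortfall.

rng = np.random.default_rng(0)

capacities = np.linspace(80, 120, 5)      # hypothetical capacity states
actions = np.array([-5.0, 0.0, 5.0])      # retire, hold, or expand capacity
q_table = np.zeros((len(capacities), len(actions)))

alpha, gamma, epsilon = 0.1, 0.95, 0.1    # learning rate, discount, exploration
cvar_level = 0.95                         # tail level for the risk-averse penalty

def risk_adjusted_reward(capacity, demand_samples):
    """Toy reward: served-demand efficiency minus CVaR of the demand shortfall."""
    served = np.minimum(capacity, demand_samples)
    efficiency = served.mean() / capacity                  # efficiency proxy in (0, 1]
    shortfall = np.maximum(demand_samples - capacity, 0.0)
    tail = np.quantile(shortfall, cvar_level)
    cvar = shortfall[shortfall >= tail].mean()             # mean of the worst tail
    return efficiency - 0.01 * cvar

def nearest_state(capacity):
    return int(np.abs(capacities - capacity).argmin())

state = nearest_state(100.0)
for episode in range(2000):
    # epsilon-greedy action selection
    if rng.random() < epsilon:
        a = int(rng.integers(len(actions)))
    else:
        a = int(q_table[state].argmax())

    next_capacity = np.clip(capacities[state] + actions[a],
                            capacities[0], capacities[-1])
    next_state = nearest_state(next_capacity)

    demand = rng.normal(100.0, 10.0, size=500)             # stochastic demand scenarios
    r = risk_adjusted_reward(capacities[next_state], demand)

    # standard Q-learning update for the sequential capacity decision
    q_table[state, a] += alpha * (r + gamma * q_table[next_state].max()
                                  - q_table[state, a])
    state = next_state

print("Greedy capacity adjustment per state:", actions[q_table.argmax(axis=1)])
```

In the actual framework the reward would come from a DEA efficiency and effectiveness evaluation rather than the simple served-demand ratio used here; the sketch only conveys how ex-ante, risk-averse rewards can be folded into an RL update.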