Abstract

Solving the capacity expansion problem requires accurately measuring the existing demand-supply mismatch and controlling emissions output while accounting for multiple objectives, specific constraints, resource diversity, and resource allocation. This article proposes a reinforcement learning (RL) framework embedded with data envelopment analysis (DEA) to generate the optimal policy and guide productivity improvement. The proposed framework uses DEA to evaluate efficiency and effectiveness for reward estimation in RL, and also assesses conditional value-at-risk to characterize risk-averse capacity decisions. Instead of focusing on short-term fluctuations in demand, RL optimizes the expected future reward over sequential capacity decisions. An empirical study of U.S. power generation validates the proposed framework and provides managerial implications for policy makers. The results show that the RL agent can learn the optimal policy by observing interactions between the agent and the environment, and suggest capacity adjustments that can improve efficiency by 8.3% and effectiveness by 0.9%. We conclude that RL complements productivity analysis and shifts the emphasis from ex-post evaluation to ex-ante planning.
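As a rough illustration of how a DEA-style score can act as the reward signal in an RL loop, the minimal sketch below runs tabular Q-learning over discretized capacity levels with {contract, hold, expand} actions. It is a hypothetical toy, not the paper's implementation: the state space, the stubbed efficiency/effectiveness scores, and their equal weighting are all assumptions made for illustration; in the actual framework the scores would come from solving DEA models on observed generation data, and a conditional value-at-risk term could further penalize risky adjustments.

```python
import numpy as np

# Hypothetical setup: states index discretized capacity levels, actions are
# {contract, hold, expand}. The reward blends DEA-style efficiency and
# effectiveness scores from a stand-in evaluator (stubbed so the sketch runs).

rng = np.random.default_rng(0)
n_states, n_actions = 10, 3          # capacity levels x {-1, 0, +1} adjustments
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

def dea_scores(state):
    """Stub for a DEA evaluation of the current capacity mix.
    In the paper's framework these would be DEA efficiency/effectiveness
    scores; here they are fabricated functions of the state."""
    efficiency = 1.0 - abs(state - 6) / 10
    effectiveness = 1.0 - abs(state - 5) / 12
    return efficiency, effectiveness

def step(state, action):
    """Apply a capacity adjustment and return the next state and blended reward."""
    next_state = int(np.clip(state + (action - 1), 0, n_states - 1))
    eff, effect = dea_scores(next_state)
    reward = 0.5 * eff + 0.5 * effect        # illustrative equal weighting
    return next_state, reward

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Standard Q-learning update on the DEA-based reward
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("Greedy capacity adjustment per state:", Q.argmax(axis=1) - 1)
```

The point of the sketch is only the structure: the agent optimizes the expected discounted sum of future DEA-based rewards rather than reacting to short-term demand fluctuations, which is the ex-ante planning role the abstract attributes to RL.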
