Abstract

The electricity market is a complex economic environment, and modeling it has increased the demand for more capable learning methods. In agent-based modeling and simulation of this economic system, a generation company's decision-making is modeled using reinforcement learning. Existing learning methods that model generation companies' strategic bidding behavior are not adapted to a non-stationary, non-Markovian environment with multidimensional, continuous state and action spaces. This paper proposes a reinforcement learning method that overcomes these limitations. The proposed method discovers the structure of the input space through the self-organizing map, exploits learned experience through Roth-Erev reinforcement learning, and explores through the actor-critic map. Simulation results show that the proposed method outperforms Simulated Annealing Q-Learning and Variant Roth-Erev reinforcement learning. The proposed method is a step toward more realistic agent learning in Agent-based Computational Economics.
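For context on the Roth-Erev family of learners referenced above, the following is a minimal sketch of the Variant Roth-Erev propensity update over a discrete action set; the class name, parameter names (`recency`, `experimentation`), and initial propensity value are illustrative assumptions, not taken from the paper.

```python
import random


class VariantRothErev:
    """Sketch of Variant Roth-Erev reinforcement learning (discrete actions).

    Propensity update for each action j after taking `action` with `reward` r:
        q_j <- (1 - phi) * q_j + r * (1 - eps)            if j == action
        q_j <- (1 - phi) * q_j + q_j * eps / (N - 1)      otherwise
    Actions are then drawn with probability proportional to their propensities.
    """

    def __init__(self, n_actions, recency=0.1, experimentation=0.2,
                 init_propensity=1.0):
        self.n = n_actions
        self.phi = recency            # recency (forgetting) parameter
        self.eps = experimentation    # experimentation parameter
        self.q = [init_propensity] * n_actions

    def choose(self, rng=random):
        # Roulette-wheel selection proportional to propensity.
        pick = rng.uniform(0.0, sum(self.q))
        cum = 0.0
        for j, qj in enumerate(self.q):
            cum += qj
            if pick <= cum:
                return j
        return self.n - 1

    def update(self, action, reward):
        # Spread a fraction eps of reinforcement over the non-chosen actions,
        # scaled by their own propensities (the "variant" modification).
        for j in range(self.n):
            if j == action:
                e = reward * (1.0 - self.eps)
            else:
                e = self.q[j] * self.eps / (self.n - 1)
            self.q[j] = (1.0 - self.phi) * self.q[j] + e
```

In use, an agent alternates `choose()` and `update()` each market round; repeated reward on an action raises its propensity and hence its selection probability.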
