Abstract

This study focuses on procedural content generation (PCG) using reinforcement learning (RL). Here, PCG is defined as the generation of game content tailored to a designed evaluation function using RL models, an instance of PCG via machine learning. Compared with other content generation areas such as computer vision and natural language processing, generative models such as variational autoencoders, PixelCNN, and generative adversarial networks are difficult to apply to games because, during the development of a new game, the content data available for training is typically insufficient. Hence, RL is considered as an alternative method for PCG. In particular, stages of turn-based RPGs are selected as the research target because they comprise discrete sections whose parameters are closely interrelated; this makes generating desirable stages challenging, and the main goal is to generate diverse stages guided by the designed evaluation function. Two RL models, Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG), are used, and the stages they generate score 0.78 and 0.85, respectively, under the designed evaluation function. By applying a stochastic noise policy, diverse stages are obtained, with diversity evaluated by the mean squared error (MSE) of the stage parameters and the number of distinct valid strategies.
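
The abstract does not include the authors' implementation, so the following minimal Python sketch is illustrative only. All names (StageEnv, evaluate_stage, generate_stage) and the placeholder policy, which stands in for a trained DQN/DDPG actor, are hypothetical assumptions; the sketch shows only the general framing the abstract describes: stage parameters as the state, parameter adjustments as actions, the designed evaluation function as the reward, and Gaussian action noise as a stand-in for the stochastic noise policy used to obtain diverse stages.

```python
# Illustrative sketch only: the paper's environment, evaluation function,
# and network architectures are not given in the abstract. This toy loop
# frames stage generation as an MDP: the agent edits stage parameters
# step by step, and the reward comes from a hand-designed evaluation
# function. Gaussian action noise (standing in for the paper's
# "stochastic noise policy") makes repeated rollouts yield diverse stages.
import numpy as np

def evaluate_stage(params: np.ndarray) -> float:
    """Hypothetical designed evaluation function in [0, 1].
    Here: reward parameter vectors close to a target difficulty profile."""
    target = np.array([0.5, 0.3, 0.8, 0.6])
    return float(np.clip(1.0 - np.abs(params - target).mean(), 0.0, 1.0))

class StageEnv:
    """Toy environment: state = stage parameter vector, action = a
    continuous adjustment of those parameters (DDPG-style)."""
    def __init__(self, n_params: int = 4, horizon: int = 20):
        self.n_params, self.horizon = n_params, horizon

    def reset(self) -> np.ndarray:
        self.t = 0
        self.params = np.random.uniform(0.0, 1.0, self.n_params)
        return self.params.copy()

    def step(self, action: np.ndarray):
        self.t += 1
        self.params = np.clip(self.params + action, 0.0, 1.0)
        reward = evaluate_stage(self.params)
        done = self.t >= self.horizon
        return self.params.copy(), reward, done

def generate_stage(env: StageEnv, policy, noise_scale: float = 0.1):
    """Roll out a deterministic policy with exploration noise so that
    repeated calls produce diverse final stages."""
    state = env.reset()
    done = False
    while not done:
        action = policy(state) + np.random.normal(0.0, noise_scale, env.n_params)
        state, reward, done = env.step(action)
    return state, reward

if __name__ == "__main__":
    env = StageEnv()
    # Placeholder policy: nudge each parameter toward 0.5; a trained
    # DQN/DDPG actor would replace this in the actual method.
    policy = lambda s: 0.05 * (0.5 - s)
    for params, score in (generate_stage(env, policy) for _ in range(5)):
        print(np.round(params, 2), f"score={score:.2f}")
```

Running the sketch prints several distinct parameter vectors with their evaluation scores; the spread among them is what the abstract's parameter MSE diversity measure would quantify.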
