Abstract

Video game research is a dynamic area in which increasingly sophisticated methods and algorithms are being developed. Procedural content generation (PCG), which combines user-generated assets with algorithms to automate and improve the creation of video game content, has been central to this progress. However, its outcomes are reflected primarily in game aesthetics rather than in game mechanics and gameplay. In this study, we introduce the “game scene as a canvas” concept together with simple prototype game development pipelines that convert a 2D game-level image into a game development environment with ready-to-use colliders and artistically distinct styles that enhance the game's aesthetics. To do so, edge-based and color-based features of the input game-level image are extracted using the Canny edge detector, Simple Linear Iterative Clustering (SLIC), and Felzenszwalb segmentation. The Unity game engine then generates colliders from the extracted edge and color features, and the game level is style-transferred with spatial control. Results of different neural style transfer algorithms are presented on benchmark games such as Super Mario and Kid Icarus. These results show that the proposed approach is a promising tool for simplifying 2D video game development, letting developers focus on game mechanics and aesthetics.
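The feature-extraction step named above (Canny edges plus SLIC and Felzenszwalb color segmentation) could be sketched with scikit-image as follows; all parameter values here are illustrative assumptions, not the settings used in the study:

```python
# Hedged sketch of the edge- and color-based feature extraction described
# in the abstract. Parameter values (sigma, n_segments, scale, min_size)
# are assumptions for illustration only.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import canny
from skimage.segmentation import slic, felzenszwalb

def extract_level_features(level_image: np.ndarray):
    """Extract edge and color-region features from an RGB game-level image.

    Returns a boolean Canny edge map plus two integer label maps
    (SLIC superpixels and Felzenszwalb regions), all of shape (H, W).
    """
    gray = rgb2gray(level_image)
    # Edge-based features: Canny edge map on the grayscale level image
    edges = canny(gray, sigma=1.0)
    # Color-based features: SLIC superpixels...
    slic_labels = slic(level_image, n_segments=200, compactness=10,
                       start_label=0)
    # ...and graph-based Felzenszwalb regions
    felz_labels = felzenszwalb(level_image, scale=100, sigma=0.5,
                               min_size=20)
    return edges, slic_labels, felz_labels

if __name__ == "__main__":
    # Stand-in for a real 2D level screenshot (e.g. a Super Mario level)
    demo = np.random.randint(0, 255, (64, 96, 3), dtype=np.uint8)
    edges, slic_labels, felz_labels = extract_level_features(demo)
    print(edges.shape, slic_labels.shape, felz_labels.shape)
```

In a pipeline like the one described, the resulting edge map and region labels would then be passed to the game engine to place colliders and to mask regions for spatially controlled style transfer.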
