Abstract

Designing a 3D game scene is a tedious task, typically involving the synthesis and coloring of the 3D models within the scene. To lessen this workload, we can apply machine learning to automate parts of the scene-development process. Earlier research has already tackled automated generation of game scene backgrounds with machine learning; however, automatic model coloring remains an underexplored problem. Automatically coloring a 3D model is challenging, especially for the digital representation of a colorful, multipart object: in such a case, we must “understand” the object’s composition and the coloring scheme of each of its parts. Moreover, existing single-stage methods have notable limitations. We address these limitations by proposing a two-stage training approach to synthesize auto-colored 3D models. In the first stage, we obtain a 3D point cloud representing a 3D object; in the second stage, we assign colors to the points within that cloud. Finally, we generate a 3D mesh in which each surface is colored by interpolating the colors of the points representing the vertices of the corresponding mesh triangle. This approach yields a smooth coloring scheme.
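The final mesh-coloring step described above — spreading the colors assigned to a triangle's three vertices smoothly across its surface — can be sketched with standard barycentric interpolation. This is an illustrative sketch, not the paper's implementation; the function name and array layout are assumptions:

```python
import numpy as np

def interpolate_triangle_color(p, tri, colors):
    """Interpolate per-vertex RGB colors at a point p inside a triangle.

    p      -- (2,) query point (2D for simplicity; the same idea extends to 3D)
    tri    -- (3, 2) array of triangle vertex positions
    colors -- (3, 3) array of RGB colors at the three vertices
    """
    a, b, c = tri
    # Solve for barycentric coordinates (u, v, w) with u + v + w = 1:
    # p = u*a + v*b + w*c  =>  p - a = v*(b - a) + w*(c - a)
    T = np.column_stack((b - a, c - a))
    v, w = np.linalg.solve(T, p - a)
    u = 1.0 - v - w
    # Blend the vertex colors with the barycentric weights
    return u * colors[0] + v * colors[1] + w * colors[2]
```

Evaluating this at a vertex reproduces that vertex's color exactly, and colors vary linearly in between, which is what produces the smooth coloring across each triangle.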
