In recent years, 3D scene construction has become increasingly popular in movies and games. However, the effort involved is undoubtedly significant, and simplifying this process has therefore drawn the attention of many researchers. More specifically, the construction of a 3D scene consists of two parts: the creation of 3D objects and their deployment. In general, one possible and popular solution is to reuse previous 3D scene construction results. In this regard, there are at least two types of approaches. The first type places more emphasis on spatial relationships. In particular, by placing a query box in the current scene and comparing its relationships with the other objects in that scene, a desired object can be retrieved from a previous scene if it shares a similar configuration. However, an inappropriate representation of the previous spatial relationships may lead to ambiguous or superfluous retrieval results. The second type focuses on the generation of a single object. Such a method may either start from an initial model and gradually evolve it into a more complex or more specific one by repeatedly selecting similar models from a database, or directly synthesize a new model by combining multiple models from the database. This paper proposes a framework that not only integrates the two types of approaches just mentioned, but also unifies the two aforementioned ways of constructing models. In addition, the representation of spatial relationships is further refined so that more of the desired retrieval results can be obtained, together with a meaningful object class scheme that facilitates the interaction involved in model construction.
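As a rough illustration of the retrieval-by-spatial-relationships idea mentioned above, the following Python sketch encodes the relations between a user-placed query box and the other objects in the current scene as simple feature vectors, then returns the object from previously built scenes whose recorded configuration is most similar. This is only a minimal sketch under assumed simplifications (axis-aligned boxes, center-offset and size-ratio features, greedy matching); the names `Box`, `relation_vector`, and `retrieve_similar` are hypothetical and not identifiers from the paper.

```python
# Hypothetical sketch of spatial-relationship-based retrieval; not the
# paper's actual representation or algorithm.
from dataclasses import dataclass
from typing import List, Tuple
import math


@dataclass
class Box:
    """Axis-aligned bounding box: center (x, y, z) and size (w, h, d)."""
    center: Tuple[float, float, float]
    size: Tuple[float, float, float]
    label: str = ""


def relation_vector(a: Box, b: Box) -> List[float]:
    """Encode the relation of box `a` to box `b` as a crude feature:
    normalized center offset plus a volume ratio."""
    offset = [ac - bc for ac, bc in zip(a.center, b.center)]
    scale = max(max(b.size), 1e-6)
    ratio = (a.size[0] * a.size[1] * a.size[2]) / max(
        b.size[0] * b.size[1] * b.size[2], 1e-6
    )
    return [o / scale for o in offset] + [ratio]


def scene_signature(query: Box, context: List[Box]) -> List[List[float]]:
    """Relations of the query box to every other object in its scene."""
    return [relation_vector(query, other) for other in context]


def signature_distance(s1: List[List[float]],
                       s2: List[List[float]]) -> float:
    """Greedy matching of relation vectors; smaller means more similar."""
    total = 0.0
    for v1 in s1:
        total += min((math.dist(v1, v2) for v2 in s2), default=float("inf"))
    return total


def retrieve_similar(query: Box, current_context: List[Box],
                     stored_scenes: List[Tuple[Box, List[Box]]]) -> Box:
    """Return the stored object whose recorded spatial configuration
    best matches that of the query box in the current scene."""
    query_sig = scene_signature(query, current_context)
    best_obj, best_score = None, float("inf")
    for candidate, candidate_context in stored_scenes:
        score = signature_distance(
            query_sig, scene_signature(candidate, candidate_context))
        if score < best_score:
            best_obj, best_score = candidate, score
    return best_obj
```

Under this simplification, an ambiguous or superfluous result arises exactly when the chosen relation encoding assigns nearly identical signatures to objects the user would consider different, which is the kind of representation issue the abstract points to.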