Nowadays, vast amounts of multimedia content are being produced, archived, and digitized, resulting in great troves of potentially valuable data. Examples include user-generated content, such as images, videos, text, and audio posted on social media and wikis, and content provided by official publishers and distributors, such as digital libraries, organizations, and online museums. This digital content can serve as a valuable source of inspiration to the creative industries, such as architecture and gaming, to produce new innovative assets or to enhance and (re-)use existing ones. However, in its current form, this content is difficult to reuse and repurpose due to the lack of appropriate solutions for its retrieval, analysis, and integration into the design process. In this article, we present V4Design, a novel framework for the automatic content analysis, linking, and seamless transformation of heterogeneous multimedia content to help architects and virtual reality game designers establish innovative value chains and end-user applications. By integrating and intelligently combining state-of-the-art technologies in computer vision, 3-D generation, text analysis and generation, and semantic integration and interlinking, V4Design provides architects and video game designers with innovative tools to draw inspiration from archive footage and documentaries, ultimately supporting the design process.