Abstract

Design by analogy is a design ideation strategy that draws inspiration from source domains to generate design concepts in target domains. Recently, many computational methods have been proposed to measure similarities between source and target domains and thereby build connections between them. However, most existing methods explore either visual or semantic cues of the concepts in the source and target domains but ignore the integration of the two modalities. In fact, humans have a remarkable visual reasoning ability to transfer knowledge learned from objects in familiar categories (source domains) to recognize objects from unfamiliar categories (target domains). In this paper, we propose a visual reasoning framework to support design by visual analogy. The central challenge of this research is how computational methods can mimic the process of human visual reasoning, which fuses visual and semantic knowledge. In the framework, a convolutional neural network (CNN) is applied to learn visual knowledge from objects in familiar categories. A hierarchy-based graph convolutional network (HGCN) is proposed to transfer the learned visual knowledge from familiar to unfamiliar categories according to their semantic distances. Finally, unfamiliar objects can be reasoned about and recognized based on the transferred visual knowledge. Extensive experiments on a mechanical component benchmark dataset demonstrate the favorable performance of the proposed methods.
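The transfer mechanism the abstract describes can be sketched in miniature: a graph convolutional network propagates semantic knowledge along a category hierarchy so that classifier vectors for unfamiliar categories are inferred from their semantic neighbors, and an object's CNN feature is then scored against every category's classifier. The sketch below is a hedged illustration only, not the paper's HGCN: the category graph, embedding sizes, and random (untrained) weights are all assumptions, and a plain feature vector stands in for a real CNN feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical category graph: 4 familiar + 2 unfamiliar mechanical-part
# categories; edges stand in for an assumed semantic hierarchy.
n_cat = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]

A = np.eye(n_cat)                  # adjacency with self-loops (A + I)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
A_hat = A / np.sqrt(d[:, None] * d[None, :])   # D^{-1/2}(A+I)D^{-1/2}

emb_dim, hid_dim, feat_dim = 8, 12, 16
X = rng.normal(size=(n_cat, emb_dim))          # semantic embedding per category
W1 = rng.normal(size=(emb_dim, hid_dim)) * 0.1  # untrained weights: sketch only
W2 = rng.normal(size=(hid_dim, feat_dim)) * 0.1

# Two-layer GCN: each propagation step mixes a category's representation with
# its semantic neighbors', so familiar categories' knowledge reaches the
# unfamiliar ones; the output is one classifier vector per category.
H = np.maximum(A_hat @ X @ W1, 0.0)            # ReLU
classifiers = A_hat @ H @ W2                   # shape (n_cat, feat_dim)

# Recognize an object from an unfamiliar category: score its (stand-in) CNN
# feature against every category's predicted classifier.
cnn_feature = rng.normal(size=feat_dim)
scores = classifiers @ cnn_feature
pred = int(np.argmax(scores))
```

In the paper's setting the GCN weights would be trained so that, for familiar categories, the predicted classifiers match those learned by the CNN; the same trained propagation then yields classifiers for categories with no training images.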

