Abstract
In order to develop novel solutions for complex systems in increasingly competitive markets, it may be advantageous to generate large numbers of design concepts and then to identify the most novel and valuable ideas. However, it can be difficult to process, review, and assess thousands of design concepts. Motivated by this need, we develop and demonstrate an automated method for design concept assessment. In the method, machine learning technologies are first applied to extract ontological data from design concepts. Then, a filtering strategy and quantitative metrics are introduced that enable creativity rating based on the ontological data. The method is tested empirically: design concepts were crowd-generated for a variety of actual industry design problems and opportunities, yielding over 4,000 human-generated concepts for assessment. The empirical evaluation assesses: (1) the correspondence of the automated ratings with human creativity ratings; (2) whether concepts selected using the method are scored highly by another set of crowd raters; and (3) whether high-scoring designs correlate with industrial technology development. The method provides a possible avenue for rating design concepts deterministically. A highlight is that a subset of designs selected automatically from a large set of candidates was scored higher than a subset selected by humans when evaluated by a set of third-party raters. The results hint at bias in human design concept selection and encourage further study of this topic.