Abstract

Conceptual Blending (CB) theory describes the cognitive mechanisms by which humans construct new conceptual spaces through the blending of two input spaces. CB theory has primarily been used as a method for interpreting creative artefacts, while recently it has been utilised in the context of computational creativity for the algorithmic invention of new concepts. Examples in the domain of music include employing CB interpretatively to explain musical semantic structures based on song lyrics, or on the relations between body gestures and musical structures. Recent work on generative applications of CB has shown that a proper low-level representation of the input spaces allows the generation of consistent and sometimes surprising blends. However, explicitly blending high-level musical features (as discussed in the interpretative studies) is hardly feasible with a mere low-level representation of objects. Additionally, selecting the features that are most salient in the context of two input spaces and relevant background knowledge, and that should therefore be preserved and integrated in new, interesting blends, has not yet been tackled in a cognitively pertinent manner. This paper proposes a novel approach to generating new material that allows the blending of high-level features by combining low-level structures, based on statistically computed salience values for each high-level feature extracted from data. The proposed framework is applied to a fundamental yet intricate musical task, namely melodic generation. The examples presented herein allow an insightful examination of what the proposed approach does, revealing new possibilities and prospects.
