Abstract

This article abstracts and generalises a well-studied paradigm in visual, event-related-potential-based brain–computer interfaces, from the spelling of characters that form words to the visually encoded discrimination of shape features that form design aggregates. It first identifies technologies in neuroscience and neuropsychology of particular interest for integrating fast cognitive responses into generative design, and proposes an ensemble of linear classifiers as a machine learning model suited to the challenging characteristics of electroencephalography (EEG) data. It then presents experiments in encoding shape features for generative models through a mechanism of visual context updating and a computational implementation of vision as inverse graphics, suggesting that discriminative neural phenomena among event-related potentials, such as the P300, may support a visual articulation strategy for modelling in generative design.
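
As a rough illustration of the classification approach named above (not the article's implementation; the data, epoch shapes, and parameter values here are invented for the sketch), the following Python fragment trains a bootstrap ensemble of linear discriminant classifiers to separate simulated P300 target epochs from non-target epochs, averaging the members' decision scores at prediction time:

```python
# A minimal sketch of an ensemble of linear classifiers for P300 vs.
# non-target EEG epoch discrimination. Synthetic data stands in for
# real recordings; all shapes and values are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake epoched EEG: 400 trials x 8 channels x 50 time samples.
# Target (P300) trials receive a small positive deflection late in the epoch.
n_trials, n_channels, n_samples = 400, 8, 50
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)      # 1 = target, 0 = non-target
X[y == 1, :, 30:40] += 0.5                 # simulated P300 deflection

X = X.reshape(n_trials, -1)                # flatten epochs to feature vectors
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Ensemble: each LDA member is fit on a bootstrap resample of the
# training trials; their decision scores are averaged for prediction.
n_members = 15
members = []
for _ in range(n_members):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    clf = LinearDiscriminantAnalysis()
    clf.fit(X_train[idx], y_train[idx])
    members.append(clf)

scores = np.mean([m.decision_function(X_test) for m in members], axis=0)
accuracy = np.mean((scores > 0) == y_test)
print(f"ensemble accuracy on synthetic epochs: {accuracy:.2f}")
```

Averaging scores across bootstrap-trained linear members is one common way to stabilise classification on noisy, high-dimensional, low-trial-count EEG data; the article's own model may differ in ensemble construction and features.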
