Abstract

In a world of rich, complex, and demanding audio environments, intelligent systems can mediate our interaction with the sounds around us—both to enable meaningful, aesthetic experiences and to offload work from humans to computational agents. Drawing from several years of our research, we suggest that the design of such systems must be driven by a deep understanding of auditory cognition. In this article, we discuss two concrete approaches we take toward cognition-informed interface design—one that begins with sounds themselves to form explicit, contextualized cognitive models, built on the foundations of large-data parsing infrastructure; and one that begins with the individual, built from intuition about the influence of cognitive state on perception. We point toward an unexplored and compelling future at their intersection.
