Abstract

With information technologies increasingly involved in areas such as (online) shopping, entertainment, and advertisement, computer systems must be able to process Kansei information, i.e. information relevant to users' sensibilities. Rather than modelling the biology of users' sensibilities, we suggest a functional approach by modelling the translation process between different modalities of expression of the same Kansei concept. We hypothesise that this translation process can be grounded in the categorisation of users' perception, i.e. the extraction of structures in multimedia information. Because this translation process is intrinsically variable, we propose a computational agent, called K-Agent, able to learn categories in its visual perception and interactively evolve a translation language. The K-Agent consists of three main modules: a multi-feature image processing unit, a learning kernel that iteratively constructs the translation language, and a feedback interpreter that incorporates self-supervision and user feedback to structurally tune the learning kernel. The K-Agent concept has been evaluated in a real-world application involving user Kansei: filtering images against a given user Kansei impression. Our experimental results demonstrate the feasibility of the concept as well as superior performance compared with manually filtering the output of existing search engines.
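The abstract gives no implementation details; purely as an illustration, the following Python sketch shows one way the three modules described above might be wired together. All class names, method names, and the nearest-category rule are hypothetical assumptions, not taken from the paper.

```python
# Illustrative sketch only: every name and rule below is hypothetical
# and is not drawn from the paper.
from dataclasses import dataclass, field

@dataclass
class FeatureExtractor:
    """Multi-feature image processing unit: maps an image to a feature vector."""
    def extract(self, image) -> list[float]:
        # Placeholder: a real system would compute e.g. colour/texture features.
        return [0.0, 0.0, 0.0]

@dataclass
class LearningKernel:
    """Iteratively builds the translation language as labelled perceptual categories."""
    categories: dict[str, list[float]] = field(default_factory=dict)

    def categorize(self, features: list[float]) -> str:
        # Assign the features to the nearest learned category (assumed rule).
        return min(self.categories, default="unknown",
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(self.categories[c], features)))

    def update(self, label: str, features: list[float]) -> None:
        self.categories[label] = features

@dataclass
class FeedbackInterpreter:
    """Tunes the learning kernel from self-supervision and user feedback."""
    kernel: LearningKernel

    def apply(self, label: str, features: list[float], accepted: bool) -> None:
        if accepted:  # Reinforce the category only when the user accepts it.
            self.kernel.update(label, features)

class KAgent:
    def __init__(self) -> None:
        self.extractor = FeatureExtractor()
        self.kernel = LearningKernel()
        self.feedback = FeedbackInterpreter(self.kernel)

    def filter(self, images, impression: str):
        """Keep images whose learned category matches the Kansei impression."""
        return [img for img in images
                if self.kernel.categorize(self.extractor.extract(img)) == impression]
```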
