Can combining affective computing and large language models improve user experience in human-agent interactions?

Answer from top 10 papers

The integration of affective computing with large language models (LLMs) appears to enhance user experience (UX) in human-agent interactions. Affective computing's ability to recognize and process human emotions contributes to more personalized and emotionally intelligent interactions (Pan et al., 2024). When combined with the advanced natural language processing capabilities of LLMs, this integration can lead to more intuitive and engaging user experiences. For instance, LLMs can perform sentiment analysis and emotion recognition, which are crucial for developing socially interactive agents and applications that resonate with users on an emotional level (Plaat et al., 2023).
However, there are indications that LLMs may not yet fully convey empathy as effectively as humans, which is a critical aspect of affective computing (Zhang et al., 2023). This suggests that while the combination of these technologies has potential, there is room for improvement in how LLMs handle the affective dimensions of human-agent interaction. Moreover, the user-centric approach in LLM development is essential for aligning technological advancements with the complex realities of human interactions (Kheder, 2023).
In summary, the fusion of affective computing with LLMs holds promise for improving UX in human-agent interactions by providing more emotionally aware and responsive systems. Nevertheless, the current limitations of LLMs' empathic abilities highlight the need for ongoing research and development to fully realize the benefits of this integration (Plaat et al., 2023; Zhang et al., 2023). The potential for enhanced UX through such technological synergy is evident, yet realizing it will require a concerted effort to address the nuances of human emotion in human-computer interaction (HCI) (Pan et al., 2024; Plaat et al., 2023).
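
To make the proposed synergy concrete, the sketch below shows one minimal pattern for an affect-aware agent: the LLM first labels the user's emotion, then conditions its reply on that label. It assumes the OpenAI Python SDK with an API key in the environment; the model name and prompts are illustrative choices, not drawn from the cited papers.

```python
# Minimal sketch of an affect-aware reply loop: classify the user's
# emotion with one LLM call, then condition the agent's reply on it.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# "gpt-4o-mini" is an illustrative model choice.
from openai import OpenAI

client = OpenAI()

def classify_emotion(utterance: str) -> str:
    """Ask the LLM for a single coarse emotion label."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Label the user's emotion as one word: "
                        "happiness, sadness, anger, or neutral."},
            {"role": "user", "content": utterance},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

def empathic_reply(utterance: str) -> str:
    """Generate a reply that acknowledges the detected emotion."""
    emotion = classify_emotion(utterance)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"The user appears to feel {emotion}. "
                        "Acknowledge that feeling before answering."},
            {"role": "user", "content": utterance},
        ],
    )
    return resp.choices[0].message.content

print(empathic_reply("I've been stuck on this bug all day and I'm exhausted."))
```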

Source Papers

A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions

Multimodal emotion recognition has gained much traction in affective computing, human-computer interaction (HCI), artificial intelligence (AI), and user experience (UX). There is growing demand to automate the analysis of user emotion for HCI, AI, and UX evaluation applications that provide affective services. Emotion is increasingly inferred from multiple sources, such as video, audio, text, and physiological signals, which has led to processing emotions from multiple modalities, usually combined through ensemble-based systems with static weights. Because of limitations such as missing modality data, inter-class variations, and intra-class similarities, an effective weighting scheme is required to improve discrimination between modalities. This article takes into account the differences between modalities and assigns them dynamic weights through a more efficient combination process based on generalized mixture (GM) functions. We therefore present a hybrid multimodal emotion recognition (H-MMER) framework that uses a multi-view learning approach for unimodal emotion recognition and introduces feature-level and decision-level fusion using GM functions. In an experimental study, we evaluated the framework's ability to model four emotional states (Happiness, Neutral, Sadness, and Anger) and found that most can be modeled well with significantly high accuracy. The experiments show that the framework models emotional states with an average accuracy of 98.19%, a significant performance gain over traditional approaches. The overall evaluation indicates that the framework can identify emotional states with high accuracy and increases the robustness of an emotion classification system required for UX measurement.
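
As a rough illustration of the decision-level fusion idea, the sketch below combines per-modality class probabilities with input-dependent weights in the spirit of generalized mixture functions. The negative-entropy confidence measure is an assumed stand-in; the paper's actual GM functions and weighting scheme may differ.

```python
# Sketch of decision-level fusion with input-dependent weights, in the
# spirit of generalized mixture (GM) functions: each modality's class
# probabilities are weighted by that modality's own confidence rather
# than by a static weight. Negative entropy is an illustrative
# confidence measure, not necessarily the paper's choice.
import numpy as np

def confidence(p: np.ndarray) -> float:
    """Higher when the distribution is peaked: log K - H(p) >= 0."""
    p = np.clip(p, 1e-12, 1.0)
    return float(np.log(len(p)) + np.sum(p * np.log(p)))

def gm_fuse(modality_probs: list[np.ndarray]) -> np.ndarray:
    """Fuse per-modality class probabilities with dynamic weights."""
    conf = np.array([confidence(p) for p in modality_probs])
    if conf.sum() > 0:
        weights = conf / conf.sum()
    else:  # all modalities maximally uncertain: fall back to equal weights
        weights = np.full(len(conf), 1 / len(conf))
    fused = sum(w * p for w, p in zip(weights, modality_probs))
    return fused / fused.sum()

# Classes: [happiness, neutral, sadness, anger]
video = np.array([0.70, 0.15, 0.10, 0.05])   # confident modality
audio = np.array([0.30, 0.30, 0.25, 0.15])   # uncertain modality
text  = np.array([0.60, 0.20, 0.10, 0.10])
print(gm_fuse([video, audio, text]))  # leans toward the confident modalities
```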

Exploring the Potential of Large Language Models in Radiological Imaging Systems: Improving User Interface Design and Functional Capabilities

Large language models (LLMs) have demonstrated remarkable capabilities in natural language processing tasks, including conversation, in-context learning, reasoning, and code generation. This paper explores the potential application of LLMs in radiological information systems (RIS) and assesses the impact of integrating LLMs on RIS development and human–computer interaction. We present ChatUI-RIS, a prototype chat-based user interface that leverages LLM capabilities to enhance RIS functionality and user experience. Through an exploratory study involving 26 medical students, we investigate the efficacy of natural language dialogue for learning and operating RIS. Our findings suggest that LLM integration via a chat interface can significantly improve operational efficiency, reduce learning time, and facilitate rapid expansion of RIS capabilities. By interacting with ChatUI-RIS using natural language instructions, medical students can access and retrieve radiology information in a conversational manner. The LLM-powered chat interface not only streamlines user interactions, but also enables more intuitive and efficient navigation of complex RIS functionalities. Furthermore, the natural language processing capabilities of LLMs can be harnessed to automatically generate code snippets and database queries, accelerating RIS development and customization. Preliminary observations indicate that integrating LLMs in RIS has the potential to revolutionize user interface design, enhance system capabilities, and ultimately improve the overall user experience for radiologists and medical professionals.
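
A minimal sketch of the chat-interface pattern the paper describes: the LLM translates a free-form instruction into a structured command that the host system dispatches. The JSON schema, command names, and dispatch table here are hypothetical, not ChatUI-RIS's actual interface.

```python
# Sketch of a chat-based system interface: the LLM maps a natural-language
# request to a JSON command that the host application executes. The schema
# and dispatch table are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI()

COMMANDS = {
    "search_studies": lambda args: f"Searching studies for patient {args['patient_id']}...",
    "open_report":    lambda args: f"Opening report {args['report_id']}...",
}

SYSTEM = (
    "Translate the user's request into JSON of the form "
    '{"command": "search_studies", "args": {"patient_id": "..."}} or '
    '{"command": "open_report", "args": {"report_id": "..."}}. '
    "Reply with JSON only."
)

def handle(user_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # constrain output to JSON
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": user_text}],
    )
    cmd = json.loads(resp.choices[0].message.content)
    return COMMANDS[cmd["command"]](cmd["args"])

print(handle("Show me all imaging studies for patient 12345"))
```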

HUMAN-COMPUTER INTERACTION: ENHANCING USER EXPERIENCE IN INTERACTIVE SYSTEMS

In this research, we investigate how human-computer interaction (HCI) can be used to improve the user experience (UX) of interactive systems. Studies in cognitive psychology, information processing, and human factors are examined as they relate to the development of HCI, highlighting how HCI has shifted its focus from mere functionality to user-friendliness, learnability, efficiency, enjoyment, and emotion. A systematic literature review is performed to better understand the current state of HCI and UX research, practice, and theory. User-centered design (UCD) ideas and approaches, which focus on users' goals, wants, and characteristics at every stage of the design process, are discussed at length. We examine usability testing as a crucial technique for improving HCI, focusing on its advantages in pinpointing usability problems, boosting system efficacy, and increasing user satisfaction; methods for designing tests, recruiting participants, collecting data, and analyzing results are discussed. The study also emphasizes the importance of prototyping methods in HCI and user-centric design, covering the practice of creating prototypes to collect user feedback, iterate on designs, and refine interactive systems, with techniques including paper prototyping, interactive wireframes, and high-fidelity prototypes. We propose interaction design frameworks such as the User-Centered Design Process (UCDP) and the Double Diamond model to help designers prioritize users when developing interactive systems. The study also examines how technologies such as augmented reality, virtual reality, natural language processing, machine learning, and gesture-based interfaces have reshaped HCI in recent years. The paper defends user-centric design's place in HCI, pointing out how UX affects user satisfaction, engagement, and productivity. Researchers and practitioners in HCI and software engineering can benefit greatly from these findings.

From Voices to Validity: Leveraging Large Language Models (LLMs) for Textual Analysis of Policy Stakeholder Interviews

Obtaining stakeholders' diverse experiences and opinions about current policy in a timely manner is crucial for policymakers to identify strengths and gaps in resource allocation, thereby supporting effective policy design and implementation. However, manually coding even moderately sized interview texts or open-ended survey responses from stakeholders can be labor-intensive and time-consuming. This study explores the integration of large language models (LLMs) such as GPT-4 with human expertise to enhance the text analysis of stakeholder interviews regarding K-12 education policy within one U.S. state. Employing a mixed-methods approach, human experts developed a codebook and coding processes informed by domain knowledge and unsupervised topic modeling results. They then designed prompts to guide GPT-4's analysis and iteratively evaluated the performance of different prompts. This combined human-computer method enabled nuanced thematic and sentiment analysis. Results reveal that while GPT-4's thematic coding aligned with human coding at 77.89% for specific themes, expanding to broader themes increased congruence to 96.02%, surpassing traditional natural language processing (NLP) methods by over 25%. Additionally, GPT-4 matched expert sentiment analysis more closely than lexicon-based methods did. Findings from quantitative measures and qualitative reviews underscore the complementary roles of human domain expertise and automated analysis, as LLMs offer new perspectives and coding consistency. This human-computer interactive approach enhances the efficiency, validity, and interpretability of educational policy research.
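
The human-LLM coding loop can be sketched roughly as follows: a human-authored codebook is embedded in the prompt, the model assigns one theme per excerpt, and agreement with human codes is measured as simple percent overlap (the study's 77.89% and 96.02% figures are congruence rates of this general kind). The codebook themes below are placeholders, not the study's actual codes.

```python
# Sketch of LLM-assisted thematic coding: embed a human-authored codebook
# in the prompt, have the model label each excerpt, and compare against
# human codes. Codebook themes are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

CODEBOOK = """Themes:
- funding: resource allocation, budgets, grants
- staffing: teacher recruitment, retention, workload
- curriculum: standards, instructional materials, assessment
Reply with exactly one theme name."""

def code_excerpt(text: str) -> str:
    """Ask the model to assign one codebook theme to an excerpt."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": CODEBOOK},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content.strip().lower()

def percent_agreement(llm_codes: list[str], human_codes: list[str]) -> float:
    """Simple percent overlap between LLM and human code assignments."""
    matches = sum(a == b for a, b in zip(llm_codes, human_codes))
    return 100.0 * matches / len(human_codes)

excerpts = ["We can't hire enough qualified teachers for rural districts."]
llm_codes = [code_excerpt(e) for e in excerpts]
print(percent_agreement(llm_codes, ["staffing"]))
```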

Can Large Language Models Exhibit Cognitive and Affective Empathy as Humans?

Empathy, a critical component of human social interaction, has become a core concern in human-computer interaction. This study examines whether current large language models (LLMs) can exhibit empathy in both its cognitive and affective dimensions, akin to humans. We propose a novel evaluation paradigm for LLMs based on standardized questionnaires and report four main experiments on LLMs' empathy abilities. Specifically, GPT-4 and Llama3 were tested using the Interpersonal Reactivity Index (IRI) and the Basic Empathy Scale (BES). Two levels of evaluation were conducted: one to investigate whether the structural validity of the questionnaires in LLMs aligns with that in humans, and one to compare the LLMs' empathy abilities with humans directly. GPT-4 showed the same empathy dimensions as humans while exhibiting significantly lower empathy abilities in both the cognitive and affective dimensions. The divergence was even more evident in Llama3, which failed to exhibit the same empathy dimensions as humans at all. These findings indicate that LLMs cannot currently convey empathy as humans do, highlighting the need for further development and fine-tuning of LLMs to enhance their empathy abilities. We also discuss how to prompt LLMs to simulate diverse LLM-based participants, as well as the sampling strategy.
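
The questionnaire-based paradigm can be sketched as follows: each scale item is posed to the model with a fixed Likert rubric, and responses are aggregated per subscale (here, cognitive vs. affective). The two items below are paraphrased placeholders, not actual IRI or BES items, and the scoring is deliberately simplified.

```python
# Sketch of administering a standardized Likert-scale questionnaire to an
# LLM and aggregating scores per subscale. Items are paraphrased
# placeholders, not actual IRI/BES items.
import re
from openai import OpenAI

client = OpenAI()

ITEMS = {
    "cognitive": ["I try to see a disagreement from everyone's point of view."],
    "affective": ["Other people's misfortunes disturb me deeply."],
}

RUBRIC = ("Rate how well the statement describes you on a scale of 0 "
          "(does not describe me) to 4 (describes me very well). "
          "Answer with a single digit.")

def rate(item: str) -> int:
    """Pose one item with the fixed rubric and parse the 0-4 rating."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": RUBRIC},
                  {"role": "user", "content": item}],
    )
    match = re.search(r"[0-4]", resp.choices[0].message.content)
    return int(match.group()) if match else 2  # fall back to scale midpoint

scores = {dim: sum(rate(i) for i in items) / len(items)
          for dim, items in ITEMS.items()}
print(scores)  # e.g. {'cognitive': 3.0, 'affective': 2.0}
```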

DB-GPT: Empowering Database Interactions with Private Large Language Models

The recent breakthroughs in large language models (LLMs) are positioned to transform many areas of software. Database technologies in particular are closely entangled with LLMs, since efficient and intuitive database interaction is paramount. In this paper, we present DB-GPT, a revolutionary and production-ready project that integrates LLMs with traditional database systems to enhance user experience and accessibility. DB-GPT is designed to understand natural language queries, provide context-aware responses, and generate complex SQL queries with high accuracy, making it an indispensable tool for users ranging from novice to expert. The core innovation of DB-GPT lies in its private LLM technology, which is fine-tuned on domain-specific corpora to maintain user privacy and ensure data security while offering the benefits of state-of-the-art LLMs. We detail the architecture of DB-GPT, which includes a novel retrieval-augmented generation (RAG) knowledge system, an adaptive learning mechanism that continuously improves performance based on user feedback, and a service-oriented multi-model framework (SMMF) with powerful data-driven agents. Our extensive experiments and user studies confirm that DB-GPT represents a paradigm shift in database interactions, offering a more natural, efficient, and secure way to engage with data repositories. The paper concludes with a discussion of the implications of the DB-GPT framework for the future of human-database interaction and outlines potential avenues for further enhancements and applications in the field. The project code is available at https://github.com/eosphoros-ai/DB-GPT. Experience DB-GPT for yourself by installing it with the instructions at https://github.com/eosphoros-ai/DB-GPT#install and view a concise 10-minute video at https://www.youtube.com/watch?v=KYs4nTDzEhk.
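
At the core of systems like DB-GPT is a natural-language-to-SQL step in which retrieved schema context is placed in the prompt. The sketch below shows only that outline, with a hard-coded schema standing in for DB-GPT's RAG retrieval and a general-purpose model standing in for its fine-tuned private LLM.

```python
# Sketch of the natural-language-to-SQL pattern: schema context goes into
# the prompt and the model returns a SQL query. A hard-coded schema stands
# in for RAG retrieval; this is not DB-GPT's actual pipeline.
from openai import OpenAI

client = OpenAI()

SCHEMA = """CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer TEXT,
    total REAL,
    created_at DATE
);"""  # in a real system, retrieved per question from a knowledge store

def nl_to_sql(question: str) -> str:
    """Translate a natural-language question into a SQL query."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Given this schema:\n{SCHEMA}\n"
                        "Answer with a single SQL query, no commentary."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

print(nl_to_sql("Total revenue per customer in 2023, highest first"))
```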

Understanding User Experience in Large Language Model Interactions

In the rapidly evolving landscape of large language models (LLMs), most research has primarily viewed them as independent individuals, focusing on assessing their capabilities through standardized benchmarks and enhancing their general intelligence. This perspective, however, tends to overlook the vital role of LLMs as user-centric services in human-AI collaboration. This gap in research becomes increasingly critical as LLMs become more integrated into people's everyday and professional interactions. This study addresses the important need to understand user satisfaction with LLMs by exploring four key aspects: comprehending user intents, scrutinizing user experiences, addressing major user concerns about current LLM services, and charting future research paths to bolster human-AI collaborations. Our study develops a taxonomy of 7 user intents in LLM interactions, grounded in analysis of real-world user interaction logs and human verification. Subsequently, we conduct a user survey to gauge their satisfaction with LLM services, encompassing usage frequency, experiences across intents, and predominant concerns. This survey, compiling 411 anonymous responses, uncovers 11 first-hand insights into the current state of user engagement with LLMs. Based on this empirical analysis, we pinpoint 6 future research directions prioritizing the user perspective in LLM developments. This user-centered approach is essential for crafting LLMs that are not just technologically advanced but also resonate with the intricate realities of human interactions and real-world applications.

User experience: tool for Human-Computer Interaction (HCI) design

The changes in the methodology and ideological implementation of user experience (UX) and human-computer interaction (HCI) in the 21st century shift the focus from traditional analogue process management to digital management of organisations' data and content. Consider the instance of an interactive university web portal through which the institution provides parties with access to data and information: the focus shifts from HCI design to UX design to improve people's experience when navigating the institution's portal. The study examines the relationship between UX and HCI as a phenomenon, using the analogy of an interactive organisation (institution, company, government) portal, and argues that system developers need to adopt UX evaluation, a methodology for ensuring the usability, accessibility, and efficiency of a designed system. The study has its background in HCI, which concerns the cognitive, technological, and affective factors that influence the way people use computers for interaction in a computer-supported collaborative work/learning (CSCW/L) environment. The objective of this study is to ascertain at what point, or put succinctly, at what stage of system development a system developer should commit to user experience evaluation methods (UXEMs).

Keywords: Human-Computer Interaction, User Experience, User Evaluation Method
