Can combining affective computing and large language models improve user experience in human-agent interactions?
Answer from top 10 papers
The integration of affective computing with large language models (LLMs) appears to enhance user experience (UX) in human-agent interactions. Affective computing's ability to recognize and process human emotions contributes to more personalized and emotionally intelligent interactions (Pan et al., 2024). When combined with the advanced natural language processing capabilities of LLMs, this integration can lead to more intuitive and engaging user experiences. For instance, LLMs can perform sentiment analysis and emotion recognition, which are crucial for developing socially interactive agents and applications that resonate with users on an emotional level (Plaat et al., 2023).
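To make the pipeline concrete, the following sketch shows how a detected emotion can condition an agent's reply. This is a minimal illustration under assumed details: the keyword lexicon, emotion labels, and response prefixes are invented for demonstration, and a production system would replace `detect_emotion` with an LLM-based or trained classifier rather than keyword matching.

```python
# Minimal sketch of an affect-aware response loop.
# Assumption: lexicon, labels, and prefixes below are illustrative only;
# real systems would use an LLM or trained emotion classifier here.

AFFECT_LEXICON = {
    "happy": "joy", "glad": "joy", "love": "joy",
    "sad": "sadness", "miss": "sadness", "lonely": "sadness",
    "angry": "anger", "furious": "anger", "hate": "anger",
}

def detect_emotion(utterance: str) -> str:
    """Return the first emotion whose cue word appears, else 'neutral'."""
    for token in utterance.lower().split():
        word = token.strip(".,!?")
        if word in AFFECT_LEXICON:
            return AFFECT_LEXICON[word]
    return "neutral"

def empathetic_prefix(emotion: str) -> str:
    """Condition the agent's reply style on the detected emotion."""
    prefixes = {
        "joy": "That's wonderful to hear! ",
        "sadness": "I'm sorry you're feeling this way. ",
        "anger": "I understand your frustration. ",
        "neutral": "",
    }
    return prefixes[emotion]

def respond(utterance: str, base_reply: str) -> str:
    """Prepend an emotion-appropriate acknowledgement to the agent's reply."""
    return empathetic_prefix(detect_emotion(utterance)) + base_reply

print(respond("I feel so lonely today", "Would you like to talk about it?"))
# -> I'm sorry you're feeling this way. Would you like to talk about it?
```

The design point is the separation of concerns the literature describes: an affect-recognition component supplies an emotion signal, and the language-generation component (here a fixed string, in practice an LLM) is conditioned on it.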
However, there are indications that LLMs do not yet convey empathy as effectively as humans, which is a critical aspect of affective computing (Zhang et al., 2023). This suggests that while the combination of these technologies has potential, there is room for improvement in how LLMs handle the affective dimensions of human-agent interaction. Moreover, a user-centric approach to LLM development is essential for aligning technological advancements with the complex realities of human interactions (Kheder, 2023).
In summary, the fusion of affective computing with LLMs holds promise for improving UX in human-agent interactions by providing more emotionally aware and responsive systems. Nevertheless, the current limitations in LLMs' ability to convey empathy highlight the need for ongoing research and development to fully realize the benefits of this integration (Plaat et al., 2023; Zhang et al., 2023). The potential for enhanced UX through such technological synergy is evident, yet realizing it requires a concerted effort to address the nuances of human emotion in the context of human-computer interaction (HCI) (Pan et al., 2024; Plaat et al., 2023).
Source Papers