Young adults may feel embarrassed when disclosing sensitive information to their parents, and parents may likewise avoid sharing sensitive aspects of their lives with their children. How to design interactive interventions that are sensitive to the needs of both younger and older family members when mediating sensitive information remains an open question. In this paper, we explore the integration of large language models (LLMs) with social robots. Specifically, we use GPT-4 to adapt different Robot Communication Styles (RCS) for a social robot mediator designed to elicit self-disclosure and mediate health information between parents and young adults living apart. We design four literature-informed RCS: three LLM-adapted (Humorous, Self-deprecating, and Persuasive) and one manually created (Human-scripted). In an online experiment with 183 participants, we compare the four RCS across two groups, adults with children (Parents) and young adults without children (Young Adults), assessing perceptions of Likeability, Usefulness, Helpfulness, Relatedness, and Interpersonal Closeness. Our results indicate that both Parents and Young Adults favoured the Human-scripted and Self-deprecating RCS over the other two, and that the Self-deprecating RCS led to greater relatedness than the Humorous RCS. Our qualitative findings reveal the challenges people face in disclosing health information to family members and shed light on who typically assumes the role of family facilitator, two areas in which social robots can play a key role. These findings offer insights for integrating LLMs with social robots in health mediation and other contexts involving the sharing of sensitive information.