Abstract

Media systems that personalize their offerings keep track of users’ tastes by constantly learning from their activities. Some systems use this characteristic of machine learning to encourage users with statements like “the more you use the system, the better it can serve you in the future.” However, it is not clear whether users indeed feel encouraged and consider the system helpful and beneficial, or begin to worry about jeopardizing their privacy in the process. We conducted a between-subjects experiment (N = 269) to find out. Guided by the HAII-TIME model (Sundar, 2020), we examined the effects of both explicit and implicit interface cues conveying that the machine is learning. Data indicate that users consider the system a helper and tend to trust it more when it is transparent about its learning, regardless of the quality of its performance and the degree of explicitness in conveying that it is learning from their activities. The study found no evidence that the machine’s disclosure of learning from its users raises privacy concerns. We discuss theoretical and practical implications of deploying machine learning cues to enhance the user experience of AI-embedded systems.
