Abstract
Artificial intelligence (AI) is transforming healthcare operations. Yet little is known, particularly in the context of preventive care, about how laypeople perceive and accept AI and change their behavior accordingly. Grounded in a theoretical framework of trust, this study bridges this gap by exploring individuals' acceptance of AI‐based preventive health interventions and their subsequent health behavior change, which is critical for preventive care providers' operational and business performance. Through a randomized field experiment with 15,000 users of a mobile health app, complemented by a survey, we first show that the use and disclosure of AI in preventive health interventions improve their effectiveness. However, individuals are less likely to accept and achieve the health behavior change suggested by AI than when they receive similar interventions from human health experts. We also observe that the effectiveness of AI‐based interventions can be improved by combining them with human expert opinions, increasing their algorithmic transparency, or emphasizing their genuine care and warmth. These results collectively suggest that, unlike with conventional technologies, it is AI's deficient affective trust, rather than its comparable cognitive trust, that plays a decisive role in the acceptance of AI‐based preventive health interventions. This study contributes to the literature on the role of new‐age information technologies in behavioral operations management, consumer marketing, and healthcare, as well as the role of trust in technology acceptance. It also offers practical implications for more effective management of AI in preventive care operations and for promoting consumers' health behavior.