Abstract
When consumers avoid taking algorithmic advice, it can prove costly both to marketers (whose algorithmic product offerings go unused) and to consumers themselves (who fail to reap the benefits that algorithmic predictions often provide). In a departure from previous research focusing on when algorithm aversion proves more or less likely, we sought to identify and remedy one reason why it occurs in the first place. In seven pre-registered studies, we find that consumers tend to avoid algorithmic advice on the often faulty assumption that algorithms, unlike their human counterparts, cannot learn from mistakes. This, in turn, offers an inroad by which to reduce algorithm aversion: highlighting algorithms' ability to learn. Process evidence, through both mediation and moderation, examines why consumers fail to trust algorithms that err across a variety of prediction domains and how different theory-driven interventions can solve the practical problem of enhancing trust in, and consequential choice of, algorithms.