Abstract

Algorithmic or automated decision-making has become commonplace: firms implement rule-based or statistical models that decide whether to provide services to customers based on their past behavior. Policy-makers must determine whether, and how, to require firms to explain the decisions their algorithms make, especially when those algorithms are “unexplainable,” that is, subject to legal or commercial confidentiality restrictions or too complex for humans to understand. We study consumer responses to goal-oriented, or “teleological,” explanations, which present the purpose or objective of an algorithm without revealing its mechanism, making them candidates for explaining decisions made by “unexplainable” algorithms. In a field experiment with a technology firm and several online lab experiments, we demonstrate the effectiveness of teleological explanations and identify conditions under which teleological and mechanistic explanations are equally satisfying. Participants perceive teleological explanations as fair, even though an algorithm with a fair goal may employ an unfair mechanism. Our results show that firms may benefit from offering teleological explanations for unexplainable algorithm behavior. Regulators can mitigate the attendant risks by educating consumers about the potential disconnect between an algorithm’s goal and its mechanism.
