Abstract
Should firms that apply machine learning algorithms in their decision making make their algorithms transparent to the users they affect? Despite the growing calls for algorithmic transparency, most firms keep their algorithms opaque, citing potential gaming by users that may negatively affect the algorithm’s predictive power. In this paper, we develop an analytical model to compare firm and user surplus with and without algorithmic transparency in the presence of strategic users and present novel insights. We identify a broad set of conditions under which making the algorithm transparent actually benefits the firm. We show that, in some cases, even the predictive power of the algorithm can increase if the firm makes the algorithm transparent. By contrast, users may not always be better off under algorithmic transparency. These results hold even when the predictive power of the opaque algorithm comes largely from correlational features and the cost for users to improve them is minimal. We show that these insights are robust under several extensions of the main model. Overall, our results show that firms should not always view manipulation by users as bad. Rather, they should use algorithmic transparency as a lever to motivate users to invest in more desirable features. This paper was accepted by D. J. Wu, information systems. Supplemental Material: The online appendix is available at https://doi.org/10.1287/mnsc.2022.4475.