Abstract

The growing use of artificial intelligence (AI) algorithms in business raises regulators' concerns about consumer protection. While pricing and recommendation algorithms have undeniable consumer-friendly effects, they can also harm consumers through, for instance, the implementation of dark patterns: algorithms designed to restrict consumers' freedom of choice or manipulate their decisions. While dark patterns are hardly new, AI offers significant possibilities for enhancing them. Consumer protection faces several pitfalls. Sanctioning manipulation is all the more difficult because the damage may be diffuse and hard to detect. Symmetrically, both ex-ante regulation and requirements for algorithmic transparency may be insufficient, if not counterproductive. Possible solutions can be found, on the one hand, in counter-algorithms that consumers can use and, on the other hand, in the development of a compliance logic and, more particularly, of tools that allow companies to self-assess the risks induced by their algorithms. Such an approach echoes the one developed in corporate social and environmental responsibility. This contribution shows how self-regulatory and compliance schemes used in these areas can inspire regulatory schemes for addressing the ethical risks of restricting and manipulating consumer choice.


