Abstract

Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union's draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people's behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target cases of AI-based manipulation. We propose a definition of ‘subliminal techniques’ that (a) is grounded on a plausible interpretation of the legal text; (b) addresses all or most of the underlying ethical concerns motivating the prohibition; (c) is defensible from a scientific and philosophical perspective; and (d) does not over-reach in ways that impose excessive administrative and regulatory burdens. The definition provides guidance for design teams seeking to pursue responsible and ethically aligned AI innovation.
