Abstract

Human behaviour is increasingly governed by automated decisional systems based on machine learning (ML) and ‘Big Data’. While these systems promise a range of benefits, they also throw up a congeries of challenges, not least for our ability as humans to understand their logic and ramifications. This chapter maps the basic mechanics of such systems, the concerns they raise, and the degree to which these concerns may be remedied by data protection law, particularly those provisions of the EU General Data Protection Regulation that specifically target automated decision-making. Drawing upon the work of Ulrich Beck, the chapter employs the notion of ‘cognitive sovereignty’ to provide an overarching conceptual framing of the subject matter. Cognitive sovereignty essentially denotes our moral and legal interest in being able to comprehend our environs and ourselves. Focus on this interest, the chapter argues, fills a blind spot in scholarship and policy discourse on ML-enhanced decisional systems, and is vital for grounding claims for greater explicability of machine processes.
