Abstract

This paper confronts assertions made by Dr Michael Veale, Dr Reuben Binns, and Professor Lilian Edwards in “Algorithms that remember: Model Inversion Attacks and Data Protection Law”, as well as the general trend in the courts to broaden the definition of ‘personal data’ under Article 4(1) GDPR to include ‘everything data-related’. Veale et al use examples from computer science to suggest that some models, when subjected to certain attacks, reveal personal data; accordingly, they argue that data subject rights could be exercised against the model itself. Drawing on a computer science perspective and on case law of the Court of Justice of the European Union, we argue that effective machine-learning model governance can be achieved without widening the scope of personal data, through already existing provisions of data protection and other areas of law. Extending the scope of personal data to machine-learning models would render the protections granted to intellectual endeavours within the black box ineffectual.

Highlights

  • There are growing calls for regulation of models used to make inferences about an individual

  • A computer science perspective, together with case law of the Court of Justice of the European Union, shows that effective machine-learning model governance can be achieved without widening the scope of personal data, through already existing provisions of data protection and other areas of law

  • In its final report on ‘Disinformation and fake news’, the United Kingdom House of Commons Digital, Culture, Media and Sport Committee called for the extension of the protections of privacy law “beyond personal information to include models used to make inferences about an individual”.[2] Arguably, this reflects the general trend of the Court of Justice of the European Union (CJEU) to extend the meaning of ‘personal data’ to almost everything data-related.[3]

Summary

Introduction

There are growing calls for regulation of models used to make inferences about an individual. While Veale et al recognise that trained models have long been regulated (and protected) by intellectual property law, their approach of extending to models the same protection as personal data requires the re-identification of data subjects from anonymised data. We argue that this is not the case. We conclude that areas of law outside the GDPR are better suited to ensuring the protection of data subjects, and that this can be achieved without expanding the scope of personal data to machine-learning models.
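
The kind of attack underpinning the Veale et al claim can be made concrete with a small sketch. Below is a minimal, illustrative membership inference attack in Python: an overfitted classifier is queried for its prediction confidence, and a simple threshold separates records that were in the training set from those that were not. Everything here (the synthetic data, the choice of model, the threshold) is an assumption for exposition, not a reconstruction of any experiment by Veale et al; the point is only that query access to a model can, under favourable conditions, leak information about its training data.

```python
# Illustrative sketch of a confidence-based membership inference attack.
# All data, model choices, and thresholds are assumptions for exposition;
# this is not a reconstruction of any experiment discussed in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Synthetic records standing in for personal data: the first 200 are
# "members" (used for training), the last 200 are "non-members".
X = rng.normal(size=(400, 8))
y = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)
X_member, y_member = X[:200], y[:200]
X_nonmember = X[200:]

# An overfitted model effectively memorises its training set.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_member, y_member)

# The attacker needs only query access to prediction confidences.
conf_member = model.predict_proba(X_member).max(axis=1)
conf_nonmember = model.predict_proba(X_nonmember).max(axis=1)

# Members systematically receive higher confidence, so thresholding the
# confidence yields a better-than-chance guess at training-set membership.
threshold = 0.9
guesses = np.concatenate([conf_member, conf_nonmember]) > threshold
truth = np.array([True] * 200 + [False] * 200)
print(f"attack accuracy: {(guesses == truth).mean():.2f} (0.50 = chance)")
```

Whether such leakage suffices, legally, to make the model itself personal data is precisely the question this paper answers in the negative.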

Synopsis of the Veale et al argument
Conceptual discussion of models as data
Data Protection and the “Legal Means” test
Criminal law as deterrence
Market as modality for model governance
Conclusion
