Abstract

In this article, we examine the concept of non-personal data from a law and computer science perspective. The delineation between personal data and non-personal data is of paramount importance for determining the GDPR’s scope of application. This exercise is, however, fraught with difficulty, particularly when it comes to de-personalised data, that is, data that was once personal data but has been processed with the aim of turning it into anonymous data. This article shows that the legal definition of anonymous data is subject to considerable uncertainty. Indeed, the definitions adopted in the GDPR, by the Article 29 Working Party and by national supervisory authorities diverge significantly. Whereas the GDPR accepts that a remaining risk of identification may exist even in relation to anonymous data, others have insisted that no such risk is acceptable. Based on a review of the technical underpinnings of anonymisation, which we then apply to two concrete case studies involving personal data used on blockchains, we conclude that a residual risk of identification always remains when anonymisation is used. The concluding section links this conclusion more generally to the notion of risk in the GDPR.
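The residual-risk point can be illustrated with a minimal, hypothetical sketch that is not taken from the article: even after direct identifiers such as names are removed, remaining quasi-identifiers (here, assumed fields like ZIP code, birth year and sex) can be linked against auxiliary data to re-identify individuals. The datasets, field names and the link function below are purely illustrative assumptions.

```python
# Illustrative toy example (not from the article): a linkage attack showing
# why removing direct identifiers does not eliminate re-identification risk.

# "Anonymised" dataset: names removed, but quasi-identifiers retained.
anonymised_records = [
    {"zip": "10115", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "10115", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
    {"zip": "20095", "birth_year": 1984, "sex": "F", "diagnosis": "hypertension"},
]

# Auxiliary data an attacker might already hold (e.g. a public register).
auxiliary_records = [
    {"name": "Alice Example", "zip": "10115", "birth_year": 1984, "sex": "F"},
    {"name": "Bob Example",   "zip": "20095", "birth_year": 1975, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")


def link(anonymised, auxiliary, keys=QUASI_IDENTIFIERS):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = []
    for aux in auxiliary:
        candidates = [
            rec for rec in anonymised
            if all(rec[k] == aux[k] for k in keys)
        ]
        # A unique match re-identifies the record despite "anonymisation".
        if len(candidates) == 1:
            matches.append((aux["name"], candidates[0]["diagnosis"]))
    return matches


if __name__ == "__main__":
    for name, diagnosis in link(anonymised_records, auxiliary_records):
        print(f"Re-identified: {name} -> {diagnosis}")
```

Running this sketch uniquely links one "anonymised" health record back to a named individual, which is the kind of residual identification risk the article's technical review addresses.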
