Abstract

The GDPR imposes special requirements on the processing of sensitive data, but it is not clear whether these requirements are sufficient to prevent the risk associated with such processing, because that risk is not clearly defined. Furthermore, the GDPR's clauses on the processing of sensitive data, and on profiling based on it, do not sufficiently account for the fact that individual data subjects are part of complex systems, whose emergent properties can betray sensitive traits even in non-sensitive data. The algorithms used to process big data are largely opaque to both controllers and data subjects: if an algorithm's output has discriminatory effects that coincide with sensitive traits because the algorithm inadvertently discerns such an emergent property, this may go unnoticed. At present, there are no remedies that can prevent the discovery of sensitive traits from non-sensitive data. Managing the risks of processing data that can reveal sensitive traits therefore requires a strategy combining precautionary measures, public discourse, and enforcement until those risks are more fully understood. Insights from complex systems science are likely to be useful in better understanding these risks.
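
The mechanism the abstract refers to, sensitive traits being inferred from ostensibly non-sensitive data, can be illustrated with a minimal sketch. The example below is not taken from the paper: it uses synthetic data and hypothetical feature names (postcode cluster, spending score) and assumes scikit-learn is available. It shows how a classifier trained only on non-sensitive inputs can still produce outcomes that differ sharply across a sensitive group it never sees, because the inputs act as proxies for that trait.

```python
# Illustrative sketch (synthetic data, hypothetical feature names):
# a model trained only on "non-sensitive" features can still proxy
# a sensitive trait when the two are correlated in the population.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive trait (e.g. membership of a protected group); never given to the model.
sensitive = rng.integers(0, 2, size=n)

# Non-sensitive features that happen to correlate with the trait.
postcode_cluster = sensitive * 0.8 + rng.normal(0, 0.5, size=n)
spending_score = sensitive * 0.6 + rng.normal(0, 0.7, size=n)
X = np.column_stack([postcode_cluster, spending_score])

# Profiling target (e.g. "reject application"), itself partly shaped
# by the sensitive trait in the historical training data.
y = (0.5 * sensitive + rng.normal(0, 0.6, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)  # sees only the non-sensitive inputs
pred = model.predict(X)

# Disparate outcome: predicted-positive rates differ by sensitive group,
# even though the trait was never an explicit input to the model.
for group in (0, 1):
    rate = pred[sensitive == group].mean()
    print(f"group {group}: predicted-positive rate = {rate:.2f}")
```

Because neither the controller nor the data subject inspects the learned weights, this kind of proxy effect is exactly the sort of outcome that, as the abstract notes, may remain unnoticed.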
