Abstract

Increasingly, digital technology can be used not only to measure relevant parameters of human anatomy and activity but also to gain exploratory information about mental faculties such as cognitive processes, personal preferences, and affective states. Although decoding the conceptual and non-conceptual content of mental states is unattainable at the current state of technological development, several digital technologies, such as neural interfaces, affective computing systems and digital behavioural technologies, make it possible to establish increasingly reliable statistical associations between certain data patterns and mental activities such as memories, intentions and emotions. Furthermore, AI and big-data analytics potentially allow these activities to be explored not only retrospectively but also in real time and in a predictive manner. These converging technological developments are increasingly enabling what can be defined as the digital mind, namely the moment-by-moment quantification of the individual-level human mind. In this article, we introduce the notion of 'mental data', which we define as any data that can be organized and processed to infer the mental states of a person, including their cognitive, affective and conative states. We then analyse the existing legal protection for this broad category of mental data by assessing meaningful risks to individuals' rights and freedoms. Our analysis focuses on the EU GDPR, since it is one of the most advanced and comprehensive data protection laws in the world and also has an extraterritorial impact on other legal systems. In particular, we reflect on the nature of mental data, the lawfulness of their processing under the different legal bases and purposes, and the relevant compliance measures. We conclude that, although the contextual definition of 'sensitive data' might appear inadequate to cover many examples of mental data (e.g., emotions or other thoughts not related to health status, sexuality or political/religious beliefs), the GDPR, through an extensive interpretation of the 'risk' indexes as proposed by the EDPB, seems to be an adequate tool to prevent or mitigate the risks related to mental data processing. Accordingly, we recommend that interpreters and stakeholders focus on the characteristics of the processing, rather than merely on the category of data at issue. To this end, we call for a 'Mental Data Protection Impact Assessment' (MDPIA), i.e. a specific DPIA procedure that can help to better assess and mitigate the risks that mental data processing poses to the fundamental rights and freedoms of individuals.

