Abstract

In the digital age, privacy is an asset worth protecting. It gives users the opportunity to decide what they want to reveal about themselves to others. Although privacy is legally protected, e.g., by the General Data Protection Regulation (GDPR) when personal data are processed, users often have no freedom to decide which data are collected and processed, for what purpose, and to what extent. Usually, systems support only an all-or-nothing approach. To give users back more fine-grained control over their privacy, it is necessary to let them selectively approve data for related functionalities. This addresses the requirement of data minimization. Consent to the processing of personal data should take personal preferences into account and must be withdrawable at any time. The consequences of giving or withdrawing consent should be explained to users. In this paper, we present an approach that addresses the above challenges. Our approach enables us to support (I) specific, in-situ explanations of data processing to users on request and (II) in-situ opportunities to make fine-grained decisions about data usage. For that, we use a context-based adaptive learning environment and a specific domain model. This domain model is used by domain experts who can consider content, legal, and technical requirements equally and who know what is relevant for specific learning and collaboration situations. They define the related content, legal, and technical policies that must be considered at runtime. We illustrate how our approach gives users back fine-grained control over their privacy and data by enabling them to selectively approve data for the functionalities at hand.
