Abstract

Companies expect significant benefits from automated decision-making with personal data; however, research indicates that legal uncertainty exists among private controllers regarding the interpretation of provisions relevant to automated decision-making under the General Data Protection Regulation (GDPR). Article 35 GDPR obliges private controllers to carry out a Data Protection Impact Assessment (DPIA) before deploying automated decision-making (ADM) that affects individuals. Assessing potential fundamental rights impacts is part of that DPIA. The objective of this article is to provide private controllers with a practical approach for conducting a DPIA of automated decision-making in order to detect potential impacts on fundamental rights. The approach indicates levels of impact and the types of measures a controller should consider to achieve appropriate risk management. The impact assessment is based on four benchmarks: (i) to identify fundamental rights potentially at risk; (ii) to identify risks occurring in the ADM system at the design stage and during operation; (iii) to balance fundamental rights risks against the controller interests involved; and (iv) to establish to what extent data subjects exercise control over data processing. By responding to the benchmarks, controllers identify risk levels that indicate the types of measures that should be considered to achieve fundamental rights-compliant ADM. This approach enables controllers to account to data subjects and supervisory authorities for how they intend to manage potential impacts on fundamental rights. The proposed approach seeks to foster compliant, fair, and transparent automated decision-making.
