Abstract

Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance for data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fairness goals for prediction-based decisions in the insurance domain. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting differ from those of using predictive algorithms in other sectors (e.g., medical diagnosis, sentencing). Here we focus on the trade-off in the extent to which one can pursue indirect non-discrimination versus predictive accuracy. The moral assessment of this trade-off depends on the context of application, that is, on the consequences of inaccurate risk predictions in the insurance domain.

Highlights

  • Insurance has always been a data-driven business that relies on the statistical analysis of data about past cases and risk predictions regarding existing or prospective clients

  • We provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context

  • Prioritarianism, like utilitarianism, supports incentives if they benefit persons in the worst-off group in absolute terms, even if better-off clients benefit from the premium reduction, proportionally, more than the worst off

Introduction

Insurance has always been a data-driven business that relies on the statistical analysis of data about past cases and risk predictions regarding existing or prospective clients. Algorithms can be used to assign a personalized premium; they can also make or suggest decisions, for example, whether to reject a client or pay their claim. We provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. We build interdisciplinary connections between debates on discrimination and fairness across computer science and philosophy (Binns 2018; Custers et al. 2012; Gajane 2017) and those in the ethics of insurance, two debates that have not yet been connected in the literature. We combine all these arguments into a decision on whether to use "fairer" (or, more precisely, less indirectly discriminatory) data-driven predictive tools.
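The trade-off between predictive accuracy and indirect non-discrimination discussed here can be illustrated with a minimal sketch. All data, the pricing rules, and the use of a postcode risk score as a proxy for a protected group are hypothetical; the parity gap is one simple fairness metric among many:

```python
from statistics import mean

# Toy records: (postcode_risk_score, group). The postcode score acts as a
# proxy for the protected attribute "group", so pricing on it accurately
# produces indirect discrimination.
clients = [
    (0.9, "A"), (0.8, "A"), (0.7, "A"),
    (0.3, "B"), (0.2, "B"), (0.4, "B"),
]

def premium(score, base=100.0, loading=200.0):
    """Risk-proportional premium: predictively accurate, but it transmits
    the group disparity encoded in the proxy score."""
    return base + loading * score

def parity_gap(records, price):
    """Difference in mean premium between groups: a crude measure of
    indirect discrimination (demographic-parity style)."""
    by_group = {}
    for score, group in records:
        by_group.setdefault(group, []).append(price(score))
    means = [mean(prices) for prices in by_group.values()]
    return max(means) - min(means)

accurate_gap = parity_gap(clients, premium)      # risk-based pricing: gap of 100.0
flat_gap = parity_gap(clients, lambda s: 150.0)  # flat premium: gap of 0.0, but no risk signal
```

The flat premium eliminates the group disparity at the cost of all predictive accuracy; intermediate pricing rules trade one against the other, which is the trade-off whose moral assessment this paper addresses.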

Discrimination in the insurance domain
Definitions of discrimination types
Algorithms against direct and indirect discrimination
Avoid direct discrimination in machine learning
Algorithms against indirect discrimination
Trade-offs
Fairness and choice
When is indirect discrimination morally objectionable?
Overall assessment of reasons for using anti-discrimination techniques
Moral reasons in favor of accurate predictive models
Adverse selection
Incentives
Morally acceptable inaccuracy
Findings
Conclusion