Abstract

Clients may feel trapped into sharing their private digital data with insurance companies to get a desired insurance product or premium. However, private insurance must collect some data to offer products and premiums appropriate to the client's level of risk. This situation creates tension between the value of privacy and common insurance business practice. We argue for three main claims. First, coercing clients to share private data with insurers is pro tanto wrong because it violates the autonomous choice of a privacy-valuing client. Second, we maintain that, irrespective of whether the client is coerced, the choice to accept digital surveillance by insurers makes it harder for the client to protect his or her autonomy (and to act spontaneously and authentically). This threat to autonomy provides a further reason why coercing customers into digital surveillance is pro tanto morally wrong. Third, having identified an economically plausible process that involves no direct coercion by insurers yet leads to the adoption of digital surveillance, we argue that such an outcome generates further threats to autonomy. This threat gives individuals a pro tanto reason to prevent the process. We highlight the freedom dilemma faced by regulators who aim to prevent this outcome by constraining market freedoms, and we argue for the need for further moral and empirical research on this question.

Highlights

  • Insurance is a genuinely data-driven industry and shows a keen interest in many applications of big data analytics and artificial intelligence, such as telematics in car insurance, fraud detection capabilities, or quantified-self applications for health and life insurance

  • Our analysis of threats is not based on the claim that this is the only right approach; we also explore the potential for moral wrongness that even noncoercive market transactions involving digital surveillance generate

  • A regulator aiming to prevent this outcome could argue that even individuals who voluntarily adopt digital surveillance face a serious risk of losing their autonomy, spontaneity, or authenticity with respect to other choices made under insurance surveillance



Introduction

Insurance is a genuinely data-driven industry and shows a keen interest in many applications of big data analytics and artificial intelligence, such as telematics in car insurance, fraud detection capabilities, or quantified-self applications for health and life insurance.

The first part of the addition (Q's available options are made worse) and the last part (the consequence is intended by P to make A-ing less desirable) are intended to distinguish threats from offers and warnings, respectively. Concerning the former, to determine whether Q's options are made better or worse as a result of P's decision to bring about such a consequence, we rely here on the status quo baseline, not the moral baseline, as argued above. In the case of a conditional warning, the target of the communication is made aware of this unfreedom, that is, the fact that the most preferred conjunctive choice from the option set is not possible. This unfreedom does not exist because P intends to make one of Q's alternatives (for example, joining the trade union) less desirable. Coercion is the result of a threat that has worked as intended.
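Although the argument is conceptual, the two conditions above amount to a small decision procedure. The following Python sketch is purely illustrative and not part of the paper; the names (`Proposal`, `classify`, the boolean fields) are our own assumptions about how one might encode the status-quo-baseline test and the intent test that separate threats from offers and warnings.

```python
from dataclasses import dataclass


@dataclass
class Proposal:
    """An illustrative model of P communicating a conditional consequence to Q.

    All field names are our own; the paper gives no formalization.
    """
    worsens_status_quo: bool  # Does P's consequence make Q's options worse
                              # relative to the status quo baseline?
    intended_by_p: bool       # Does P intend the consequence to make
                              # Q's A-ing less desirable?


def classify(p: Proposal) -> str:
    """Classify a conditional communication following the two conditions
    discussed above (status quo baseline, not the moral baseline)."""
    if p.worsens_status_quo and p.intended_by_p:
        return "threat"   # worsened options plus deterrent intent
    if not p.worsens_status_quo:
        return "offer"    # options improved relative to the status quo
    return "warning"      # worsened options, but the unfreedom does not
                          # exist because P intends to deter Q's A-ing


# Hypothetical example: an insurer announces a premium increase unless the
# client shares telematics data, intending to make refusal less attractive.
print(classify(Proposal(worsens_status_quo=True, intended_by_p=True)))    # threat
print(classify(Proposal(worsens_status_quo=True, intended_by_p=False)))   # warning
print(classify(Proposal(worsens_status_quo=False, intended_by_p=False)))  # offer
```

On this encoding, coercion is simply a `"threat"` that succeeds in changing Q's choice, which is why the classification, not the outcome, carries the moral weight in the analysis above.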

Distinguishing Threats from Warnings and Offers in the Insurance Domain
Further Research
Conclusion
Compliance with Ethical Standards