"Artificial Intelligence (AI) systems are increasingly being developed and various applications are already used in medical practice. This development promises improvements in prediction, diagnostics and treatment decisions. As one example, in the field of psychiatry, AI systems can already successfully detect markers of mental disorders such as depression. By using data from social media (e.g. Instagram or Twitter), users who are at risk of mental disorders can be identified. This potential of AI-based depression detectors (AIDD) opens chances, such as quick and inexpensive diagnoses, but also leads to ethical challenges especially regarding users’ autonomy. The focus of the presentation is on autonomy-related ethical implications of AI systems using social media data to identify users with a high risk of suffering from depression. First, technical examples and potential usage scenarios of AIDD are introduced. Second, it is demonstrated that the traditional concept of patient autonomy according to Beauchamp and Childress does not fully account for the ethical implications associated with AIDD. Third, an extended concept of “Health-Related Digital Autonomy” (HRDA) is presented. Conceptual aspects and normative criteria of HRDA are discussed. As a result, HRDA covers the elusive area between social media users and patients. "