As artificial intelligence (AI) gains prominence in pathology and medicine, the ethical implications and potential biases within such integrated AI models will require careful scrutiny. Ethics and bias are important considerations in our practice settings, especially as an increasing number of machine learning (ML) systems are integrated within our various medical domains. Such ML-based systems have demonstrated remarkable capabilities in specific tasks including, but not limited to, image recognition, natural language processing, and predictive analytics. However, bias within such AI-ML models can inadvertently lead to unfair and potentially detrimental outcomes. The sources of bias in ML models are numerous but typically fall into three main categories: data bias, development bias, and interaction bias. Specific contributors include the training data itself, algorithmic bias, feature engineering and selection issues, clinical and institutional bias (i.e., practice variability), reporting bias, and temporal bias (i.e., changes in technology, clinical practice, or disease patterns). Therefore, despite the potential of these AI-ML applications, their deployment in day-to-day practice raises noteworthy ethical concerns. To address ethics and bias in medicine, a comprehensive evaluation process is required, one that encompasses all aspects of such systems from model development through clinical deployment. Addressing these biases is crucial to ensure that AI-ML systems remain fair, transparent, and beneficial to all. This review discusses the relevant ethical and bias considerations of AI-ML specifically within the pathology and medical domains.
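As one illustration of the kind of evaluation the abstract calls for, the minimal sketch below (a hypothetical example, not drawn from the source) audits a model's predictions for data or interaction bias by comparing accuracy across patient subgroups, such as practice sites. The function name `subgroup_accuracy_gap` and the toy cohorts `site_A`/`site_B` are assumptions for illustration only; a large gap flags a potential disparity but does not by itself identify its cause.

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Compute per-subgroup accuracy and the largest disparity across groups.

    Illustrative bias-audit sketch: a large accuracy gap between subgroups
    (e.g., institutions, demographic cohorts) can signal data or interaction
    bias worth investigating before clinical deployment.
    """
    groups = np.asarray(groups)
    accuracies = {}
    for g in np.unique(groups):
        mask = groups == g
        accuracies[str(g)] = float(np.mean(y_true[mask] == y_pred[mask]))
    gap = max(accuracies.values()) - min(accuracies.values())
    return accuracies, gap

# Toy data: model predictions for two hypothetical practice sites.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
groups = ["site_A"] * 4 + ["site_B"] * 4

per_group, gap = subgroup_accuracy_gap(y_true, y_pred, groups)
print(per_group)                 # {'site_A': 0.75, 'site_B': 0.5}
print(f"accuracy gap: {gap:.2f}")  # accuracy gap: 0.25
```

In practice such an audit would be repeated over multiple metrics (e.g., sensitivity and specificity per subgroup) and re-run over time, since temporal bias can re-introduce disparities after deployment.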