Abstract

Introduction

Artificial intelligence (AI) is an emerging technology that promises advances across blood cancer discovery and care delivery. There are already FDA-approved uses of AI in hematology and oncology, but few data are available about its use or utility. Concerns have been raised about AI's potential biases, its "explainability," and who is responsible for its use. Oncologists' familiarity with AI and their perspectives on these issues are not well known and are critical to understanding the ethical deployment of AI in blood cancer care.

Methods

From January to July 2023, we performed a national cross-sectional survey of practicing oncologists. A draft survey instrument was developed by a team with content and methodological expertise, and iterative refinement occurred through cognitive debriefing with 5 practicing hematologists/oncologists. The final instrument included 24 questions covering the domains of AI familiarity, predictions, explainability, and responsibilities. A random sample of US oncologists was identified using the National Plan & Provider Enumeration System. Paper surveys were mailed with $25 incentive gift cards; reminder letters and phone calls were used for non-responders, with an electronic survey option provided. Responses were analyzed descriptively. Chi-squared (χ²) testing assessed bivariate associations between responses and participant practice setting, with odds ratios (OR) reported.

Results

Of 399 mailed surveys, 12 were undeliverable and 201 were completed, for a response rate of 51.9%. Question missingness was <1%. Participants were 63.2% White, 16.9% Indian Asian, 10.0% Eastern Asian or Pacific Islander, and 4.5% Black; 6.0% were Hispanic. They represented 37 states. The majority were male (63.7%), 40-59 years old (54.2%), and had no specific training in AI (52.2%); 68.2% had at least 11 years in practice, and 29.9% practiced at a primary academic hospital. Questions assessing overall familiarity with and predictions related to AI are shown in Table 1. Respondents at primary academic hospitals were more likely to be familiar with radiology models (OR 2.82, p<0.001) but not with pathology, prognostic, or clinical decision models (all p>0.05). They were also more likely to predict that AI would improve side effect management (OR 2.01, p=0.03) and end-of-life decision making (OR 2.07, p=0.02). High proportions of respondents thought that AI prognostic models (67.1%) and clinical decision models (62.1%) needed to be explainable by clinicians and researchers to be used in clinic. When presented with a scenario in which an FDA-approved AI model selects a different regimen than the oncologist planned to propose (Figure 1), the largest proportion reported they would present both options and let the patient decide (37.3%). Of those recommending a regimen, respondents from primary academic hospitals more frequently recommended the AI-generated regimen (56.1%), while respondents from elsewhere recommended their original regimen (69.0%; OR 2.85, p=0.007). The acceptability of selected direct-to-patient AI model applications is shown in Table 1. A majority of respondents (90.5%) thought that AI developers should be responsible for medico-legal problems caused by AI use; 47.8% thought that physicians using the tool should be responsible. Though most (76.1%) agreed that it was oncologists' responsibility to protect patients from biased AI tools, few (27.3%) were confident in their ability to identify how representative the data used in an AI model were relative to the population with that cancer (including 66.7% of those who agreed it was oncologists' responsibility). More respondents thought that patients should need to consent to the use of AI tools in cancer treatment decisions (81.1%) than in cancer diagnosis (55.7%); those from primary academic hospitals were less likely to respond that patients should consent for treatment or diagnosis (OR 0.43, p=0.02 and OR 0.44, p=0.009, respectively).

Conclusions

In this national survey of oncologists' views on AI, there was optimism about its impact on care, with notable caveats related to end-of-life care, patient trust, and care disparities. Although many approved of using AI models too complex for patients to explain, a substantial proportion deferred to patient choice when AI and physician recommendations differed. There was also dissonance between respondents' accepting responsibility to protect patients from bias and their self-perceived ability to do so.
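As a minimal, illustrative sketch of the statistics reported above (not the study's actual analysis code), the Python snippet below reproduces the response-rate arithmetic from the reported counts and shows how a χ² test and odds ratio can be derived from a 2×2 contingency table; scipy is assumed as the statistics library, and the contingency counts are hypothetical, since the abstract reports only the derived ORs and p-values.

# Minimal sketch of the analyses described in Methods (assumed library: scipy).
from scipy.stats import chi2_contingency

# Response rate: 201 completed of 399 mailed minus 12 undeliverable surveys.
response_rate = 201 / (399 - 12)
print(f"Response rate: {response_rate:.1%}")  # ~51.9%

# Hypothetical 2x2 table for one bivariate association, e.g. familiarity with
# radiology AI models by practice setting.
# Rows: primary academic hospital (yes, no); columns: familiar (yes, no).
table = [[45, 15],
         [73, 68]]

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio for a 2x2 table [[a, b], [c, d]] is (a*d) / (b*c).
(a, b), (c, d) = table
odds_ratio = (a * d) / (b * c)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, OR = {odds_ratio:.2f}")

With counts of this pattern the odds ratio comes out near the reported 2.82, but the study's actual cell counts are not published in the abstract.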
