Abstract

With all the attention currently being paid to the application of artificial intelligence to healthcare, it is easy to forget that the technology is not new. The term ‘artificial intelligence’ itself was coined over 60 years ago [1]. The dominant paradigm of artificial intelligence research until the late 1980s — ‘symbolic artificial intelligence’ or ‘good old-fashioned artificial intelligence’ — relies on human-readable representations of problems and logic [2]. An example of a healthcare tool developed using this approach was MYCIN, which used a knowledgebase of around 600 rules to infer the likely organism causing a bacteraemia and the recommended course of treatment [3]. Although never used in clinical practice, MYCIN is widely credited with demonstrating the power of symbolic representation and reasoning to get computers to solve cognitive tasks, and various ‘expert systems’ inspired by MYCIN were deployed in other sectors throughout the 1980s [1]. By contrast, most of the current excitement around artificial intelligence is focused on machine learning techniques, particularly deep learning, which rely on complex mathematical methods to recognise patterns in data, ‘learn’ from these patterns and subsequently make predictions based on these data [4]. Table 1 outlines five use cases for the application of artificial intelligence to health and care [5]. Clearly, many of these use cases are applicable to clinical oncology.

Table 1. Use cases for artificial intelligence in health and care, with examples of potential specific applications [5]

Process optimisation
• Rota/staff schedule management
• Ambulance dispatch management
• Patient experience analysis

Preclinical research
• Candidate small molecule screening
• Predicting potential side-effects
• Automated analysis of -omics datasets

Clinical pathways
• Analysis of digital imaging, including optical coherence tomography and radiological imaging
• Analysis of clinical conversations
• Prognostication, e.g. prediction of all-cause mortality

Patient-facing applications
• Chatbots
• Symptom checkers
• Closed-loop insulin pumps

Population-level applications
• Prediction of infectious disease outbreaks
• Data-driven targeting of public health spending and other interventions
• Better understanding of risk factors for non-communicable disease, e.g. childhood obesity
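To make the contrast between the two paradigms concrete: a symbolic system encodes its knowledge as explicit, human-readable rules. The toy rule base below is a minimal sketch in the spirit of MYCIN; the rules, certainty factors and function names are invented for illustration and are not taken from MYCIN's actual knowledgebase.

```python
# A toy, hand-written rule base in the spirit of symbolic systems such as
# MYCIN. The rules, certainty factors and names are invented for illustration;
# they are not taken from MYCIN's actual knowledgebase.

def suggest_organisms(findings):
    """Return (organism, certainty) pairs for every rule whose conditions all hold."""
    rules = [
        # (required findings, suggested organism, certainty factor)
        ({"gram_negative", "rod_shaped", "anaerobic"}, "Bacteroides", 0.6),
        ({"gram_positive", "coccus", "clusters"}, "Staphylococcus", 0.7),
    ]
    return [
        (organism, certainty)
        for required, organism, certainty in rules
        if required <= findings  # set inclusion: rule fires only if all conditions are present
    ]

print(suggest_organisms({"gram_negative", "rod_shaped", "anaerobic"}))
# [('Bacteroides', 0.6)] - the 'reasoning' can be read directly from the rules
```

The knowledge here is explicit and auditable. A machine learning system, by contrast, arrives at such associations by fitting parameters to data, leaving no equivalent rule to inspect.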
The automated analysis of imaging investigations is probably the area where artificial intelligence has made the most progress in medicine: most of the Food and Drug Administration (FDA) approvals for artificial intelligence tools are for algorithms with radiological applications [6]. There are a number of reasons for this. Since the widespread adoption of picture archiving and communication systems (PACS) in the 1990s, vast digital datasets have been created that are amenable to deep learning, which has shown particular adeptness at solving computer vision and other perceptual tasks in various sectors [7,8]. Any type of image can be analysed by these tools, including clinical photographs — which may have particular impact on the diagnosis of dermatological malignancies — and histopathological images [9]. Despite grand claims of superhuman accuracy for such algorithms, a meta-analysis of studies comparing deep learning algorithms with healthcare professionals in classifying diseases from medical imaging (currently in preprint) found that performance, as defined by sensitivity and specificity, was comparable [10]. Even if diagnostic accuracy is no better than human, there is still a role for these tools. They could help address the mismatch between the increasing use of imaging and the shortage of trained staff to interpret it: algorithms can be deployed at scale in a way that humans cannot [11]. Furthermore, the output of image classification is not restricted to diagnosis; such algorithms have been trained to predict the response to treatment from computed tomography images [12,13].
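The two metrics used in the meta-analysis above can be made concrete with a few lines of code. The following is a minimal sketch of how sensitivity and specificity are computed from paired labels; the ten 'scan' labels are invented purely for illustration.

```python
# A minimal sketch of the two metrics used to compare algorithms with
# clinicians in the meta-analysis above. All labels are invented;
# 1 = disease present, 0 = disease absent.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ground truth and algorithm output for ten scans
ground_truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
algorithm    = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

sens, spec = sensitivity_specificity(ground_truth, algorithm)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.75, 0.83
```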
The applications in research are equally interesting. Machine learning has been used to predict the therapeutic effects of candidate molecules at the screening stage — potentially reducing the chance of hugely expensive failed trials further down the line — and to identify possible new targets for established drugs [14,15]. Wearable activity monitors are being used to obtain objective measures of physical activity in oncology trials (and in the wider clinical setting), potentially addressing the recall and response biases of the questionnaires normally used to quantify these important outcomes [16,17]. Such devices provide rich data streams that could be subjected to analysis by machine learning to provide novel insights into trial participants' responses to the agents under investigation [18,19]. Another area where machine learning has made great strides is natural language processing: the automated analysis and synthesis of speech and text [20]. With the increasing use of electronic health records in many healthcare systems, myriad academic and commercial groups have sought to replicate the success seen with computer vision by applying deep learning to the analysis of digitised text-based health records. Algorithms have been developed that predict the development of various cancers from a review of electronic health records [21], or all-cause mortality over the next 3–12 months [22].
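The deep learning systems cited above are trained on millions of records with far richer representations, but the general shape of the task, learning an outcome label from free-text records, can be sketched in a few lines. The following is a deliberately simplified stand-in, assuming scikit-learn is available; the notes and labels are invented.

```python
# A deliberately simplified stand-in for the deep learning approaches cited
# above: predicting an outcome label from free-text records. The four 'notes'
# and their labels are invented; real systems train on vastly more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "progressive weight loss, altered bowel habit, iron deficiency anaemia",
    "routine review, no new symptoms, bloods unremarkable",
    "persistent cough, haemoptysis, 40 pack-year smoking history",
    "annual check-up, well, no red-flag symptoms",
]
labels = [1, 0, 1, 0]  # 1 = outcome of interest occurred within follow-up

# Text is converted to TF-IDF features, then a linear classifier is fitted
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

# Probability assigned to an unseen (invented) note
print(model.predict_proba(["weight loss and anaemia under investigation"])[0, 1])
```

Real systems differ enormously in scale and architecture, but this pipeline of a text representation feeding a classifier is the common skeleton.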
Natural language processing applications in oncology have largely failed to live up to their promise thus far, however; the high-profile failure of the collaboration between IBM Watson for Oncology and the University of Texas MD Anderson Cancer Center, after a lack of progress over four years, serves as a salutary lesson about the dangers of over-promising and under-delivering in this space [23,24]. The increasing use of artificial intelligence technologies in healthcare raises a series of important questions for healthcare practitioners, policymakers and patients (see Table 2) [5].

Table 2. Ten major ethical, social and political challenges of the use of artificial intelligence technologies in health and care [5]

1. What effect will artificial intelligence have on human relationships in health and care?
2. How is the use, storage and sharing of medical data affected by artificial intelligence?
3. What are the implications of issues around algorithmic transparency/explainability for health?
4. Will these technologies help eradicate or exacerbate existing health inequalities?
5. What is the difference between an algorithmic decision and a human decision?
6. What do patients and members of the public want from artificial intelligence and related technologies?
7. How should these technologies be regulated?
8. Just because these technologies could enable access to new information, should we always use them?
9. What makes algorithms, and the entities that create them, trustworthy?
10. What are the implications of collaboration between public and private sector organisations in the development of these tools?

These questions cover a broad range of concerns, from the pragmatic to the philosophical, but can be distilled into three overarching themes. The first is consent. This is a central pillar of modern healthcare, particularly given the move in the 19th and 20th centuries towards emphasising the importance of autonomy in treatment decisions [25].
In an era where (i) there may be a degree of autonomy in an algorithmic decision and (ii) the algorithm's internal workings may be more opaque than we are used to with other digital tools, how do patients give meaningful informed consent? The second theme, fairness, has a number of dimensions. Algorithmic bias, for example, is well recognised in technology ethics circles, but is perhaps underappreciated in healthcare. Biases can arise when the data used to train and test algorithms do not accurately reflect the group of people they are meant to represent, which could be a result of inexact measurement, incomplete data gathering or other data collection flaws. This is not a new problem (research conducted by the FDA, for example, shows that African-Americans comprise less than 5% of clinical trial participants and Hispanics just 1%, even though they make up 12% and 16% of the total US population, respectively [26]), but the reliance of machine learning on huge datasets makes data-related bias a pressing one. Moreover, if the process being modelled itself exhibits unfairness, and is simply replicated faithfully in silico, then the algorithm will propagate that unfairness at scale. For example, a biased algorithm meant that Google advertisements for jobs paying more than $200,000 were shown to significantly fewer women than men, reflecting established gender pay gaps [27]. A lack of diversity in the artificial intelligence field, where most developers are white and male, may lead to bias being considered less of a problem, or not being identified at all. All of these issues could result in artificial intelligence algorithms perpetuating, or even exacerbating, the health inequalities that leave minorities and underprivileged members of society with worse health outcomes.
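One practical response to data-related bias is to audit an algorithm's performance for each demographic subgroup separately, rather than reporting a single aggregate figure. The sketch below illustrates the idea on invented data; a real audit would run over a properly held-out test set with clinically meaningful groupings.

```python
# A minimal sketch of a subgroup audit: rather than reporting one aggregate
# figure, report performance separately for each demographic group. All data
# below are invented; a real audit would use a held-out test set.
from collections import defaultdict

# (group, true_label, predicted_label) triples for a hypothetical test set
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 1, 0),
]

by_group = defaultdict(lambda: {"tp": 0, "fn": 0})
for group, truth, pred in records:
    if truth == 1:  # count how each true-positive case was handled, per group
        by_group[group]["tp" if pred == 1 else "fn"] += 1

for group, counts in by_group.items():
    sens = counts["tp"] / (counts["tp"] + counts["fn"])
    print(f"{group}: sensitivity = {sens:.2f}")
# A large gap between groups is a warning sign that the training data may
# under-represent one of them.
```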
Fairness is also at stake when we consider how a publicly funded healthcare system, such as the National Health Service (NHS), gets fair value from collaboration or commercial agreements with the private companies that develop these algorithms. The stakeholders here are not just NHS users and staff members, but the wider tax-paying public. How do we ensure value for them? Finally, the right to health is enshrined in major international agreements, such as the Constitution of the World Health Organization and the UN's Universal Declaration of Human Rights. How do we update our understanding of this basic human right for the artificial intelligence age? If — and it is a big ‘if’ — artificial intelligence can be definitively proven to improve health, or even to provide the same standard of healthcare at lower cost, do people have a right to know how much artificial intelligence is used in their care? Perhaps more controversially, do people have a right not to have artificial intelligence involved in their care at all?

The establishment of NHSX, a new agency within Government under the direct oversight of the Secretary of State for Health and Social Care, points towards a drive in the UK to prioritise the digital transformation of health services, and to ensure increased accountability for it. NHSX has a number of responsibilities — and it remains to be seen whether this degree of centralisation is excessive and will lead to bottlenecks — but one relevant to this discussion is the development and iterative improvement of a ‘Code of conduct for data-driven health and care technology’, which covers artificial intelligence and related technologies [28]. Comprising ten principles, this code outlines what the UK Government expects from developers of these technologies, whether in the private or public sector. Two major themes emerge: (i) the importance of openness from developers (e.g. about their use of data, the limitations of their products, their commercial strategy and their standards) and (ii) the necessity that the development of these tools is user- and stakeholder-driven, not simply imposed in a top-down manner by well-meaning policymakers or technologists. Perhaps more importantly, NHSX is taking the lead on translating lofty principles into practical policy by supporting the development of a toolkit of resources to help developers show compliance, to be published later this year. One area of particular difficulty for policymakers concerns so-called ‘dynamic’ or ‘online learning’ algorithms, in which newly available data are used to update the model continuously [29]. Although not currently used in healthcare, their use in other sectors suggests that it is important to think through the regulatory implications now. Unlike drugs or existing medical devices, these algorithms could foreseeably become very different, over the course of their use, from the original version reviewed by the responsible regulator. How does a regulator ensure the safety and effectiveness of a tool that may be continuously changing? The FDA has opened a public consultation on a series of proposals for dealing with this eventuality, which include the clear setting of performance targets, an agreed system for monitoring performance against these targets and a mechanism for triggering rapid review of the algorithm when performance strays from these pre-set parameters [30].
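The FDA's proposed pattern, pre-set performance targets, ongoing monitoring and a trigger for review, can itself be sketched in code. The following is a minimal illustration using scikit-learn's partial_fit interface for online updates; the data, the sensitivity target and the batch sizes are all invented for illustration.

```python
# A sketch of the monitoring pattern in the FDA proposal: an online-learning
# model is updated on each new batch of cases, performance is checked against
# a pre-agreed target before each update, and a review flag is raised if
# performance strays. Data, target and batch size are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
SENSITIVITY_TARGET = 0.80  # hypothetical pre-set regulatory target

def new_batch(n=200):
    """Simulate a month of labelled cases arriving after deployment."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

model = SGDClassifier()            # supports incremental updates via partial_fit
X0, y0 = new_batch()
model.partial_fit(X0, y0, classes=[0, 1])

for month in range(1, 7):
    X, y = new_batch()
    pred = model.predict(X)        # evaluate BEFORE learning from this batch
    tp = int(((pred == 1) & (y == 1)).sum())
    fn = int(((pred == 0) & (y == 1)).sum())
    sensitivity = tp / (tp + fn)
    if sensitivity < SENSITIVITY_TARGET:
        print(f"month {month}: sensitivity {sensitivity:.2f} below target, trigger review")
    else:
        model.partial_fit(X, y)    # performance acceptable: continue online updates
```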
In his review for the Department of Health and Social Care, Professor Eric Topol highlighted the opportunity for digital technologies, including artificial intelligence, to provide ‘the gift of time’ to healthcare practitioners and their patients [31]. If time-consuming, relatively low-risk processes such as administrative tasks could be offloaded to algorithms, then doctors and nurses could spend more time with patients and their relatives, answering their questions and supporting them as they take treatment decisions. The increasing number of people living with chronic diseases [32,33] may provide greater incentives to go beyond automating routine aspects of care, towards providing more information and support for patients to manage their own conditions with artificial intelligence. Healthcare practitioners — already in short supply worldwide — cannot be with their patients 24/7, whereas digital tools could be available at the touch of a screen or with a voice command. Cancer is increasingly becoming a chronic disease [34,35,36], so cancer survivors may be brought into this orbit of self-care technologies, with everything from questions about symptoms of relapse to their psychological needs supported by artificial intelligence-enabled tools. If these tools allow people with cancer to feel that their health needs are being adequately addressed, then they should be welcomed. However, it is essential that the wishes of prospective users are meaningfully included in the development process, and current evidence strongly suggests that we still want human carers to be involved in difficult discussions, such as those around end-of-life care [5,37]. Oncologists also need to be wary of the language of ‘empowering’ patients through such digital tools. There is a risk that, rather than allowing patients to take truly autonomous decisions about their health, these tools could ‘nudge’ patients to behave as healthcare practitioners expect them to, imposing a normative, monolithic framework of behaviour on individuals rather than respecting their unique needs, drives and expectations [38].
The incorporation of artificial intelligence-enabled digital tools into existing healthcare systems and pathways will be hard. Nevertheless, the combination of their potential benefits and the increasing demand for their use from patients, healthcare administrators and others makes it unlikely that this particular genie can be put back into its bottle. It is up to all oncologists to educate themselves sufficiently about these technologies to be able to understand their opportunities, mitigate their risks and communicate both to their patients.

M.E. Fenech took on a role at Ada Health GmbH on 1 September 2019. The work referred to in this article was made possible by funding provided to Future Advocacy by the Wellcome Trust and NHSX.

References

[1] Nilsson N. The quest for artificial intelligence. Cambridge: Cambridge University Press; 2009.
[2] Haugeland J. Artificial intelligence: the very idea. New edition. Cambridge, MA: MIT Press; 1989.
[3] Shortliffe EH, Buchanan BG. A model of inexact reasoning in medicine. Math Biosci 1975;23:351–379. https://doi.org/10.1016/0025-5564(75)90047-4
[4] Russell SJ, Norvig P. Artificial intelligence: a modern approach. Englewood Cliffs, NJ: Prentice Hall; 1995.
[5] Fenech M, Strukelj N, Buston O. Ethical, social, and political challenges of AI in health. 2018. http://futureadvocacy.com/wp-content/uploads/2018/04/1804_26_FA_ETHICS_08-DIGITAL.pdf [accessed 25 May 2019].
[6] The Medical Futurist. FDA approvals for smart algorithms in medicine in one giant infographic. 2019. https://medicalfuturist.com/fda-approvals-for-algorithms-in-medicine [accessed 7 June 2019].
[7] Arenson RL, Andriole KP, Avrin DE, Gould RG. Computers in imaging and health care: now and in the future. J Digit Imaging 2000;13:145–156.
[8] Alexander A, McGill M, Tarasova A, Ferreira C, Zurkiya D. Scanning the future of medical imaging. J Am Coll Radiol 2019;16:501–507. https://doi.org/10.1016/j.jacr.2018.09.050
[9] Kann BH, Thompson R, Thomas CR Jr, Dicker A, Aneja S. Artificial intelligence in oncology: current applications and future directions. Oncology 2019;33:46–53.
[10] Faes L, Liu X, Kale A, Bruynseels A, Shamdas M, Moraes G, et al. Deep learning under scrutiny: performance against health care professionals in detecting diseases from medical imaging - systematic review and meta-analysis. 2019. https://ssrn.com/abstract=3384923
[11] The Royal College of Radiologists. Clinical radiology UK workforce census 2018 report. London: The Royal College of Radiologists; 2019.
[12] Bibault JE, Giraud P, Durdux C, Taieb J, Berger A, Coriat R, et al. Deep learning and radiomics predict complete response after neo-adjuvant chemoradiation for locally advanced rectal cancer. Sci Rep 2018;8:12611.
[13] Sun R, Limkin EJ, Vakalopoulou M, Dercle L, Champiat S, Han SR, et al. A radiomics approach to assess tumour-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: an imaging biomarker, retrospective multicohort study. Lancet Oncol 2018;19:1180–1191.
[14] Fleming N. How artificial intelligence is changing drug discovery. Nature 2018;557:S55–S57. https://doi.org/10.1038/d41586-018-05267-x
[15] Aliper A, Plis S, Artemov A, Ulloa A, Mamoshina P, Zhavoronkov A. Deep learning applications for predicting pharmacological properties of drugs and drug repurposing using transcriptomic data. Mol Pharm 2016;13:2524–2530. https://doi.org/10.1021/acs.molpharmaceut.6b00248
[16] Gresham G, Schrack J, Gresham LM, Shinde AM, Hendifar AE, Tuli R, et al. Wearable activity monitors in oncology trials: current use of an emerging technology. Contemp Clin Trials 2018;64:13–21. https://doi.org/10.1016/j.cct.2017.11.002
[17] Purswani JM, Ohri N, Champ C. Tracking steps in oncology: the time is now. Cancer Manag Res 2018;10:2439–2447. https://doi.org/10.2147/CMAR.S148710
[18] Kańtoch E. Recognition of sedentary behavior by machine learning analysis of wearable sensors during activities of daily living for telemedical assessment of cardiovascular risk. Sensors 2018;18:3219. https://doi.org/10.3390/s18103219
[19] Banaee H, Ahmed MU, Loutfi A. Data mining for wearable sensors in health monitoring systems: a review of recent trends and challenges. Sensors 2013;13:17472–17500. https://doi.org/10.3390/s131217472
[20] Jurafsky D, Martin JH. Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition. 1st ed. Upper Saddle River, NJ: Prentice Hall; 2000.
[21] Miotto R, Li L, Kidd BA, Dudley JT. Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci Rep 2016;6:26094.
[22] Avati A, Jung K, Harman S, Downing L, Ng A, Shah NH. Improving palliative care with deep learning. BMC Med Inform Decis Mak 2018;18:122. https://doi.org/10.1186/s12911-018-0677-8
[23] Hernandez D. Hospital stumbles in bid to teach a computer to treat cancer. 2017. https://www.wsj.com/articles/hospital-stumbles-in-bid-to-teach-a-computer-to-treat-cancer-1488969011?mod=article_inline [accessed 24 November 2018].
[24] Herper M. MD Anderson benches IBM Watson in setback for artificial intelligence in medicine. 2017. https://www.forbes.com/sites/matthewherper/2017/02/19/md-anderson-benches-ibm-watson-in-setback-for-artificial-intelligence-in-medicine/#37422a163774 [accessed 24 November 2018].
[25] Cocanour CJ. Informed consent - it's more than a signature on a piece of paper. Am J Surg 2017;214:993–997. https://doi.org/10.1016/j.amjsurg.2017.09.015
[26] Buch BD. Progress and collaboration on clinical trials. 2016. https://blogs.fda.gov/fdavoice/index.php/tag/fdasia-section-907/ [accessed 2 May 2019].
[27] Datta A, Tschantz MC, Datta A. Automated experiments on ad privacy settings. Proc Privacy Enhancing Technol 2015;1:92–112. https://doi.org/10.1515/popets-2015-0007
[28] Department of Health and Social Care. Code of conduct for data-driven health and care technology. 2019. https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology [accessed 12 June 2019].
[29] Bottou L. Online algorithms and stochastic approximations. In: Online learning and neural networks. Cambridge: Cambridge University Press; 1998.
[30] Food and Drug Administration. Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) - discussion paper and request for feedback. 2019. https://www.regulations.gov/document?D=FDA-2019-N-1185-0001 [accessed 4 April 2019].
[31] Health Education England. The Topol Review: preparing the healthcare workforce to deliver the digital future. 2019. https://topol.hee.nhs.uk/wp-content/uploads/HEE-Topol-Review-2019.pdf [accessed 16 June 2019].
[32] Kingston A, Robinson L, Booth H, Knapp M, Jagger C; MODEM project. Projections of multi-morbidity in the older population in England to 2035: estimates from the Population Ageing and Care Simulation (PACSim) model. Age Ageing 2018;47:374–380. https://doi.org/10.1093/ageing/afx201
[33] Raghupathi W, Raghupathi V. An empirical study of chronic diseases in the United States: a visual analytics approach. Int J Environ Res Public Health 2018;15:431. https://doi.org/10.3390/ijerph15030431
[34] Phillips JL, Currow DC. Cancer as a chronic disease. Collegian 2010;17:47–50.
[35] Harley C, Pini S, Bartlett YK, Velikova G. Defining chronic cancer: patient experiences and self-management needs. BMJ Support Palliat Care 2012;2:248–255.
[36] Markman M. Commentary: implications of cancer managed as a "chronic illness". Curr Oncol Rep 2011;13:90–91. https://doi.org/10.1007/s11912-010-0148-6
[37] Ipsos MORI. Public views of machine learning: findings from public research and engagement conducted on behalf of the Royal Society. 2017. https://royalsociety.org/~/media/policy/projects/machine-learning/publications/public-views-of-machine-learning-ipsos-mori.pdf [accessed 7 June 2019].
[38] Morley J, Floridi L. The limits of empowerment: how to reframe the role of mhealth tools in the healthcare ecosystem. Sci Eng Ethics 2019. https://doi.org/10.1007/s11948-019-00115-1
