Abstract
Recent revelations concerning data firm Cambridge Analytica’s illegitimate use of the data of millions of Facebook users highlight the ethical and, relatedly, legal issues arising from the use of machine learning techniques. Cambridge Analytica is, or rather was (the revelations brought about its demise), a firm that used machine learning processes to try to influence elections in the US and elsewhere by, for instance, targeting ‘vulnerable’ voters in marginal seats with political advertising. Of course, there is nothing new about political candidates and parties employing firms to engage in political advertising on their behalf, but if a data firm has access to the personal information of millions of voters, and is skilled in the use of machine learning techniques, then it can develop detailed, fine-grained voter profiles that enable political actors to reach a whole new level of manipulative influence over voters. My focus in this paper is not on the highly publicised ethical and legal issues arising from Cambridge Analytica’s activities but rather on some important ethical issues arising from the use of machine learning techniques that have not received the attention and analysis they deserve. I focus on three areas in which machine learning techniques are used or, it is claimed, should be used, and which give rise to problems at the interface of law and ethics (or law and morality; I use the terms “ethics” and “morality” interchangeably). The three areas are profiling and predictive policing (Saunders et al. 2016), legal adjudication (Zeleznikow 2017), and machines’ compliance with legally enshrined moral principles (Arkin 2010). I note that here, as elsewhere, new and emerging technologies are developing rapidly, making it difficult to predict what might or might not be achievable in the future.
For this reason, I have adopted the conservative stance of restricting my ethical analysis to existing machine learning techniques and applications rather than those that are the object of speculation or even of informed extrapolation (Mittelstadt et al. 2015). This has the consequence that what I might regard as a limitation of machine learning techniques, e.g. in respect of predicting novel outcomes or of accommodating moral principles, might be thought by others to be merely a limitation of currently available techniques. After all, has not the recent history of AI shown the naysayers to have been proved wrong? Certainly, AI has seen some impressive results, including the construction of computers that can defeat human experts in complex games, such as chess and Go (Silver et al. 2017), and others that can do a better job than human medical experts at identifying the malignancy of moles and the like (Esteva et al. 2017). However, since, by definition, future machine learning techniques and applications are not yet with us, the general claim that current limitations will be overcome cannot at this time be confirmed or disconfirmed on the basis of empirical evidence.