Abstract

As advancements in machine learning and artificial intelligence (AI) continue at an ever-increasing rate, there are growing concerns over the potential development of lethal autonomous weapons systems (LAWS), commonly known as 'killer robots'.[1] Such systems are defined as any weapon capable of targeting and initiating the use of potentially lethal force without direct human supervision and direct human involvement in lethal decision making.[2] Several countries, including the UK, are developing these weapons for military use, setting the stage for an imminent arms race. The emergence of these technologies would represent the complete automation of lethal harm, which AI experts fear would mark a third revolution in warfare, following gunpowder and nuclear weapons.[3] LAWS would radically violate the ethical principles and moral code that are integral to our profession, necessitating urgent and collective action from the entire healthcare community. The prospect of a world with LAWS generates ethical, legal, and diplomatic apprehensions. These technologies would bring dire humanitarian consequences and geopolitical destabilisation. They would make possible the …

Full Text
Paper version not known