Abstract
One of the basic principles of risk management is that we should always keep an eye on ways that things could go badly wrong, even if they seem unlikely. The more disastrous a potential failure, the more improbable it needs to be, before we can safely ignore it. This principle may seem obvious, but it is easily overlooked in public discourse about risk, even by well-qualified commentators who should certainly know better. The present piece is prompted by neglect of the principle in recent discussions about the potential existential risks of artificial intelligence. The failing is not peculiar to this case, but recent debates in this area provide some particularly stark examples of how easily the principle can be overlooked.