Abstract

The likely near-future creation of artificial superintelligence carries significant risks to humanity. These risks are difficult to conceptualise and quantify, but malicious use of existing artificial intelligence by criminals and state actors is already occurring and poses risks to digital security, physical security and the integrity of political systems. These risks will increase as artificial intelligence moves closer to superintelligence. While there is little research on the risk management tools used in artificial intelligence development, the current global standard for risk management, ISO 31000:2018, is likely used extensively by developers of artificial intelligence technologies. This paper argues that risk management has a common set of vulnerabilities when applied to artificial superintelligence that cannot be resolved within the existing framework; alternative approaches must therefore be developed. Some vulnerabilities are similar to issues posed by malicious threat actors such as professional criminals and terrorists. Like these malicious actors, artificial superintelligence will be capable of rendering mitigation ineffective by working against countermeasures or attacking in ways not anticipated by the risk management process. Criminal threat management recognises this vulnerability and seeks to guide and block the intent of malicious threat actors as an alternative to risk management. An artificial intelligence treachery threat model that acknowledges the failings of risk management and leverages the concepts of criminal threat management and artificial stupidity is proposed. This model identifies emergent malicious behaviour and allows intervention against negative outcomes at the moment of artificial intelligence’s greatest vulnerability.

Highlights

  • Many experts think that machines many times more intelligent than humans will exist by 2075 and that some form of superintelligent machine will exist within the next 25–50 years (Bostrom 2014; Brundage et al. 2018; Meek et al. 2016)

  • This paper assumes that superintelligence is an achievable reality that will arrive within a time frame of 25–50 years, with increasingly powerful artificial intelligence (AI) development leading toward the 50-year upper limit

  • The implications of a malicious artificial superintelligence (ASI) that is many millions of times more capable than current single-purpose AIs are hard to conceptualise (Brundage et al. 2018). Identifying and combating this threat is a new frontier for risk professionals, and this paper argues that the risk assessment portion of ISO 31000, the current globally accepted model for risk management and the model on which most risk management is based, is not fit for this purpose

Introduction

Many experts think that machines many times more intelligent than humans will exist by 2075 and that some form of superintelligent machine will exist within the next 25–50 years (Bostrom 2014; Brundage et al. 2018; Meek et al. 2016). Elon Musk has described this as a threat that far exceeds the risks of climate change, overpopulation, and nuclear war; in his own words, “With artificial intelligence, we are summoning the demon.” The implications of a malicious ASI that is many millions of times more capable than current single-purpose AIs are hard to conceptualise (Brundage et al. 2018). Identifying and combating this threat is a new frontier for risk professionals, and this paper argues that the risk assessment portion of ISO 31000, the current globally accepted model for risk management and the model on which most risk management is based, is not fit for this purpose. Subjects that may be common knowledge to AI developers are discussed in some detail for the benefit of risk professionals, and vice versa.
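The risk assessment step under critique is, in practice, typically implemented as a likelihood–consequence matrix. The following is a minimal illustrative sketch of such a matrix, not the paper's own model; ISO 31000 does not mandate any particular formula, and the category names and thresholds here are assumptions chosen for illustration:

```python
# Illustrative sketch of a likelihood x consequence risk matrix of the
# kind commonly used in ISO 31000-style risk assessments. Scale names
# and rating thresholds are assumptions, not taken from the standard.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_rating(likelihood: str, consequence: str) -> str:
    """Score a risk as likelihood x consequence and map it to a qualitative rating."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# A malicious-ASI scenario strains this model: its likelihood is
# unknowable and its consequence effectively unbounded, so any cell
# chosen in the matrix is essentially arbitrary.
print(risk_rating("rare", "catastrophic"))  # prints "medium"
```

The sketch makes the paper's objection concrete: the model only produces a meaningful rating when both inputs can be estimated, which is precisely what an adaptive superintelligent adversary undermines.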

Current use of risk management in AI development
Superintelligence
Defining the common model for risk management
ISO 31000 process
Risk management has a common architecture
Issues with risk management architecture and superintelligence
Risk identification and anthropomorphic bias
Likelihood and consequence in risk analysis
Common mode failure
Issues with the criminal threat management approach
Treachery threat management model
Artificial intelligence treachery threat model
Discussions and conclusions