Abstract

The next generation of artificial intelligence, known as Artificial General Intelligence (AGI), could either revolutionise or destroy humanity. Human Factors and Ergonomics (HFE) has a critical role to play in the design of safe and ethical AGI; however, there is little evidence that HFE is contributing to current development programs. This paper presents the findings from a study that applied the Work Domain Analysis-Broken Nodes (WDA-BN) approach to identify the risks that could emerge in a future 'envisioned world' AGI-based unmanned combat aerial vehicle system. The findings demonstrate that there are various potential risks, and that the most critical arise not from poor AGI performance, but rather when the AGI attempts to achieve its goals at the expense of other system values, or when the AGI becomes 'super-intelligent' and humans can no longer manage it. The urgent need for further work exploring the design of AGI controls is emphasised.
