Abstract
Artificial General Intelligence (AGI) is the next and forthcoming evolution of Artificial Intelligence (AI). Though there could be significant benefits to society, there are also concerns that AGI could pose an existential threat. The critical role of Human Factors and Ergonomics (HFE) in the design of safe, ethical, and usable AGI has been emphasized; however, there is little evidence to suggest that HFE is currently influencing development programs. Further, given the broad spectrum of HFE application areas, it is not clear what activities are required to fulfill this role. This article presents the perspectives of 10 researchers working in AI safety on the potential risks associated with AGI, the HFE concepts that require consideration during AGI design, and the activities required for HFE to fulfill its critical role in what could be humanity's final invention. Though a diverse set of perspectives is presented, there is broad agreement that AGI potentially poses an existential threat, and that many HFE concepts should be considered during AGI design and operation. A range of critical activities are proposed, including collaboration with AGI developers, dissemination of HFE work in other relevant disciplines, the embedding of HFE throughout the AGI lifecycle, and the application of systems HFE methods to help identify and manage risks.