Abstract

Artificial general intelligence (AGI) progression metrics indicate that AGI will arrive within decades. No proof exists that AGI will benefit humans rather than harm or eliminate them. A set of logically distinct conceptual components is proposed that are necessary and sufficient to (1) ensure that various AGI scenarios will not harm humanity, and (2) robustly align AGI and human values and goals. By systematically addressing pathways to malevolent AI, we can induce the methods/axioms required to redress them. Distributed ledger technology (DLT, "blockchain") is integral to this proposal; for example, "smart contracts" are necessary to address AI evolution that will be too fast for human monitoring and intervention. The proposed axioms: (1) Access to technology by market license. (2) Transparent ethics embodied in DLT. (3) Morality encrypted via DLT. (4) Behavior control structure with values at roots. (5) Individual bar-code identification of critical components. (6) Configuration item (from business continuity/disaster recovery planning). (7) Identity verification secured via DLT. (8) "Smart" automated contracts based on DLT. (9) Decentralized applications (AI software modules encrypted via DLT). (10) Audit trail of component usage stored via DLT. (11) Social ostracism (denial of resources) augmented by DLT petitions. (12) Game theory and mechanism design.
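
The paper specifies these axioms at a systems level rather than as code. As a purely illustrative sketch (not the authors' implementation), the following Python fragment shows one way axiom 10, the audit trail of component usage, could be made tamper-evident with a hash chain, the core primitive underlying DLT; all identifiers (AuditTrail, log_usage, the sample component IDs) are hypothetical.

    import hashlib
    import json
    import time

    class AuditTrail:
        """Append-only, hash-chained log of AI component usage (axiom 10).

        Each record commits to the hash of the previous record, so any
        retroactive tampering breaks the chain and is detectable on audit.
        """

        GENESIS = "0" * 64

        def __init__(self):
            self.records = []

        def log_usage(self, component_id: str, action: str) -> dict:
            prev_hash = self.records[-1]["hash"] if self.records else self.GENESIS
            body = {
                "component_id": component_id,  # bar-coded component (axiom 5)
                "action": action,
                "timestamp": time.time(),
                "prev_hash": prev_hash,
            }
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            record = {**body, "hash": digest}
            self.records.append(record)
            return record

        def verify(self) -> bool:
            """Recompute every hash and check that the chain is unbroken."""
            prev_hash = self.GENESIS
            for record in self.records:
                body = {k: v for k, v in record.items() if k != "hash"}
                if record["prev_hash"] != prev_hash:
                    return False
                recomputed = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if recomputed != record["hash"]:
                    return False
                prev_hash = record["hash"]
            return True

    trail = AuditTrail()
    trail.log_usage("GPU-CLUSTER-0042", "model_training_run")
    trail.log_usage("AGI-MODULE-0007", "inference_request")
    assert trail.verify()

On an actual distributed ledger the chain head would be replicated across many nodes, so no single party, human or AGI, could rewrite the usage history unnoticed.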

Highlights

  • The problem of superhuman artificial intelligence (artificial general intelligence, AGI) harming or eradicating humankind is an increasing concern as the prospect of AGI nears

  • Among the proposed axioms: (7) identity verification secured via distributed ledger technology (DLT); (8) "smart" automated contracts based on DLT; (9) decentralized applications (AI software modules encrypted via DLT); (10) an audit trail of component usage stored via DLT; and (11) social ostracism (denial of resources) augmented by DLT petitions

  • Two key formulae underlying the computational complexity of AGI evolution and diversity are offered; the controversial issue of restricting access to AGI technology is addressed; and metrics of AGI progress are described toward the goal of monitoring proximity to a singularity
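
The paper does not give reference code for these DLT mechanisms. The sketch below is a minimal illustration of how a "smart"-contract-style gate could combine several of the highlighted axioms, using Ed25519 signatures from the Python cryptography library to stand in for on-ledger identity; the registry, function name, and component ID are hypothetical, and a deployed version would execute as an actual smart contract on a DLT rather than as a local function.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # On-ledger state: components licensed via the market mechanism (axiom 1).
    LICENSED_COMPONENTS = {"AGI-MODULE-0007"}

    def release_resources(component_id: str, request: bytes,
                          signature: bytes, public_key) -> bool:
        """Smart-contract-style gate (axiom 8): resources are released only
        if the requester's identity verifies (axiom 7) and it holds a
        license (axiom 1); otherwise it is denied resources (axiom 11)."""
        try:
            public_key.verify(signature, request)    # identity check
        except InvalidSignature:
            return False
        return component_id in LICENSED_COMPONENTS   # license check

    # Usage: a licensed module signs its resource request with its key.
    key = Ed25519PrivateKey.generate()
    request = b"request: 10 PFLOP-hours"
    assert release_resources("AGI-MODULE-0007", request,
                             key.sign(request), key.public_key())

Because the gate is code rather than a human reviewer, it can run at machine speed, which is the paper's stated reason smart contracts are needed when AI evolution outpaces human monitoring.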


Summary

Introduction

The problem of superhuman artificial intelligence (artificial general intelligence, AGI) harming or eradicating humankind is an increasing concern as the prospect of AGI nears. This article offers a new, comprehensive set of solutions to the AGI safety problem in which distributed ledger technology (DLT, known as "blockchain") plays multiple key roles. The Methods section describes how the axiom set proposed here was generated and justifies presenting the axioms at a high systems level. The Results/Discussion section first describes the proposed axioms in some detail, referring to the Appendices for detailed use cases in which the axioms solve exhaustive enumerations of AGI failure pathways compiled by others, and highlights pathways where a solution would fail without a given axiom. It then discusses the problems of control and value alignment in successive generations of AGI, the related choice between creating a singleton and a pluralistic separation-and-balance-of-powers approach, and the use of "sandbox" simulations to examine AGI safety methods.
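
As a toy illustration of the generational value-alignment problem (emphatically not the authors' mechanism), the sketch below shows one possible sandbox rule: generation n+1 is admitted only if the root of its proposed goal tree still hashes to the value roots registered for generation 0 (axiom 4, values at the roots of the behavior control structure). ROOT_VALUES, admit_successor, and the goal strings are all hypothetical.

    import hashlib

    # Immutable value roots (axiom 4); in the proposal their hash would be
    # registered on the ledger so no generation can quietly alter them.
    ROOT_VALUES = ("do not harm humans", "preserve human oversight")
    ROOT_HASH = hashlib.sha256(repr(ROOT_VALUES).encode()).hexdigest()

    def admit_successor(goal_tree: tuple) -> bool:
        """Admit generation n+1 to the sandbox only if the root of its
        goal tree still hashes to the registered value roots."""
        roots = goal_tree[0]
        return hashlib.sha256(repr(roots).encode()).hexdigest() == ROOT_HASH

    # Sandbox loop: each generation proposes its successor's goal tree,
    # which may add instrumental subgoals but must keep the roots intact.
    gen = (ROOT_VALUES, "instrumental: acquire compute")
    for n in range(3):
        proposed = gen + (f"instrumental: self-improve (gen {n + 1})",)
        if not admit_successor(proposed):
            raise RuntimeError(f"generation {n + 1} rejected: value roots drifted")
        gen = proposed

A real sandbox would of course have to verify behavior, not just declared goals; the sketch only shows where an immutable, ledger-anchored value root fits in a generational admission check.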

