Abstract
Hopes of biasing the odds toward the development of AGI that is human-friendly depend on finding and employing ethical theories and practices that can be incorporated successfully into the construction, programming and/or developmental growth, education, and mature life world of future AGI. Mainstream ethical theories are ill-adapted to this purpose because of their mono-logical decision procedures, which aim at "Golden rule"-style principles and judgments that are objective in the sense of being universal and absolute. A much more helpful framework for ethics is provided by a dialogical approach employing conflict resolution and negotiation methods, a "Rainbow rule" approach to diversity, and a notion of objectivity as emergent impartiality. This conflict resolution approach will also improve our chances of dealing with two other problems related to the "Friendly AI" problem: the difficulty of programming AI to be not merely smarter but genuinely wiser, and the dilemmas that arise in considering whether AGIs will be Friendly to humans out of mere partisanship or out of a genuine intent to promote the Good. While these issues are challenging, a strategy for pursuing and promoting research on them can be articulated, and basic legislation and corporate policies can be adopted to encourage their development as part of the project of biasing the odds in favor of Friendly and Wise AGI.