Abstract

Artificial intelligence (AI), autonomous systems, and robotics are digital technologies that affect us all today and will have a momentous impact on the development of humanity and the transformation of our society in the very near future. AI is implicated in the fields of computer science, law, philosophy, economics, religion, ethics, health, and more. This paper discusses the emerging field of AI ethics; how the AI industry is viewed by some as using AI ethics as window-dressing, or "ethics-washing"; and how employees have advanced corporate social responsibility and AI ethics as a check on big tech, with governments and public opinion often following with actions to develop responsible AI in the aftermath of employee protests, such as those at Google, Amazon, Microsoft, and Salesforce.

The straightforward definition of ethics put forth by Walz and Firth-Butterfield is the easiest to work with when discussing the ethical application and design of AI: "Ethics is commonly referred to as the study of morality. Morality... is a system of rules and values for guiding human conduct, as well as principles for evaluating those rules. Consequently, ethical behavior does not necessarily mean 'good' behavior. Ethical behavior instead indicates compliance with specific values. Such values can be commonly accepted as being part of human nature (e.g., the protection of human life, freedom, and human dignity) or as a moral expectation characterizing beliefs and convictions of specific groups of people (e.g., religious rules). Moral expectations may also be of individual nature (e.g., an entrepreneur's expectation that employees accept a company's specific code of conduct)." This broad definition is used here because its neutrality enables one to address the issue of ethical diversity from a regulatory and policymaking perspective.

Industry self-governance is unlikely to fully protect the public interest when it comes to powerful general-purpose technologies. It is encouraging to see significant effort from those in government, such as the US Department of Defense and the Joint Artificial Intelligence Center (JAIC), as well as from civil society, to promote responsible and trustworthy AI. U.S. federal government activity addressing AI accelerated during the 115th and 116th Congresses. President Donald Trump issued two executive orders, establishing the American AI Initiative (E.O. 13859) and promoting the use of trustworthy AI in the federal government (E.O. 13960). Federal committees, working groups, and other entities have been formed to coordinate agency activities, help set priorities, and produce national strategic plans and reports, including an updated National AI Research and Development Strategic Plan and a Plan for Federal Engagement in Developing Technical Standards and Related Tools in AI. In Congress, committees held numerous hearings, and Members introduced a wide variety of legislation to address federal AI investments and their coordination; AI-related issues such as algorithmic bias and workforce impacts; and AI technologies such as facial recognition and deepfakes.

Despite the differences between nations' approaches to AI, there are also numerous synergies, and there are many opportunities for governments and organizations to coordinate and collaborate internationally. This is likely to become increasingly important as many of the challenges and opportunities of AI extend well beyond national borders.
AI regulation is difficult for national governments to undertake alone. There are certainly issues of national competitiveness, but failing to partner internationally on AI development will not serve anyone's interests. Intergovernmental initiatives play a valuable role in supporting the development of responsible AI, and the OECD Council Recommendation on Artificial Intelligence is an encouraging example. The OECD Principles on Artificial Intelligence promote AI that is innovative and trustworthy and that respects human rights and democratic values. They were adopted in May 2019, when OECD member countries approved the OECD Council Recommendation on Artificial Intelligence, making them the first such principles signed on to by governments and the first intergovernmental AI standard to date. The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field, and they complement existing OECD standards in areas such as privacy, digital security risk management, and responsible business conduct. To support implementation of the Principles, the OECD launched the AI Policy Observatory in February 2020. The Observatory publishes practical guidance on implementing the AI Principles, maintains a live database of AI policies and initiatives globally, compiles metrics and measurements of global AI development, and uses its convening power to bring together the private sector, governments, academia, and civil society. In June 2019, the G20 adopted human-centered AI Principles that draw from the OECD AI Principles, and over 40 countries, including the U.S. as well as some non-OECD members, have signed on to the OECD AI Principles. International coordination on AI is thus not only critical but possible. AI will impact everyone, so everyone should have a say, and it is valuable and important at these relatively early stages of AI governance to make the effort to hear from all people, including those who struggle to be heard.
