Abstract

Astonishing progress is being made in the field of artificial intelligence (AI), particularly in machine learning (ML). Novel deep learning approaches even promise to advance the idea of AI equipped with capabilities of self-improvement. But what are the wider societal implications of this development, and to what extent are classical AI concepts still relevant? This paper discusses these issues, including an overview of basic concepts and notions of AI in relation to big data. Particular focus is placed on the roles, societal consequences and risks of machine and deep learning. The paper argues that the growing relevance of AI in society bears serious risks of deep automation bias, reinforced by insufficient machine learning quality, lacking algorithmic accountability, and mutual risks of misinterpretation that may incrementally aggravate conflicts in decision-making between humans and machines. Reducing these risks and avoiding the emergence of an intelligentia obscura requires overcoming ideological myths of AI and revitalising a culture of responsible, ethical technology development and usage. This includes the need for a broader discussion about the risks of increasing automation and for useful governance approaches that stimulate AI development with respect to individual and societal well-being.

Highlights

  • Creating intelligent machines has always been a vision of mankind

  • How great is this potential in fact, and what are the prospects, limits, societal consequences and risks of deep learning and similar machine learning approaches? This paper critically examines the wider societal implications of artificial intelligence (AI) along these questions and discusses ethical problems related to developments toward learning machines

  • Deep automation bias entails a number of interrelated sub-problems: (1) insufficient quality and performance of machine learning (ML); (2) lacking algorithmic accountability and misinterpretation of AI; and (3) conflicts between human and machine autonomy (a toy illustration follows this list)
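
To make sub-problems (1) and (3) more concrete, here is a minimal Python sketch (a hypothetical toy model, not taken from the paper; the probabilities P_MACHINE and P_HUMAN and the helper decide are illustrative assumptions) of a binary decision task in which a human either defers to a machine suggestion or falls back on their own judgment when the two disagree:

```python
import random

# Hypothetical toy model of automation bias (illustrative only, not from
# the paper): a binary decision task where the machine suggestion is
# correct with probability P_MACHINE and an independent human judgment
# is correct with probability P_HUMAN.
P_MACHINE = 0.90
P_HUMAN = 0.80
TRIALS = 100_000

def decide(defer_rate: float) -> float:
    """Error rate of the joint human-machine decision when, on
    disagreement, the human defers to the machine with probability
    `defer_rate` and otherwise follows their own judgment."""
    errors = 0
    for _ in range(TRIALS):
        truth = random.choice([0, 1])
        machine = truth if random.random() < P_MACHINE else 1 - truth
        human = truth if random.random() < P_HUMAN else 1 - truth
        if machine == human:
            final = machine          # no conflict between the two
        elif random.random() < defer_rate:
            final = machine          # automation bias: defer to the machine
        else:
            final = human            # human autonomy: override the machine
        errors += (final != truth)
    return errors / TRIALS

for rate in (1.0, 0.5, 0.0):
    print(f"defer rate {rate:.1f} -> error rate {decide(rate):.3f}")
```

Under these assumed numbers, blind deference (defer rate 1.0) simply inherits the machine's error rate; rerunning the sketch with a lower P_MACHINE shows how insufficient ML quality, sub-problem (1), propagates unchecked through the deference reflex, which is one way to read deep automation bias.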


Introduction

Creating intelligent machines has always been a vision of mankind. Rapid technological progress in the field of artificial intelligence (AI) over the last few years has made this vision more tangible. At a workshop at Dartmouth College in New Hampshire in 1956, a number of influential computer scientists of that time (e.g., Herbert Simon, Marvin Minsky, John McCarthy and others) thoroughly discussed the options of using computers as a means to explore the mystery of human intelligence and to create intelligent machines. This workshop counts as the starting point of AI as an academic discipline [4,6]. The paper examines AI from two perspectives: the first concerns the close relationship between AI and big data, methodological as well as ideological; the second concerns issues of human–machine interaction related to the Turing test, which implies automated imitation of human behaviour. Both perspectives are considered when discussing societal consequences, ethical problems and conflicts of AI in real-world situations, as recent empirical examples from different domains reveal.

Big Data as Catalyser of AI
Turing’s Imitation Game and Issues of Human–Machine Interaction
Deep Automation Bias—A Wicked Problem?
Insufficient Quality and Performance of ML
Lacking Algorithmic Accountability and Risks of Misinterpretation
Conflicts between Human and Machine Autonomy
Findings
Discussion and Conclusions