Abstract

Applications of Artificial Intelligence (AI) can be examined from the perspectives of different disciplines and research areas, ranging from computer science and security to engineering, policymaking, and sociology. The technical scholarship of emerging technologies usually precedes the discussion of their societal implications, but scientific development can benefit from social science insight. There is therefore an urgent need for scientists and engineers developing AI algorithms and applications to engage actively with scholars in the social sciences. Without such collaborative engagement, developers may encounter resistance to the approval and adoption of their technological advancements. This paper reviews a dataset, collected by Elsevier from the Scopus database, of papers on AI applications published between 1997 and 2018, and examines how the co-development of the technical and social science communities has grown from AI's earliest to its latest stages of development. Thus far, more AI research combines social science and technical explorations than addresses the social sciences alone, and both categories are dwarfed by purely technical research. Moreover, we identify a relative absence of AI research on the technology's societal implications, such as its governance, ethics, or moral consequences. The future of AI scholarship will benefit from both technical and social science examinations of the discipline's risk assessment, governance, and public engagement needs, to foster advances in AI that are sustainable, risk-informed, and societally beneficial.

Highlights

  • Advances in artificial intelligence (AI) have expanded its adoption in computer security for defense and financial systems, economics, education, and many other fields (Wachter et al., 2017; Winfield et al., 2018; Linkov et al., 2020)

  • We examine Artificial Intelligence (AI) scholarship in the last two decades to determine whether research has been dominated by technical development or is accompanied by a discussion of social implications of the technology

  • The second goal of this paper is to identify important gaps in the interaction between the technical and social science domains in AI research, and to discuss the implications of these gaps


Introduction

Advances in artificial intelligence (AI) have expanded its adoption in computer security for defense and financial systems, economics, education, and many other fields (Wachter et al., 2017; Winfield et al., 2018; Linkov et al., 2020). Social science inquiry into AI technology contributes to examining its potential to behave in harmful ways. Potential threats from AI, intentional or not, arise in computer security applications, such as misclassifying cyber attackers as legitimate users (or vice versa), and in the monitoring and prediction of individuals' activity in social networks for commercial or surveillance purposes (e.g., social credit systems), with possible implications for privacy and civil rights. Understanding these trends in greater detail can help shape the technological growth, science policy, and public discussion surrounding AI. AI developers must innovate while identifying and mitigating real and perceived risks that might otherwise invite premature or prohibitive regulation.
