Abstract

Artificial Intelligence (AI) seems to be impacting all industry sectors while becoming a driver of innovation. The diffusion of AI from the civilian to the defense sector, and AI’s dual-use potential, have drawn the attention of security and ethics scholars. With the publication of the European Union’s (EU) ethical guideline Trustworthy AI, normative questions on the application of AI have received further evaluation. In order to draw conclusions on Trustworthy AI as a point of reference for responsible research and development (R&D), we examine the diffusion of AI across the civilian and military spheres in the EU. We capture the extent of technological diffusion by deriving European and German patent citation networks. Both networks indicate a low degree of diffusion of AI between the civilian and defense sectors. A qualitative investigation of the project descriptions of a research institute active in both civilian and military fields shows that military AI applications stress accuracy or robustness, while civilian AI applications reflect a focus on human-centric values. Our work represents a first approach to linking processes of technology diffusion with normative evaluations of R&D.
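
The abstract refers to deriving patent citation networks to gauge the diffusion of AI between the civilian and defense sectors. As a purely illustrative sketch, and not the authors' actual pipeline, the following Python snippet shows how such a directed citation graph could be built with networkx and how the share of cross-sector citations, one conceivable diffusion indicator, could be computed; the patent identifiers, sector labels, and citation links are hypothetical.

```python
# Illustrative sketch (not the authors' pipeline): build a directed patent
# citation network and compute the share of cross-sector citations as a
# simple proxy for civilian <-> defense diffusion. All data are hypothetical.
import networkx as nx

# Hypothetical patent records: patent id -> sector of the applicant
sectors = {
    "EP001": "civilian",
    "EP002": "civilian",
    "EP003": "defense",
    "EP004": "defense",
    "EP005": "civilian",
}

# Hypothetical citations: (citing patent, cited patent)
citations = [
    ("EP002", "EP001"),  # civilian cites civilian
    ("EP003", "EP001"),  # defense cites civilian
    ("EP004", "EP003"),  # defense cites defense
    ("EP005", "EP004"),  # civilian cites defense
]

G = nx.DiGraph()
for pid, sector in sectors.items():
    G.add_node(pid, sector=sector)
G.add_edges_from(citations)

# Count citation edges that cross the civilian/defense boundary.
cross = sum(
    1
    for citing, cited in G.edges()
    if G.nodes[citing]["sector"] != G.nodes[cited]["sector"]
)
diffusion_share = cross / G.number_of_edges()
print(f"Cross-sector citation share: {diffusion_share:.2f}")  # 0.50 for this toy data
```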

Highlights

  • There is general consensus among ethics researchers that, as technologies based on Artificial Intelligence (AI) shape many aspects of our daily lives, technology development should include the assessment of risks and the implementation of safeguarding principles (Floridi et al., 2018; Taebi et al., 2019)

  • We investigate the extent of AI diffusion, which may already imply responsible Research and Development (R&D), the norms that are diffused across civilian and military fields, and the normative patterns of AI R&D that may be indicated by values specific to the field of application

  • AI is seen as a general-purpose technology, and the study of the patterns of diffusion of innovation between civilian and defense applications is relevant both for technology assessment (TA) and for normative concepts that influence the R&D of AI, such as Trustworthy AI

Introduction

There is general consensus among ethics researchers that, as technologies based on Artificial Intelligence (AI) shape many aspects of our daily lives, technology development should include the assessment of risks and the implementation of safeguarding principles (Floridi et al., 2018; Taebi et al., 2019). The prospect of proliferating autonomous weapon systems has led states such as China and the USA to reevaluate their military advantage (Riebe et al., 2020). These innovations are often developed in the private sector, increasingly permeate social spheres, and have a high dual-use potential (Meunier & Bellais, 2019). Regarding AI, civilian actors appear to be more engaged in Research and Development (R&D) for commercial end-use than actors in the defense sector. This suggests that the directions and centralities of technology diffusion may have shifted towards a stronger use of commercial innovation by defense firms (Acosta et al., 2019; Reppy, 2006; Shields, 2018). Approaching the diffusion of AI in European civilian and defense industries and its implications for responsible R&D, we pose the following question: To what extent does AI diffusion occur in the EU, and which patterns does it follow?
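
The introduction mentions the directions and centralities of technology diffusion. The following hypothetical sketch, which is not the study's method and uses toy data, illustrates how directional citation flows between sectors and a simple in-degree centrality could be read off a sector-labelled citation graph with networkx.

```python
# Hypothetical sketch: directional citation flows and node centrality in a
# sector-labelled patent citation graph (toy data, not the study's dataset).
from collections import Counter
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from([
    ("EP001", {"sector": "civilian"}),
    ("EP002", {"sector": "defense"}),
    ("EP003", {"sector": "defense"}),
    ("EP004", {"sector": "civilian"}),
])
G.add_edges_from([("EP002", "EP001"), ("EP003", "EP001"), ("EP004", "EP003")])

# Direction of diffusion: who cites whom, aggregated by sector.
flows = Counter(
    (G.nodes[citing]["sector"], G.nodes[cited]["sector"])
    for citing, cited in G.edges()
)
print(flows)  # Counter({('defense', 'civilian'): 2, ('civilian', 'defense'): 1})

# In-degree centrality as a rough indicator of which patents are most cited.
print(nx.in_degree_centrality(G))
```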
