Artificial intelligence (AI) has been called the end-to-end, "closing" and most widely discussed technology of the 21st century. Alongside questions about the effectiveness of these technologies and visions of a better life resulting from their implementation, there are pressing questions about how to regulate and manage them with regard to justice and the safety of society, its individual groups and future generations. At the same time, alongside the growth in the number of new AI-based products and their widespread use, issues of reliable AI, security and control of potential risks remain the subject of debate. In this regard, the beginning of the 2020s has been marked by the search for optimal regulatory tools in the field of AI. Academic structures, universities and development institutions, along with governments and the business community in the broad sense, are key platforms for discussing the social implications of new technologies, as well as the tools and mechanisms for their regulation. The materials of such discussions were taken as the basis for this study: two panel discussions (2022 and 2023) held at Tomsk State University as part of the International Congress "Language, Culture and Technological Transits: New Facets of the Human". The study was conducted using the focus group method. The list of participants was formed in a similar way for both panel discussions so as, on the one hand, to preserve the interdisciplinary contour of the discussion, given the nature of AI technologies, and, on the other hand, to allow a comparison of the two discussions. It consisted of representatives of leading companies developing AI; researchers in the fields of law, philosophy, ethics and sociology; and academic structures represented by vice-rectors. The representatives of business and academia included participants in government councils and intergovernmental groups on AI development. Questions for discussion covered the following blocks: the co-production of technology and society in relation to AI and changing concepts of risk and benefit; issues of regulation of digital and AI technologies; ethical dilemmas in the AI era and ways to resolve them; and maintaining a balance of interests and building trust in the field of AI development. The dilemmas of "government versus governance" of AI, as well as options for social and business responses to the use of AI, were also considered. Based on the results of a qualitative analysis of the expert discussions, we conclude that, under the turbulence of AI technology itself, ethics becomes the basis that makes dialogue possible and enables the development of norms and rules that are optimal at this stage for all players involved. Moreover, these norms and rules display variability and flexibility. Ethics, as an instrument of soft law, provides an evidence-based mechanism of self-regulation that builds public confidence and fosters the development of AI technologies. The authors declare no conflicts of interest.