What are the potential negative consequences of unregulated AI development?

Answer from top 10 papers

The dangers of unregulated Artificial Intelligence (AI) development are multifaceted, encompassing risks and threats across many spheres of human activity (Sennott et al., 2019). Unregulated AI can lead to privacy violations, heightened cybersecurity threats, and the misuse of the technology for harmful purposes (Bielova & Byelov, 2023; Shevchenko et al., 2021). For instance, the lack of transparent methods for verifying the conclusions of intelligent information systems can result in the dissemination of unverified information or outright fakes, which could be deployed as a form of information warfare (Shevchenko et al., 2021). Additionally, AI development requires substantial resources; without proper regulation, there is a risk that this will exacerbate inequalities or enable the misuse of AI by malicious actors, such as hackers who could leverage AI to enhance their capabilities (Sultangazina et al., 2021).
While unregulated AI presents significant risks, the technology also offers considerable opportunities for economic growth, social progress, and national security (Shevchenko et al., 2021; Skitsko et al., 2023). However, these benefits can be realized only if the development and implementation of AI are guided by ethical and moral principles, as highlighted, for example, in discussions of using AI in harmony with the Islamic religion (M et al., 2023; Patel, 2023). Moreover, the rapid pace of AI evolution necessitates a proactive approach to risk assessment and the establishment of robust mechanisms for protecting personal data and human rights (Bielova & Byelov, 2023).
In summary, the unregulated development of AI poses serious dangers, including threats to privacy, security, and the integrity of information. To mitigate these risks, it is imperative to develop transparent verification methods, ethical guidelines, and regulatory frameworks that ensure the responsible deployment of AI technologies. Such measures are crucial for harnessing AI's positive potential while safeguarding against its harms (Bielova & Byelov, 2023; Sennott et al., 2019; Shevchenko et al., 2021).

Source Papers

THREATS AND RISKS OF THE USE OF ARTIFICIAL INTELLIGENCE

The article analyzes the advantages of using Artificial Intelligence (AI) in various fields and the risks its use poses to information security and cyber security tasks, which are integral components of national security. The development of AI has become a key priority for many countries, and at the same time questions have arisen about the safety of this technology and the consequences of its use. The expansion of AI into critical infrastructure, the difficulty of verifying the information resources and decisions produced by these systems, and the threat that the results of their operation could endanger people, society, and the state all give rise to risks associated with the use of AI. The lack of transparent methods for checking the conclusions and recommendations of such AI systems is a source of uncertainty about their accuracy and practical value; in effect, these systems can become part of information-warfare campaigns aimed at spreading dubious, unverified information and outright fakes. At the same time, artificial intelligence technology can be used to improve computer security. The paper considers a mechanism for assessing the risks of using AI in various industries and methods for treating those risks, and proposes approaches in which AI systems themselves are used to identify and assess the risks arising from the use of AI systems. Artificial intelligence plays a key role in ensuring national security, and its application across industries improves efficiency; however, there is an urgent need to develop risk assessment mechanisms for the use of artificial intelligence systems.
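The abstract does not spell out the risk-assessment mechanism itself. As a rough, hypothetical illustration of what such a mechanism could look like in practice, the sketch below scores AI-related risks on an assumed likelihood-times-impact scale and sorts them into treatment buckets; the risk names, scales, and thresholds are invented for demonstration and are not taken from the article.

```python
# Illustrative only: a minimal likelihood x impact risk-scoring sketch for AI
# deployments. Risk names, scales, and thresholds are assumptions for
# demonstration, not a mechanism described in the cited article.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (critical)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def triage(risks: list[AIRisk]) -> dict[str, list[AIRisk]]:
    """Sort risks into treatment buckets by score (assumed thresholds)."""
    buckets: dict[str, list[AIRisk]] = {"accept": [], "mitigate": [], "escalate": []}
    for risk in sorted(risks, key=lambda r: r.score, reverse=True):
        if risk.score >= 15:
            buckets["escalate"].append(risk)
        elif risk.score >= 6:
            buckets["mitigate"].append(risk)
        else:
            buckets["accept"].append(risk)
    return buckets

if __name__ == "__main__":
    risks = [
        AIRisk("unverified model outputs used in decisions", likelihood=4, impact=4),
        AIRisk("AI-generated disinformation / fakes", likelihood=3, impact=5),
        AIRisk("AI misuse by attackers", likelihood=2, impact=5),
    ]
    for bucket, items in triage(risks).items():
        print(bucket, [(r.name, r.score) for r in items])
```

In a real deployment, the scales, thresholds, and treatment options would come from the organization's own risk-management policy rather than fixed constants.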

Artificial intelligence and machine learning

Artificial Intelligence (AI) is an area of research driven by innovation and development that culminates in computers and machines with human-like intelligence, characterized by cognitive ability, the ability to learn, adaptability, and decision-making ability. The study found that AI is widely adopted and used in education, especially by educational institutions, in various forms. This article reviews work by scientists from different countries. The paper discusses the prospects for applying artificial intelligence and machine learning technologies in education and in everyday life. The history of the development of artificial intelligence is described, and technologies of machine learning and neural networks are analyzed. An overview of already implemented artificial intelligence projects is given, along with a forecast of what the authors consider the most promising directions for the development of artificial intelligence technologies in the coming period. The article also analyzes how educational research is being transformed into an experimental science: AI is combined with the study of science into new 'digital laboratories', in which ownership of data, as well as power and authority in the production of educational knowledge, are redistributed between research complexes of computers and scientific knowledge.

Challenges and threats of personal data protection in working with artificial intelligence

With the development of artificial intelligence technologies, new opportunities have emerged to use personal data for various purposes, such as machine learning, process automation, and the management of large volumes of information. However, along with these opportunities, questions arise regarding the protection of personal data privacy and the observance of human rights. This article aims to explore the challenges and threats that are becoming relevant in the context of the use of artificial intelligence, from the perspective of legal scholars.
This research paper explores the challenges and threats associated with the protection of personal data when working with artificial intelligence (AI). The growing role of AI in various spheres of life creates the need to exchange and process large amounts of personal data. However, this process raises serious privacy and security issues.
The article analyzes the main challenges, in particular the instability of technological progress, which complicates the development of effective methods of personal data protection. The problem of processing large amounts of data while ensuring their confidentiality and integrity is also investigated. Particular attention is paid to the identification and management of risks related to the protection of personal data in the context of AI.
The article also identifies threats to the security of personal data when working with AI. The authors consider the possibility of unauthorized access to personal data, identity theft, and the use of AI to manipulate data for the purpose of fraud or discrimination. The problems of algorithmic bias and of insufficient anonymization of data are also analyzed. The article concludes with recommendations and strategies for protecting personal data when working with AI: in particular, establishing strict rules and regulations for the processing of personal data, using encryption and anonymization, developing mechanisms for monitoring and verifying compliance with security policies, and educating users and raising their awareness about the protection of personal data.
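The article's recommendations (strict processing rules, encryption, anonymization, compliance controls, user education) are stated at the policy level. As a loose, hypothetical illustration of one of them, the sketch below pseudonymizes a direct identifier with a keyed hash and coarsens a quasi-identifier before a record is handed to an AI pipeline; the field names and the HMAC-SHA-256 scheme are assumptions for demonstration, not a standard prescribed by the article.

```python
# Illustrative only: a minimal sketch of pseudonymizing personal identifiers
# before records enter an AI pipeline. Field names and the keyed-hash scheme
# are assumptions for demonstration.
import hmac
import hashlib

# In practice the key would come from a secrets manager, never from source code.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_record(record: dict) -> dict:
    """Strip or tokenize direct identifiers; keep only fields the model needs."""
    return {
        "user_token": pseudonymize(record["email"]),  # stable join key, no raw email
        "age_band": record["age"] // 10 * 10,         # coarsen instead of exact age
        "features": record["features"],
    }

if __name__ == "__main__":
    raw = {"email": "jane.doe@example.com", "age": 37, "features": [0.2, 0.8]}
    print(prepare_record(raw))
```

Keyed hashing yields a stable pseudonym that can still join records while keeping the raw identifier out of downstream processing; true anonymization would additionally require assessing re-identification risk from the remaining fields.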
