Artificial intelligence in great power politics - possible consequences for civilian control of the military
Today artificial intelligence permeates every pore of social life, including international relations and security, where it draws the attention of the professional public primarily in the context of armed conflicts and great power rivalry. Scientific research has not sufficiently considered the consequences of its use for civil-military relations, although these are a key institutional component of defence policy and an important aspect of national security. Using qualitative analysis of the available data, this study traces the interaction between the inherent characteristics of artificial intelligence and great power politics and, by testing existing theories, explores why great powers uncritically place artificial intelligence in the service of security and how this will affect civil-military relations. In line with this research question, the paper also analyzes some of the new security dilemmas, such as the military security of a society turned outwards versus its internal values, ideologies and institutions. A brief historical context of the phenomenon and a determination of its definition prove indispensable here as well for the academic completeness of the paper.
- Research Article
16
- 10.58600/eurjther1719
- Jul 22, 2023
- European Journal of Therapeutics
A few weeks ago, we published an editorial discussion on whether artificial intelligence applications should be authors of academic articles [1] . We were delighted to receive more than one interesting reply letter to this editorial in a short time [2, 3] . We hope that opinions on this
- Research Article
16
- 10.1162/daed_e_01897
- May 1, 2022
- Daedalus
This dialogue is from an early scene in the 2014 film Ex Machina, in which Nathan has invited Caleb to determine whether Nathan has succeeded in creating artificial intelligence.1 The achievement of powerful artificial general intelligence has long held a grip on our imagination not only for its exciting as well as worrisome possibilities, but also for its suggestion of a new, uncharted era for humanity. In opening his 2021 BBC Reith Lectures, titled "Living with Artificial Intelligence," Stuart Russell states that "the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history."2

Over the last decade, a rapid succession of impressive results has brought wider public attention to the possibilities of powerful artificial intelligence. In machine vision, researchers demonstrated systems that could recognize objects as well as, if not better than, humans in some situations. Then came the games. Complex games of strategy have long been associated with superior intelligence, and so when AI systems beat the best human players at chess, Atari games, Go, shogi, StarCraft, and Dota, the world took notice. It was not just that AIs beat humans (although that was astounding when it first happened), but the escalating progression of how they did it: initially by learning from expert human play, then from self-play, then by teaching themselves the principles of the games from the ground up, eventually yielding single systems that could learn, play, and win at several structurally different games, hinting at the possibility of generally intelligent systems.3

Speech recognition and natural language processing have also seen rapid and headline-grabbing advances. Most impressive has been the emergence recently of large language models capable of generating human-like outputs. Progress in language is of particular significance given the role language has always played in human notions of intelligence, reasoning, and understanding. While the advances mentioned thus far may seem abstract, those in driverless cars and robots have been more tangible given their embodied and often biomorphic forms. Demonstrations of such embodied systems exhibiting increasingly complex and autonomous behaviors in our physical world have captured public attention.

Also in the headlines have been results in various branches of science in which AI and its related techniques have been used as tools to advance research from materials and environmental sciences to high energy physics and astronomy.4 A few highlights, such as the spectacular results on the fifty-year-old protein-folding problem by AlphaFold, suggest the possibility that AI could soon help tackle science's hardest problems, such as in health and the life sciences.5

While the headlines tend to feature results and demonstrations of a future to come, AI and its associated technologies are already here and pervade our daily lives more than many realize. Examples include recommendation systems, search, language translators - now covering more than one hundred languages - facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research, to name a few. In all these examples and others, AI-related techniques have become components of other software and hardware systems as methods for learning from and incorporating messy real-world inputs into inferences, predictions, and, in some cases, actions.
As director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom noted back in 2006, "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."6

As the scope, use, and usefulness of these systems have grown for individual users, researchers in various fields, companies and other types of organizations, and governments, so too have concerns when the systems have not worked well (such as bias in facial recognition systems), or have been misused (as in deepfakes), or have resulted in harms to some (in predicting crime, for example), or have been associated with accidents (such as fatalities from self-driving cars).7

Dædalus last devoted a volume to the topic of artificial intelligence in 1988, with contributions from several of the founders of the field, among others. Much of that issue was concerned with questions of whether research in AI was making progress, of whether AI was at a turning point, and of its foundations, mathematical, technical, and philosophical - with much disagreement. However, in that volume there was also a recognition, or perhaps a rediscovery, of an alternative path toward AI - the connectionist learning approach and the notion of neural nets - and a burgeoning optimism for this approach's potential. Since the 1960s, the learning approach had been relegated to the fringes in favor of the symbolic formalism for representing the world, our knowledge of it, and how machines can reason about it. Yet no essay captured some of the mood at the time better than Hilary Putnam's "Much Ado About Not Very Much." Putnam questioned the Dædalus issue itself: "Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?" He concluded: […]

This volume of Dædalus is indeed the first since 1988 to be devoted to artificial intelligence. This volume does not rehash the same debates; much else has happened since, mostly as a result of the success of the machine learning approach that was being rediscovered and reimagined, as discussed in the 1988 volume. This issue aims to capture where we are in AI's development and how its growing uses impact society. The themes and concerns herein are colored by my own involvement with AI. Besides the television, films, and books that I grew up with, my interest in AI began in earnest in 1989 when, as an undergraduate at the University of Zimbabwe, I undertook a research project to model and train a neural network.9 I went on to do research on AI and robotics at Oxford. Over the years, I have been involved with researchers in academia and labs developing AI systems, studying AI's impact on the economy, tracking AI's progress, and working with others in business, policy, and labor grappling with its opportunities and challenges for society.10

The authors of the twenty-five essays in this volume range from AI scientists and technologists at the frontier of many of AI's developments to social scientists at the forefront of analyzing AI's impacts on society. The volume is organized into ten sections. Half of the sections are focused on AI's development, the other half on its intersections with various aspects of society. In addition to the diversity in their topics, expertise, and vantage points, the authors bring a range of views on the possibilities, benefits, and concerns for society.
I am grateful to the authors for accepting my invitation to write these essays.

Before proceeding further, it may be useful to say what we mean by artificial intelligence. The headlines and increasing pervasiveness of AI and its associated technologies have led to some conflation and confusion about what exactly counts as AI. This has not been helped by the current trend - among researchers in science and the humanities, startups, established companies, and even governments - to associate anything involving not only machine learning, but data science, algorithms, robots, and automation of all sorts with AI. This could simply reflect the hype now associated with AI, but it could also be an acknowledgment of the success of the current wave of AI and its related techniques and their wide-ranging use and usefulness. I think both are true; but it has not always been like this. In the period now referred to as the AI winter, during which progress in AI did not live up to expectations, there was a reticence to associate most of what we now call AI with AI.

Two types of definitions are typically given for AI. The first are those that suggest that it is the ability to artificially do what intelligent beings, usually human, can do. For example, artificial intelligence is: […] The human abilities invoked in such definitions include visual perception, speech recognition, the capacity to reason, solve problems, discover meaning, generalize, and learn from experience. Definitions of this type are considered by some to be limiting in their human-centricity as to what counts as intelligence and in the benchmarks for success they set for the development of AI (more on this later). The second type of definitions try to be free of human-centricity and define an intelligent agent or system, whatever its origin, makeup, or method, as: […] This type of definition also suggests the pursuit of goals, which could be given to the system, self-generated, or learned.13 That both types of definitions are employed throughout this volume yields insights of its own.

These definitional distinctions notwithstanding, the term AI, much to the chagrin of some in the field, has come to be what cognitive and computer scientist Marvin Minsky called a "suitcase word."14 It is packed variously, depending on who you ask, with approaches for achieving intelligence, including those based on logic, probability, information and control theory, neural networks, and various other learning, inference, and planning methods, as well as their instantiations in software, hardware, and, in the case of embodied intelligence, systems that can perceive, move, and manipulate objects.

Three questions cut through the discussions in this volume: 1) Where are we in AI's development? 2) What opportunities and challenges does AI pose for society? 3) How much about AI is really about us?

Notions of intelligent machines date all the way back to antiquity.15 Philosophers, too, among them Hobbes, Leibniz, and Descartes, have been dreaming about AI for a long time; Daniel Dennett suggests that Descartes may have even anticipated the Turing Test.16 The idea of computation-based machine intelligence traces to Alan Turing's invention of the universal Turing machine in the 1930s, and to the ideas of several of his contemporaries in the mid-twentieth century. But the birth of artificial intelligence as we know it and the use of the term is generally attributed to the now famed Dartmouth summer workshop of 1956.
The workshop was the result of a proposal for a two-month summer project by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon whereby "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."17

In their respective contributions to this volume, "From So Simple a Beginning: Species of Artificial Intelligence" and "If We Succeed," and in different but complementary ways, Nigel Shadbolt and Stuart Russell chart the key ideas and developments in AI, its periods of excitement as well as the aforementioned AI winters. The current AI spring has been underway since the 1990s, with headline-grabbing breakthroughs appearing in rapid succession over the last ten years or so: a period that Jeffrey Dean describes in the title of his essay as a "golden decade," not only for the pace of AI development but also its use in a wide range of sectors of society, as well as areas of scientific research.18 This period is best characterized by the approach to achieve artificial intelligence through learning from experience, and by the success of neural networks, deep learning, and reinforcement learning, together with methods from probability theory, as ways for machines to learn.19

A brief history may be useful here: In the 1950s, there were two dominant visions of how to achieve machine intelligence. One vision was to use computers to create a logic and symbolic representation of the world and our knowledge of it and, from there, create systems that could reason about the world, thus exhibiting intelligence akin to the mind. This vision was most espoused by Allen Newell and Herbert Simon, along with Marvin Minsky and others. Closely associated with it was the "heuristic search" approach that supposed intelligence was essentially a problem of exploring a space of possibilities for answers. The second vision was inspired by the brain, rather than the mind, and sought to achieve intelligence by learning. In what became known as the connectionist approach, units called perceptrons were connected in ways inspired by the connection of neurons in the brain. At the time, this approach was most associated with Frank Rosenblatt. While there was initial excitement about both visions, the first came to dominate, and did so for decades, with some successes, including so-called expert systems. Not only did this approach benefit from championing by its advocates and plentiful funding, it came with the suggested weight of a long intellectual tradition - exemplified by Descartes, Boole, Frege, Russell, and Church, among others - that sought to manipulate symbols and to formalize and axiomatize knowledge and reasoning. It was only in the late 1980s that interest began to grow again in the second vision, largely through the work of David Rumelhart, Geoffrey Hinton, James McClelland, and others.
The history of these two visions and the associated philosophical ideas are discussed in Hubert Dreyfus and Stuart Dreyfus's 1988 Dædalus essay "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint."20 Since then, the approach to intelligence based on learning, the use of statistical methods, back-propagation, and training (supervised and unsupervised) has come to characterize the current dominant approach.

Kevin Scott, in his essay "I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale," reminds us of the work of Ray Solomonoff and others linking information and probability theory with the idea of machines that can not only learn, but compress and potentially generalize what they learn, and the emerging realization of this in the systems now being built and those to come. The success of the machine learning approach has benefited from the boon in the availability of data to train the algorithms thanks to the growth in the use of the Internet and other applications and services. In research, the data explosion has been the result of new scientific instruments and observation platforms and data-generating breakthroughs, for example, in astronomy and in genomics. Equally important has been the co-evolution of the software and hardware used, especially chip architectures better suited to the parallel computations involved in data- and compute-intensive neural networks and other machine learning approaches, as Dean discusses.

Several authors delve into progress in key subfields of AI.21 In their essay, "Searching for Computer Vision North Stars," Fei-Fei Li and Ranjay Krishna chart developments in machine vision and the creation of standard data sets such as ImageNet that could be used for benchmarking performance. In their respective essays "Human Language Understanding & Reasoning" and "The Curious Case of Commonsense Intelligence," Chris Manning and Yejin Choi discuss different eras and ideas in natural language processing, including the recent emergence of large language models comprising hundreds of billions of parameters and that use transformer architectures and self-supervised learning on vast amounts of data.22 The resulting pretrained models are impressive in their capacity to take natural language prompts for which they have not been trained specifically and generate human-like outputs, not only in natural language, but also images, software code, and more, as Mira Murati discusses and illustrates in "Language & Coding Creativity." Some have started to refer to these large language models as foundational models in that once they are trained, they are adaptable to a wide range of tasks and outputs.23 But despite their unexpected performance, these large language models are still early in their development and have many shortcomings and limitations that are highlighted in this volume and elsewhere, including by some of their developers.24

In "The Machines from Our Future," Daniela Rus discusses the progress in robotic systems, including advances in the underlying technologies, as well as in their integrated design that enables them to operate in the physical world. She highlights the limitations in the "industrial" approaches used thus far and suggests new ways of conceptualizing robots that draw on insights from biological systems.
In robotics, as in AI more generally, there has always been a tension as to whether to copy or simply draw inspiration from how humans and other biological organisms achieve intelligent behavior. Elsewhere, AI researcher Demis Hassabis and colleagues have explored how neuroscience and AI learn from and inspire each other, although so far more in one direction than the other.

For all the success of the current approaches to AI, there are still many shortcomings and problems: systems that fail or behave in unintended ways, that reflect or amplify bias, that are misused, or that rely on flawed information about the world, all of which have captured the attention of the wider public as well as of researchers, and have fed a growing emphasis on AI ethics and responsible use. In recent years there has been a proliferation of principles and approaches to responsible AI, as well as initiatives and partnerships on AI that aim to establish best practices. Equally important has been the question of who is involved in researching and developing AI, both because inclusion is important in its own right and because it shapes the resulting AI and its intersections with society.

There are also limitations in what the current AI is capable of. In their Turing Lecture, deep learning pioneers Yoshua Bengio, Yann LeCun, and Geoffrey Hinton took stock of where deep learning stands and highlighted its current limitations. In the case of natural language processing, Manning and Choi discuss the challenges that remain despite the success of large language models, and others have questioned the notion that large language models do anything resembling understanding or learning. Other essays discuss open problems in multi-agent systems, such as how agents reason about one another, as well as the challenges that arise especially when the agents include both humans and machines. More broadly, there is a growing sense among many that we do not yet have adequate ways of understanding and evaluating AI systems, especially as they become more capable and their uses expand.

Although AI and its related techniques are already proving to be powerful tools for research in science, as examples in this volume and recent results elsewhere show, the possibility that more powerful AI could lead to new breakthroughs in science, as well as progress on some of its hardest challenges, has long been a key motivation for many at the frontier of AI research. Progress on more general problems - learning, reasoning, and others - could lead to more capable systems, and an open question is whether the current approaches, characterized by deep learning, large pretrained and foundation models, and reinforcement learning, will be enough, or whether different approaches are needed, such as cognitive and agent-based approaches or ones based on logic and probability theory, to name a few.
Whether and what mix of approaches will prove necessary is debated, but many believe that the current approaches, along with newer models and learning architectures, have further to go. Closely associated with the question about the limits of the current approaches is the question of whether artificial general intelligence can be achieved and, if so, how and when. Artificial general intelligence is usually defined in contrast to what is called narrow AI, that is, AI developed and used for specific tasks and goals. The development of artificial general intelligence, on the other hand, aims for more powerful AI - at least as powerful as humans - that is generally applicable to any problem or domain and, in some conceptions, has the capacity to learn and improve itself as well as to set and pursue goals of its own. Whether and when such intelligence will be achieved is a matter of debate, but there is broad agreement that its achievement would have profound consequences, as has long been imagined in books and films from 2001: A Space Odyssey to Ex Machina. Whether it is near or far, there is growing agreement among many at the frontier of AI research that we should prepare for the possibility of powerful AI - with respect to its compatibility and alignment with humans, its safety and use, and the possibility that such capabilities could be misused - and that we should build these considerations into how we approach its development.

Most of the research, development, and investment in AI is focused on the narrower, achievable forms of AI and their applications - what Nigel Shadbolt characterizes as the many species of AI. This is understandable, given the demand for useful applications and the opportunities for their use in sectors across the economy. However, a few organizations have made the development of artificial general intelligence their explicit goal, and each has demonstrated results of increasing generality, while still remaining a long way from it.

Perhaps the most discussed societal impact of AI and automation is on jobs and the future of work. This is not new. In the 1960s, in the midst of the excitement about AI and automation and concerns about their impact on employment, a national commission concluded that such technologies were important for growth and productivity and that they eliminate jobs, but not work. Most recent assessments of this question, including those I have been involved in, have concluded that over time more jobs are changed than are destroyed or created, and that it is the pace and breadth of that change, and the distribution of its gains and losses, that will matter most. In their essay on automation, AI, and work, Laura Tyson and John Zysman discuss these implications for work and workers; a further essay takes up value creation and the distribution of income and wealth, as well as the opportunities that lie ahead, especially in developing economies. In "The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence," Erik Brynjolfsson discusses how the use of human benchmarks in the development of AI favors AI that substitutes for, rather than complements, human labor. He argues that the path AI's development will take in this regard, and the resulting implications for work, will depend on the incentives facing researchers, companies, and governments. There is also a view that the expectation that more jobs will be changed than destroyed extrapolates too much from the technologies of the past and does not look far enough into the future at what AI will be capable of. The case for why this time could be different rests on the fact that, until now, automation has mostly affected physical and routine tasks, whereas AI will bear on more cognitive and non-routine tasks and, if early examples are any indication, even creative tasks are not out of its reach. In other words, we are now introducing into the world machines that learn, and their ability to do so means that the range of problems they can take on will eventually be commensurate with the range to which human intelligence has been applied - a possibility that Herbert Simon and Allen Newell anticipated. Two responses are usually offered: one is that new forms of labor will emerge in which humans will be valued by other humans for their own contributions, even when machines may be capable of these tasks as well as, or even better than, people; the other is that AI will create so much wealth and abundance, all without the need for human labor, that the problem will be how to distribute it - recalling Keynes's prediction that, for the first time since his creation, man would be faced with his real, his permanent problem: how to use his freedom from pressing economic cares and the leisure which science and compound interest will have won for him, to live wisely and agreeably and well. However, most researchers agree that we are not close to a future in which the need for human work disappears, and that until then there are other issues that must be grappled with in the labor markets of the present and near term, such as wages, skills, and how humans will work with increasingly capable machines - issues that Tyson and Zysman and others discuss in this volume. Jobs are not the only societal concern raised by AI.

Russell offers a sense of the enormous economic value potentially flowing from artificial general intelligence. But even well before we get to general-purpose AI, the opportunities for companies and countries - for productivity and growth, as well as competitive advantage - from AI and its related technologies are more than enough to fuel intense pursuit of, and competition in, the development and use of AI. It is generally acknowledged that China has become a leading power in AI, as evidenced by the growth of its AI research and, as highlighted in several essays, this rivalry will have implications for companies and countries given the strategic character of such technologies; differences in national approaches to AI and its governance (such as whether they are led by companies or states) may also stand in the way of common approaches. The role of AI in intelligence, weapons systems, autonomous weapons, and other aspects of national security is of increasing concern. In &
- Research Article
9
- 10.1093/cjip/poq004
- Apr 28, 2010
- The Chinese Journal of International Politics
*Corresponding author. Email: twukong@yahoo.com Tang Shiping is Professor at the School of International Relations and Public Affairs (SIRPA), Fudan University, Shanghai, China. Prior to his current appointment, Shiping was Senior Fellow at the S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore, where this article was finished. He also thanks Taylor Fravel, Evan Montgomery, and Jack Snyder for helpful comments on an earlier draft. Beatrice Bieger provided outstanding research assistance. The usual disclaimer applies. 1 George H. Quester, Offense and Defense in the International System (New York, N.Y.: John Wiley and Sons, 1977); Robert Jervis, "Cooperation under the Security Dilemma," World Politics, Vol. 30, No. 2 (1978), pp. 167–214; Jack Snyder, "Civil-Military Relations and the Cult of the Offensive, 1914 and 1984," International Security, Vol. 9, No. 1 (1984), pp. 108–146; Stephen Van Evera, "The Cult of the Offensive and the Origins of the First World War," International Security, Vol. 9, No. 1 (1984), pp. 58–107; Stephen Van Evera, Causes of War: Power and the Roots of Conflict (Ithaca, N.Y.: Cornell University Press, 1999); and Charles L. Glaser and Chaim Kaufmann, "What Is the Offense-Defense Balance and How Can We Measure It?" International Security, Vol. 22, No. 4 (1998), pp. 44–82. In this article, ODT means orthodox or standard ODT (defined in section 1 below). In the literature, the works of Jervis, Quester, and Van Evera are usually accepted as the foundational works of orthodox ODT. 2 See, for example, Thomas J. Christensen and Jack Snyder, "Chain Gangs and Passed Bucks: Predicting Alliance Patterns in Multipolarity," International Organization, Vol. 44, No. 2 (1990), pp. 137–68; James D. Fearon, "Rationalist Explanations for War," International Organization, Vol. 49, No. 3 (1995), pp. 401–404; Robert Gilpin, War and Change in World Politics (Cambridge: Cambridge University Press, 1981), pp. 59–63; Charles L. Glaser, "Realists as Optimists: Cooperation as Self-help," International Security, Vol. 19, No. 3 (1994), pp. 50–90; Charles L. Glaser, "When Are Arms Races Dangerous?" International Security, Vol. 28, No. 4 (2004), pp. 44–84; Ted Hopf, "Polarity, the Offense-Defense Balance, and War," American Political Science Review, Vol. 85, No. 2 (1991), pp. 475–493; Andrew Kydd, Trust and Mistrust in International Relations (Princeton: Princeton University Press, 2005), pp. 31–33; Peter Liberman, "The Offense-Defense Balance, Interdependence, and War," Security Studies, Vol. 9, No. 1 & 2, 1999–2000, pp. 59–91; Evan Braden Montgomery, "Breaking out of the Security Dilemma: Realism, Reassurance, and the Problem of Uncertainty," International Security, Vol. 31, No. 2 (2006), pp. 151–185; Barry Posen, The Sources of Military Doctrine: France, Britain, and Germany between the World Wars (Ithaca, N.Y.: Cornell University Press, 1984)…
- Research Article
2
- 10.4467/29567610pib.24.002.19838
- Jun 10, 2024
- Prawo i Bezpieczeństwo
Technologically advanced artificial intelligence (AI) is making a significant contribution to strengthening national security. AI algorithms facilitate the processing of vast amounts of information, increasing the speed and accuracy of decision-making. Artificial intelligence and machine learning (AI/ML) are crucial for countering state and integrated hybrid attacks and for protecting against new threats in cyberspace. Existing AI capabilities have significant potential to impact national security by leveraging existing machine learning technology for automation in labor-intensive activities such as satellite imagery analysis and defense against cyber attacks. This article examines selected aspects of the impact of artificial intelligence on enhancing a state's ability to protect its interests and its citizens. Artificial intelligence, through the use of neural networks, predictive analytics and machine learning algorithms, enables security agencies to analyse vast amounts of data and identify patterns indicative of potential threats. Integrating artificial intelligence into surveillance, border control and threat assessment systems enhances the ability to respond preemptively to security challenges. In addition, artificial intelligence algorithms facilitate the processing of vast amounts of information, increasing the speed and accuracy of decision-making by police authorities. The rapid development of AI raises a number of questions about its use in securing not only national security but the protection of all citizens. In particular, it is worth answering the question of how artificial intelligence affects national security, and clarifying how law enforcement agencies can use artificial intelligence to maximise the benefits of the new technology in terms of security and protecting communities from rising crime. The analysis is based on a descriptive method, explaining the concepts and applications of artificial intelligence in order to determine its role in the national security sphere. An analysis of the usefulness of artificial intelligence, in particular in police operations, is undertaken with the aim of defending the thesis that, despite some threats it poses to the protection of human rights, AI is becoming the best tool in the fight against all types of crime in the country. Technological advances in AI can also have many positive effects for law enforcement agencies, for example in facilitating the identification of persons or vehicles, predicting trends in criminal activity, tracking illegal criminal activities or illicit money flows, and flagging and responding to fake news. Artificial intelligence has emerged as one of the biggest threats to information security, but efforts are being made both to mitigate this new threat and to find solutions through which AI can become an ally in the fight against cyber, criminal and terrorist threats. Artificial intelligence algorithms search huge datasets of communication traffic, satellite images and social media posts to identify potential cyber security threats, terrorist activities and organized crime. When analyzing the opportunities and threats that AI poses to national and public security, it is advisable both to seek a strategic advantage in the context of rapid technological change and to manage the many risks associated with AI.
The conclusion highlights the impact of AI on national security, creating a range of new opportunities coupled with challenges that government agencies should be prepared for in addressing ethical and security dilemmas. Furthermore, AI improves predictive analytics, thereby enabling security agencies to more accurately anticipate potential threats and enhance their preparedness by identifying vulnerabilities in the national security infrastructure.
- Research Article
2
- 10.1177/002070209905400109
- Mar 1, 1999
- International Journal: Canada's Journal of Global Policy Analysis
Helping to contain and to reverse the process of 'horizontal' proliferation of weapons of mass destruction (WMD) is the most important security challenge facing Canada and the world community. Twenty-first century international security relations should comprise four elements: international security community-building through redistributive assistance; the widespread inhibition and elimination of military capabilities for aggressive warfare under an ever-widening regime of verified self-restraint; ever more reliable arms control and disarmament verification that leads quickly towards disassembled 'virtual' nuclear arsenals; and, finally, broadly collaborative, collective security enforcement of the world's emerging anti-WMD norms. The threat of WMD can be tamed only through co-operative international measures. A closer, more intimate security relationship with the United States is inescapable - but that alone may deter responsible, outward-looking military reform in Ottawa. A Canadian retreat into hemispheric isolation militarily would only support those conservative forces in the United States who argue for American strategic disengagement from the world's troubles. Deciding to try to do something useful and responsible about proliferation and the rising risk of the use of nuclear, chemical, and biological weapons would be a large and innovative step for any Canadian government. It would require farsighted political leadership able to develop a national consensus on political and strategic objectives. Clear goals for stability enhancement would have to be related to plausible and collectively affordable foreign aid and military capability. The temptation to yield to domestic political inertia and short-term economic self-interest is powerful. Without skilled, strategically sensitive leadership to explain how an active Canadian role might help to achieve radically improved international security, public support for military spending will remain low.(f.1) Canada's population is half that of Britain, France, or Italy, and more than one and one-half times that of Australia. It is almost three times that of Belgium, and more than 20 per cent larger than that of the four Scandinavian countries combined. In 1996 Canada's gross domestic product (GDP) was still larger than China's and India's and over half the size of faltering Russia's. Per-capita income is among the highest in the world. Thus, an easily accessible tax base has long been available for spending much more on international security than recent governments have been willing to contemplate. Negotiating the landmines ban, discouraging trade in small arms, promoting the United Nations arms register are all worthwhile, popular activities that polish the national self-image. But they should all be supplements to, not substitutes for, a proportionately equitable commitment of resources to the management and prevention of international conflict - and thus the containment of the WMD threat. Future American governments will not 'police the world' alone. For almost fifty years the Soviet threat compelled disproportionate military expenditures and sacrifice by the United States. That world is gone.
Only by enmeshing the capabilities of the United States and other leading powers in a co-operative security management regime where the burdens are widely shared does the world community have any plausible hope of avoiding warfare involving nuclear or other WMD.Canadian international security policy does not require much innovation to justify force expansion and improvement. Over the past decade Ottawa helped pioneer the notion of co-operative security that involves substituting multilateral security dialogue, confidence-building measures, regional security co-operation, defence policy transparency, and so forth for traditional approaches based on narrow conceptions of national self-interest. By promoting alternatives to security through unilateral military build-up, co-operative security offered the hope of ameliorating or even ending the classic 'security dilemma' (wherein the efforts of individual states to improve their military security threatened neighbouring states, thereby triggering arms races, mutual suspicion, and often conflict). …
- Research Article
- 10.21681/2311-3456-2025-6-158-165
- Jan 1, 2025
- Voprosy kiberbezopasnosti
Purpose: to identify the current opportunities, threats and prospects for the application of artificial intelligence in military affairs to develop proposals for expanding the potential for its use, ensuring the economic, scientific and technological development and security of Russia. Research method: analysis of data on the use of artificial intelligence in military affairs, synthesis and scientific forecasting, expert assessment, factual analysis within the framework of a systems approach, interdisciplinary approach. Result: this article analyzes the concept of «artificial intelligence in military affairs», its current indicators, and characteristics against the backdrop of the accelerated development of artificial intelligence in general. It presents key factors determining the feasibility of developing and implementing artificial intelligence systems in the military sphere, as well as the main areas of their use and their role in international politics and global security. The risks and threats of their application are identified. An analysis of the capabilities of various countries in using artificial intelligence technologies at strategic, operational, and tactical levels, the corresponding threats in armed conflicts and wars, and a forecast for the development of promising technologies is provided. The impact of artificial intelligence technologies in military affairs on strategic stability, national, and international security is discussed. It is demonstrated that the characteristics of artificial intelligence technologies in military affairs are currently one of the most important indicators of a state's influence and potential in the world but require the development of trust-building measures and the creation of an international control regime. Practical value: proposals for expanding the potential for using artificial intelligence in military affairs to ensure economic, scientific and technological development and security of Russia.
- Research Article
- 10.58583/em.4.1.5
- Jun 1, 2025
- Education Mind
This study examines the potential integration of artificial intelligence (AI) tools, particularly ChatGPT, into qualitative data analysis in educational research. The research seeks to clarify AI-supported data analysis processes using qualitative data from teacher interviews that examine deficiencies in a "Fundamentals of Programming" course in vocational high schools. The methodology employs a hybrid (both heuristic and literature-based) prompt engineering strategy, utilizing open, axial, and selective coding, to ensure a comprehensive analysis. The aim was to compare human and AI-supported analysis to evaluate the depth, efficiency, and reliability of AI tools in qualitative research. This study chose a comparative case study design because examining programming instruction with two different data analysis methods (human researcher and AI-supported) requires a comparative perspective. The findings indicate that hybrid prompt design for AI can significantly enhance the efficiency and accuracy of qualitative data analysis, providing deeper insights and more structured outputs. However, the study also highlights the importance of carefully designed prompts and human oversight to mitigate potential biases and errors inherent in AI-supported analysis. This research contributes to the growing field of AI in data analysis, offering a framework for future studies to leverage AI technologies for qualitative data analysis, thereby enhancing research quality and productivity.
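To make the staged coding strategy concrete, here is a minimal sketch of how open, axial, and selective coding prompts might be chained through a chat-style LLM API. The prompt wording, model name, interview excerpt, and helper function are illustrative assumptions, not the authors' actual protocol.

```python
# Minimal sketch of a staged prompt-engineering pipeline for qualitative coding.
# Prompts, model name, and the sample excerpt are illustrative, not the study's own.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STAGES = {
    "open": "Read the interview excerpt and list initial open codes as short labels.",
    "axial": "Group the open codes below into related categories and name each category.",
    "selective": "From the categories below, identify the core category and explain how the others relate to it.",
}

def code_stage(stage: str, material: str) -> str:
    """Run one coding stage (open, axial, or selective) over the given material."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You assist with qualitative coding of teacher interviews."},
            {"role": "user", "content": f"{STAGES[stage]}\n\n{material}"},
        ],
    )
    return response.choices[0].message.content

excerpt = "Students struggle with loops because the course rushes past basic syntax."  # hypothetical excerpt
open_codes = code_stage("open", excerpt)
categories = code_stage("axial", open_codes)
core_story = code_stage("selective", categories)
print(core_story)
```

Each stage's output feeds the next, mirroring the open-to-axial-to-selective progression, while a human researcher would review every intermediate result before moving on, as the study's emphasis on oversight suggests.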
- Research Article
10
- 10.1080/08956308.2024.2324407
- Apr 29, 2024
- Research-Technology Management
Overview: Artificial intelligence (AI) is rapidly transforming the business landscape, but limited research is available regarding how companies are responding and adapting to AI. Using dynamic capability theory, we hypothesized that companies would develop new dynamic capabilities to seize AI opportunities. Our analysis of qualitative data from six major Chinese construction companies revealed that these companies’ existing dynamic capabilities are key to adopting and adapting AI technology for business model innovation. We developed a dynamic capability model that describes how companies use their existing dynamic capabilities for sensing AI, seizing AI, and transforming to leverage AI for business model innovation. Our research bridges the gap in understanding the relationship between AI adoption, existing dynamic capabilities, and AI adaptation. Integrating AI is not as daunting as it may seem. Companies, practitioners, and other stakeholders can transition into AI, starting with their existing capabilities.
- Research Article
- 10.63665/ijicsitr.v1i01.01
- Jan 1, 2025
- International Journal of Innovative Computer Science and IT Research
Artificial Intelligence (AI) has become a transformative tool in scientific research, reshaping traditional methodologies by enabling advanced data analysis, hypothesis testing, and predictive modeling. The integration of machine learning (ML), deep learning (DL), and natural language processing (NLP) has significantly accelerated discoveries in medicine, physics, chemistry, environmental science, and other disciplines. AI-driven technologies allow researchers to process large datasets, identify complex patterns, and generate predictive insights with unprecedented accuracy and speed. These innovations have led to breakthroughs in drug discovery, climate modeling, quantum physics simulations, and genetic research, demonstrating AI’s potential to enhance efficiency, automation, and precision in scientific investigations. Despite its numerous advantages, AI-driven research presents challenges, including ethical concerns, algorithmic bias, data security risks, and high computational demands. The reliance on large datasets and complex AI models raises concerns about data privacy, model transparency, and fairness in scientific conclusions. Additionally, AI systems require high-performance computing resources, making accessibility and affordability key concerns for many research institutions. Addressing these challenges through robust regulatory frameworks, ethical AI development, and improved AI model interpretability is crucial for ensuring responsible AI-driven scientific exploration. This study explores AI’s impact on scientific research, analyzing its applications, benefits, and challenges. The findings are supported by statistical data and two tables, illustrating AI’s adoption trends, efficiency improvements, and transformative role in modern research. Future advancements, such as AI-augmented automation, AI-driven robotics, and interdisciplinary AI applications, will further revolutionize scientific inquiry, making AI an indispensable tool for data-driven discovery and innovation.
- Research Article
- 10.7358/ecps-2024-030-luon
- Jan 15, 2025
- Journal of Educational, Cultural and Psychological Studies (ECPS Journal)
Artificial intelligence to enhance qualitative research: methodological reflections on a pilot study. Abstract: Qualitative analysis is essential in research across diverse fields, offering in-depth insights that often cannot be captured through quantitative methods. However, managing large volumes of qualitative data presents challenges, including its labour-intensive nature and the potential for interpretive biases. In this study, we introduce and demonstrate, step by step, a methodology that integrates artificial intelligence (AI) in the analysis of qualitative data, with a focus on textual responses extracted from survey questions. Specifically, our approach employs AI techniques, utilizing Word2Vec for word embedding extraction and K-Means clustering to streamline the analysis of qualitative textual data, while ultimately integrating the researcher's interpretation of the identified clusters to improve the relevance of the analysis. Moreover, the present article discusses the relevance and significance of this approach as well as its ethical and methodological challenges by means of an empirical illustration taken from a study on teachers' sensemaking regarding a range of different educational activities.
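As a rough illustration of the pipeline described above, the sketch below embeds short open-ended responses with Word2Vec, averages word vectors into response vectors, and clusters them with K-Means. The toy responses, vector size, and cluster count are assumptions for illustration, not the study's data or settings.

```python
# Minimal sketch of a Word2Vec + K-Means pipeline for clustering short survey responses,
# using gensim and scikit-learn. All data and hyperparameters here are toy assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

responses = [
    "group work helped students stay engaged",
    "lectures felt too long and abstract",
    "hands-on activities made concepts concrete",
    "assessment criteria were unclear to students",
]

# Tokenise the open-ended responses (a real study would add cleaning and stop-word removal).
tokenised = [r.lower().split() for r in responses]

# Learn word embeddings from the corpus itself (very small vectors, for illustration only).
w2v = Word2Vec(sentences=tokenised, vector_size=50, window=3, min_count=1, seed=1)

# Represent each response as the mean of its word vectors.
doc_vectors = np.array([np.mean([w2v.wv[t] for t in tokens], axis=0) for tokens in tokenised])

# Cluster the responses; the researcher then reads and interprets each cluster.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=1).fit(doc_vectors)
for label, text in zip(kmeans.labels_, responses):
    print(label, text)
```

In the spirit of the article, the clusters are only a starting point: the researcher still reads each cluster and assigns it an interpretive label.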
- Research Article
1
- 10.59287/ijanser.454
- Apr 17, 2023
- International Journal of Advanced Natural Sciences and Engineering Researches
Artificial General Intelligence (AGI) is a rapidly developing field in the domain of artificial intelligence. AGI systems aim to replicate human-like intelligence and adaptability by possessing the capacity to perform a variety of intellectual tasks that are commonly associated with human beings. As opposed to narrow or weak AI systems, which are designed to perform specific tasks or solve particular problems, AGI seeks to generate machines that can reason, learn, and solve problems with the same level of competence and flexibility as humans. The multimodal nature of data makes it possible to obtain high-quality solutions to problems of analyzing corrupted or visually attacked images, provided that additional, nonvisual information is available. Additionally, the trend in artificial intelligence towards models with billions of parameters is due to the growth of data modality, leading to significantly higher complexity of models. The paper discusses the field of Artificial General Intelligence (AGI) and its potential to replicate human-like intelligence and adaptability. AGI systems aim to perform a variety of intellectual tasks that are commonly associated with human beings. Overall, the paper provides insights into the current state and future prospects of AGI research, highlighting both the potential and challenges of this rapidly developing field.
- Research Article
3
- 10.1097/acm.0000000000006134
- Jun 25, 2025
- Academic medicine : journal of the Association of American Medical Colleges
How can artificial intelligence (AI) be used to support qualitative data analysis (QDA)? To address this question, the authors conducted 3 scholarly activities. First, they used a large language model, ChatGPT-4, to analyze 3 existing narrative datasets (February 2024). ChatGPT generated accurate brief summaries; for all other attempted tasks, the initial prompt failed to produce desired results. After iterative prompt engineering, some tasks (e.g., keyword counting, summarization) were successful, whereas others (e.g., thematic analysis, keyword highlighting, word tree diagram, cross-theme insights) never generated satisfactory results. Second, the authors conducted a brief scoping review of AI-supported QDA (through May 2024). They identified 130 articles (104 original research, 26 nonresearch), of which 64 were published in 2023 or 2024. Seventy studies inductively analyzed data for themes, 39 used keyword detection, 30 applied a coding rubric, 28 used sentiment analysis, and 13 applied discourse analysis. Seventy-five used unsupervised learning (e.g., transformers, other neural networks). Third, building on these experiences and drawing from additional literature, the authors examined the potential capabilities, shortcomings, dangers, and ethical repercussions of AI-supported QDA. They note that AI has been used for QDA for more than 25 years. AI-supported QDA approaches include inductive and deductive coding, thematic analysis, computational grounded theory, discourse analysis, analysis of large datasets, preanalysis transcription and translation, and offering suggestions for study planning and interpretation. Concerns include the imperative of a "human in the loop" for data collection and analysis, the need for researchers to understand the technology, the risk of unsophisticated analyses, inevitable influences on workforce, and apprehensions regarding data privacy and security. Reflexivity should embrace both strengths and weaknesses of AI-supported QDA. The authors conclude that AI has a long history of supporting QDA through widely varied methods. Evolving technologies make AI-supported QDA more accessible and introduce both promises and pitfalls.
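Because tasks such as keyword counting only became reliable after iterative prompting, one practical "human in the loop" safeguard is to cross-check any LLM-reported counts against a deterministic count. The snippet below is a minimal sketch of such a check; the transcript excerpt, keyword list, and reported counts are hypothetical, not data from the study.

```python
# Minimal sketch: verify LLM-reported keyword counts against an exact, deterministic count.
# The transcript excerpt and keywords are hypothetical examples, not the study's data.
import re
from collections import Counter

transcript = """
The residents said feedback was rare. Feedback, when given, felt rushed.
Mentorship mattered more than formal feedback sessions.
"""

keywords = ["feedback", "mentorship", "burnout"]

# Exact counts: lowercase the text and match whole word tokens only.
tokens = re.findall(r"[a-z']+", transcript.lower())
exact_counts = Counter(tokens)

# Counts reported by the model (e.g., pasted from a ChatGPT response) for comparison.
llm_reported = {"feedback": 2, "mentorship": 1, "burnout": 0}

for word in keywords:
    exact = exact_counts[word]
    reported = llm_reported.get(word, 0)
    status = "OK" if exact == reported else "MISMATCH"
    print(f"{word}: exact={exact}, llm={reported} -> {status}")
```

A mismatch (as the hypothetical "feedback" count above would produce) is exactly the kind of error the authors' iterative prompting and human review are meant to catch.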
- Research Article
9
- 10.1111/ajo.13661
- Apr 1, 2023
- Australian and New Zealand Journal of Obstetrics and Gynaecology
Artificial intelligence (AI) is the simulation of human intelligence in machines that are programmed to think and learn like humans. AI has the potential to revolutionise the way that healthcare professionals diagnose, treat, and manage conditions affecting the female reproductive system. Machine learning (ML) is a subset of AI which deals with the development of algorithms and statistical models that enable computers to learn from and make predictions or decisions without being explicitly programmed to do so. Deep learning (DL) is a subfield of ML that utilises neural networks with multiple layers, known as deep neural networks (DNNs), to learn from data. DNNs are inspired by the structure and function of the human brain and are capable of automatically learning high-level features from raw data, such as images, audio and text. DL has been very successful in various applications such as image and speech recognition, natural language processing and computer vision. ML algorithms can be divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning algorithms are trained on a labelled dataset, where the desired output (label) is already known. Unsupervised learning algorithms are trained on an unlabelled dataset and are used to discover patterns or relationships in the data. Reinforcement learning algorithms are trained using a trial-and-error approach, where the agent receives a reward or penalty for its actions. The goal of reinforcement learning is to learn a policy that maximises the expected reward over time. AI and ML are increasingly being applied in the field of obstetrics and gynaecology, with the potential to improve diagnostic accuracy, patient outcomes, and efficiency of care. AI has been applied to the field of medicine for several decades. One of the earliest examples of AI in medicine was the development of MYCIN in the 1970s, a computer program that could diagnose bacterial infections and recommend appropriate antibiotic treatments. MYCIN was developed by a team at Stanford University led by Edward Shortliffe, and its success demonstrated the potential of AI in medical decision making. In the 1980s, AI-based expert systems such as DXplain, developed at Massachusetts General Hospital, were used to assist in the diagnosis of diseases. These early AI systems were based on rule-based systems and were limited in their capabilities. One of the earliest examples of AI was the development of computer-aided diagnostic systems for ultrasound images in the 1970s and 1980s. These systems were designed to assist radiologists in identifying fetal anomalies and other conditions. In recent years, there has been a renewed interest in the use of AI in obstetrics and gynaecology, driven by advances in ML and the availability of large amounts of data. One of the primary areas in which AI and ML are being used in obstetrics and gynaecology is in the analysis of imaging data, such as ultrasound and magnetic resonance imaging. AI algorithms can be trained to automatically identify and classify different structures in the images, such as the placenta or fetal organs, with high accuracy. Another area of focus is the use of AI to predict preterm birth. Researchers have used ML algorithms to analyse data from electronic health records and identify patterns that are associated with preterm birth. 
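The three categories of ML described above can be illustrated with a small, self-contained sketch: a supervised classifier trained on labelled points, an unsupervised clustering of the same points, and a toy epsilon-greedy bandit that learns from rewards. The data and reward probabilities are invented for illustration and are unrelated to any clinical use.

```python
# Minimal sketch of the three learning paradigms: supervised, unsupervised, and
# reinforcement learning (a toy bandit). All data here are invented toy values.
import random
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: labels are known, and the model learns a mapping from inputs to labels.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print("supervised prediction for 2.5:", clf.predict([[2.5]])[0])

# Unsupervised learning: no labels; the algorithm discovers structure (here, two clusters).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("unsupervised cluster labels:", clusters)

# Reinforcement learning (simplified two-armed bandit): the agent tries actions, receives
# rewards, and updates value estimates to maximise expected reward over time.
random.seed(0)
true_reward_prob = [0.2, 0.8]          # hidden reward probability of each action
value_estimate = [0.0, 0.0]
counts = [0, 0]
for step in range(500):
    if random.random() < 0.1:          # explore with probability epsilon
        action = random.randrange(2)
    else:                              # otherwise exploit the best current estimate
        action = int(value_estimate[1] > value_estimate[0])
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]
print("learned action values:", [round(v, 2) for v in value_estimate])
```

Most of the imaging and outcome-prediction applications discussed in this review correspond to the supervised setting, where labelled images or recorded outcomes supply the training signal.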
By analysing large datasets of patient information and outcomes, AI algorithms can identify patterns and risk factors that may not be apparent to human analysts. This can help to improve the prediction of obstetric outcomes and guide clinical decision making. In recent years, AI has also been applied in obstetrics and gynaecology for real-time monitoring of high-risk pregnancies and identifying fetal distress. These systems use ML algorithms to analyse data from fetal heart rate monitors and identify patterns that are associated with fetal distress. AI and ML are also being used to develop new tools for the management of gynaecological conditions, such as endometriosis and fibroids. These tools can be used to predict the progression of the disease and guide treatment decisions. One example of the use of AI in benign gynaecology is the development of computer-aided diagnostic systems for endometriosis. These systems use ML algorithms to analyse images of the pelvic region and identify the presence of endometrial tissue, which can be a sign of endometriosis. Another area where AI and ML are being applied is in the management of fibroids. ML algorithms are being used to analyse imaging data and predict the growth and behaviour of fibroids, which can aid in the development of personalised treatment plans. In the field of oncology, AI is being used to improve the accuracy and speed of cancer diagnosis. AI algorithms can analyse images of tissue samples to identify the presence of cancer cells and predict the likelihood of a positive outcome following treatment. AI algorithms can be trained to analyse images from pelvic scans and identify signs of ovarian cancer with high accuracy. In addition to these specific applications, AI and ML are also being used to improve the efficiency and organisation of care in obstetrics and gynaecology. For example, by analysing large amounts of clinical data, AI algorithms can be used to identify patients at high risk of complications, prioritise them for care and ensure that they receive the appropriate level of care in a timely manner. AI and ML have the potential to revolutionise the field of fertility and in vitro fertilisation (IVF). By using data from large patient populations, AI and ML algorithms can help identify patterns and predict outcomes that would be difficult for human experts to discern. This can lead to improvements in diagnosis, treatment planning, and overall success rates for patients undergoing IVF. One area where AI and ML are being applied is in the selection of embryos for transfer during IVF. By analysing images of embryos, AI and ML algorithms can predict which embryos are most likely to result in a successful pregnancy. Another area where AI and ML have shown potential is in the optimisation of culture conditions for embryos. This has the potential to improve the survival and development of embryos, leading to higher pregnancy rates. AI and ML are also being used to improve the timing of embryo transfer during IVF. By analysing data from patient medical histories, AI and ML algorithms can predict the optimal time for transfer to increase the chances of successful pregnancies. In addition to these applications, AI and ML are being used in other areas of fertility and IVF to improve patient outcomes. For example, AI and ML are being used to predict the likelihood of ovarian reserve, predict ovulation timing, and improve the efficiency and cost-effectiveness of fertility clinics. 
AI and ML are rapidly evolving fields that have the potential to revolutionise the field of surgery. These technologies can be used to assist surgeons in a variety of ways, from pre-operative planning to real-time guidance during procedures. One of the key areas where AI and ML are being applied in surgery is in image analysis. For example, algorithms can be used to automatically segment and identify structures in medical images, such as tumours or blood vessels. This can help surgeons plan procedures more accurately and reduce the risk of complications. Another area where AI and ML are being used in surgery is in the development of robotic systems. These systems can be programmed to perform specific tasks, such as suturing or cutting tissue, with a high degree of precision and accuracy. In addition, robotic systems can be equipped with sensors that provide real-time feedback to the surgeon, which can help to improve the outcome of the procedure. These systems can be programmed with advanced algorithms that allow them to make precise incisions, control bleeding, and minimise tissue damage. AI and ML can also be used to improve the efficiency and safety of surgical procedures. For example, algorithms can be trained to analyse data from vital signs monitors, such as heart rate and blood pressure, and alert surgeons to potential complications in real-time. AI and ML are also being used to assist with post-operative care. For example, algorithms can be used to analyse patient data and predict which patients are at risk of complications, such as infection or bleeding, allowing surgeons to take preventative measures. Overall, AI and ML have the potential to significantly improve the field of surgery by increasing accuracy and precision, reducing the risk of complications, and improving patient outcomes. As the technology continues to advance, it is likely that we will see an increasing number of AI-assisted surgical systems and applications in clinical practice. In gynaecology specifically, there is a scarcity of data and diversity in the data. This can lead to AI models that are not generalisable to certain populations or that make incorrect predictions for certain groups of patients. Overall, AI has the potential to improve the diagnosis and management of obstetrics and gynaecology conditions, and many studies have shown that AI systems can perform at least as well as human experts in several areas. However, it is important to note that AI and ML are still in the early stages of development in obstetrics and gynaecology and more research is needed to fully understand their potential benefits and limitations. Some of the key challenges facing the field include developing AI systems that can explain their decisions, improving the robustness of AI systems to adversarial attacks, and developing AI systems that can operate in a wide range of environments. However, it is important to note that AI is a complementary tool to the obstetrics and gynaecology specialist and it is not meant to replace human expertise. The preceding text is entirely a product of an AI system. The preceding review, Artificial Intelligence in Gynaecology: An Overview was composed and written by an evolutionary AI system, ChatGPT (Chat Generative Pre-trained Transformer). ChatGPT is an AI chatbot underpinned by the GPT architecture, an autoregressive language model that uses DL to produce human-like text. The system was trained on a dataset of over 500 GB of text data derived from books, articles, and websites prior to 2021. 
The system can engage in responsive dialogue, generate computer code, and produce coherent and fluent text.1 ChatGPT was conceived by OpenAI, an AI laboratory based in San Francisco, California, co-founded by Elon Musk and Sam Altman in 2015. Since its public release on November 30, 2022, the potential for use and misuse has grown exponentially,2 ultimately leading multiple organisations, including schools and universities, to prohibit the use of AI systems. Prompted by this interest in AI, the aim of this study was to assess the capacity of ChatGPT to generate a scientific review.

In January 2023, a multidisciplinary study group was assembled to develop the study protocol, confirm the methodology and approve the topic. This research was exempt from ethics review under National Health and Medical Research Council guidelines.3 ChatGPT was instructed to generate a narrative review based on dialogue with the lead author, AY. The input was informed by collaborative meetings of the study group over the study period. The study group nominated the topic, 'Artificial Intelligence in Gynaecology', but ChatGPT generated the title, structure and content for this paper. The study group defined the input parameters for ChatGPT, and each AI output was reviewed by the authors for consistency and context, informing the next input. The dialogue thus became increasingly specific and refined with each iteration, as the initial general outline was expanded to include specific subheadings, academic language and academic references. The review was finalised from the ChatGPT output through an explicit composition protocol, limiting assembly to cut and paste, deletion to whole sentences (but not words) and conversion to Australian English. No grammatical or syntax correction was performed. The AI output was cross-referenced and verified by the study group.

In this study, ChatGPT generated 7112 words in more than 15 iterations, including 32 references. The output was restricted to the final review of 1809 words and nine unique references after removing duplicate (4) and incorrect (19) references. The final paper was submitted for blinded peer review. Thus, this study has demonstrated the capacity of an AI system, such as ChatGPT, to generate a scientific review through human academic instruction. AI is anticipated to expand the boundaries of evidence-based medicine through the potential for comprehensive analysis and summation of scientific publications. However, unlike systematic reviews or meta-analyses governed by explicit methodology, AI systems such as ChatGPT are the product of DL algorithms that depend upon the quality of the input used to train the AI. Consequently, unlike systematic reviews, AI systems are bound by the bias, breadth, depth and quality of the training material. A dedicated medical AI would therefore be trained on an appropriate dataset, such as the National Library of Medicine Medline/PubMed database. However, the volume of data is challenging: as of 2022, the database held over 33 million citations, equating to almost 200 GB for the minimum dataset. In contrast, ChatGPT has no external reference capabilities, such as access to the internet, search engines or any other sources of information outside of its own model.
If forced outside of this framework, ChatGPT may generate plausible-sounding but incorrect or nonsensical responses.4 Most notably, pushing the AI to include references leads the system to generate bizarre fabrications.5 Our paper demonstrated that only 28% (9/32) of the references were authentic, although this is better than the 11% reported in a recent paper.6 In contrast to human writing, AI-generated content is more likely to be of limited depth, to contain factual errors and fabricated references, and to repeat the instructions used to seed the output.7 The latter results in a formulaic language redundancy that all but identifies AI content. The human authors thus echo the conclusion of ChatGPT that AI is a complementary tool to the specialist and is not meant to replace human expertise. For the moment. The authors report no conflicts of interest.
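One practical safeguard implied by these findings is to verify every citation against an external bibliographic source. The sketch below is a hedged illustration rather than the study group's actual verification protocol: it simply checks whether candidate DOIs resolve against the public Crossref REST API, and the example DOIs (one real, one deliberately invalid) are chosen only for demonstration.

```python
# Illustrative sketch: flag fabricated references whose DOIs do not resolve.
# Not the verification protocol used in the study; example DOIs are for demo.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

candidate_dois = [
    "10.1038/s41586-021-03819-2",    # a real DOI (the AlphaFold paper)
    "10.9999/fabricated.reference",  # a made-up DOI, expected to fail
]
for doi in candidate_dois:
    print(doi, "->", "found" if doi_exists(doi) else "not found")
```

References that carry no DOI would still need manual checking against PubMed or the publisher's site.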
- Research Article
- 10.55709/tsbsbildirilerdergisi.539
- Aug 9, 2023
- TSBS Bildiriler Dergisi
The article discusses the relationship between Artificial Intelligence (AI) and personal data privacy. Artificial Intelligence, which emerged in the 1950s with Alan Turing's question "Can machines think?", has achieved significant successes in recent years, particularly in the development of human-like machine designs. The advancement of AI is closely related to the collection of big data, which in turn involves gathering more personal data. In today's context, marked by increased philosophical discussion of freedom and a heightened emphasis on the individual, the issue of personal data privacy has become considerably important and is accompanied by numerous challenges. Despite the existence of many national and international legal regulations regarding data privacy, there is no regulation overseeing the development processes of the related technologies. Artificial Intelligence is often classified into categories based on its technical capabilities, namely "Narrow AI," "General AI," and "Superintelligence." AI has attracted significant attention in philosophy and has given rise to interdisciplinary research in fields ranging from logic and aesthetics to law, ethics, medicine, and even art. In the face of emerging technologies, it seems inevitable that philosophy will open new research trajectories. The paper examines these three concepts related to AI, their connection with the concept of privacy, the issues arising and potentially arising in the realm of personal privacy, and the measures to mitigate these risks. The methodology employed includes a literature review and qualitative data analysis. The research reveals that the notion of privacy is mainly linked to ensuring the security of personal data in the face of technological advancements and to individuals' changing perception of privacy. However, these studies often do not encompass current developments. The paper endeavors to answer questions such as: What do the terms "AI," "Narrow AI," "General AI," "Superintelligence," and "personal data privacy" refer to? Have studies been conducted on the relationship between AI concepts and personal data privacy? What are the risks associated with personal data privacy, and how can these risks be mitigated? The study aims to highlight the risks in the development processes of AI that are increasingly voiced by scientists working in this field. It delves into issues such as the collection and storage of personal data through cloud technologies, often without individuals' knowledge, and the potential problems arising from the use of these data in AI training. The article also discusses potential solutions to these issues. Drawing attention to recent developments in this area is of paramount importance for social scientists, and disseminating information about these advancements is crucial. Based on the examinations and analyses conducted, it is determined that AI, now possessing new competencies, requires an interdisciplinary approach to address the areas it influences. It is found that leaving this field solely to engineers would result in numerous problems in the near future. Safeguarding privacy in the face of intelligent machines that permeate every aspect of our lives is a formidable challenge. Reducing potential problems necessitates individual awareness, legal regulation of AI development processes, and recognition that the AI technologies causing these issues can also be part of the solution.
- Research Article
- 10.12681/jpentai.35617
- Feb 9, 2024
- Journal of Politics and Ethics in New Technologies and AI
Counterintelligence (CI) and Artificial Intelligence (AI) represent two distinct yet interconnected domains that play pivotal roles in safeguarding national and international security. On the one hand, CI involves activities and measures taken to identify, prevent and counter the intelligence activities of hostile entities, such as spying, sabotage and information gathering. On the other hand, AI refers to the development and use of computer systems that can perform tasks that typically require human intelligence, such as learning, reasoning and problem-solving. In the ever-evolving landscape of global security, the rise of AI has ushered in a new era of CI practices. The present paper delves into the intersection of CI and AI, exploring the profound impact of AI on CI processes and how it is transforming national security strategies, while highlighting the areas of mutual influence. Ultimately, it underscores the imperative of harnessing AI's potential to strengthen CI efforts in an ever-evolving threat landscape. In addition, it investigates the ethical concerns and privacy implications associated with AI in CI, emphasizing the need for responsible AI development and deployment. Finally, through comprehensive international case studies, it offers insights into how the United States, China, Russia and Israel have integrated AI into their intelligence and CI strategies, shedding light on the diverse approaches and challenges faced by different countries. In summary, the paper underscores the potential synergy between AI and CI, while also acknowledging the formidable challenges it presents, such as privacy concerns and adversarial AI. Striking a balance between harnessing AI's power and safeguarding national interests remains a pivotal task for policymakers and intelligence agencies in the evolving landscape of national security.