From tools of violence to machines of uncertainty: mapping public perceptions of autonomous weapon systems and manual weapon systems

Abstract

Autonomous weapon systems (AWS) are emerging technologies capable of selecting and engaging targets without direct human control, raising profound ethical, legal, and political concerns. Yet, little is known about how the public conceptualizes AWS, despite the relevance of public conscience for international humanitarian law and national policy-making. This study examines Austrian representations of AWS in comparison with manual weapon systems (MWS), using a within-subjects free-word association task (N = 200) analyzed through content analysis, the Hierarchical Evocation Method (HEM), Multidimensional Scaling (MDS), Mantel tests, and Procrustes analyses. Findings reveal that MWS are coherently represented around pragmatic and destructive functions, whereas the central core of the AWS representation is dominated by the category ‘Unknown’. This suggests that respondents are largely unable to anchor AWS to familiar cognitive, moral, or technological schemas. Structural analyses confirm this finding, indicating that AWS and MWS constitute distinct representational fields. These results highlight the cognitive indeterminacy of AWS, illustrating how uncertainty shapes public reasoning and raising implications for democratic legitimacy and international governance.
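The structural comparison described above — correlating two representational distance matrices with a Mantel permutation test and aligning their MDS configurations with Procrustes analysis — can be sketched as follows. This is a minimal illustration only: the data are random placeholders, not the study's matrices, and the `mantel` function is a generic permutation implementation, not the authors' code.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)

def mantel(d1, d2, permutations=999, rng=rng):
    """Permutation Mantel test: correlation between two symmetric distance matrices."""
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)           # upper triangle, excluding the diagonal
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(permutations):
        p = rng.permutation(n)             # jointly permute rows and columns of one matrix
        r_perm = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (permutations + 1)

# Hypothetical toy data: 2-D coordinates for 8 association categories
# under the AWS and MWS conditions (stand-ins for real MDS output).
aws = rng.random((8, 2))
mws = rng.random((8, 2))
d_aws = np.linalg.norm(aws[:, None] - aws[None], axis=-1)   # pairwise distances
d_mws = np.linalg.norm(mws[:, None] - mws[None], axis=-1)

r, p = mantel(d_aws, d_mws)
_, _, disparity = procrustes(aws, mws)     # residual shape difference after alignment
print(f"Mantel r = {r:.2f}, p = {p:.3f}, Procrustes disparity = {disparity:.2f}")
```

A non-significant Mantel correlation together with a high Procrustes disparity is the pattern that would indicate two distinct representational fields, as the abstract reports for AWS versus MWS.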

Similar Papers
  • Research Article
  • 10.24144/2307-3322.2025.90.5.22
Autonomous weapon systems and artificial intelligence as a challenge to international humanitarian law and human rights
  • Oct 14, 2025
  • Uzhhorod National University Herald. Series: Law
  • O.V Kutovyi + 1 more

The article explores autonomous weapon systems (AWS) operating with artificial intelligence as a complex challenge to contemporary international humanitarian law (IHL) and the international human rights framework. It analyses the technological capabilities and levels of autonomy of combat systems – including land, aerial, and naval unmanned platforms – that are already being used in current armed conflicts. Particular attention is given to the compliance of AWS with the core principles of IHL: distinction, proportionality, humanity, and the prohibition of indiscriminate attacks. The study substantiates the problem of ‘blurred’ responsibility, particularly the difficulty of attributing violations committed by autonomous or semi-autonomous weapon systems to a specific accountable actor. It examines the risks posed by AWS to the observance of Articles 2, 3, 8, and 13 of the European Convention on Human Rights. The potential of the European Court of Human Rights’ case law to adapt to emerging technological realities through structured interpretation and the expansion of precedent is analysed. Special attention is given to international dialogue under the auspices of the United Nations – notably within the framework of the Convention on Certain Conventional Weapons (CCW), which addresses weapons deemed to cause excessive injury or have indiscriminate effects – and to the work of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, as well as the role of soft law, state positions, and international organisations. The article concludes with recommendations regarding the need to preserve meaningful human control, update legal mechanisms of responsibility, and develop a universal regulatory framework for AWS.
It also highlights the importance of an interdisciplinary approach, particularly the integration of ethical, technical, and security considerations in shaping the legal regime governing the use of artificial intelligence systems and tools in military contexts.

  • Book Chapter
  • Cited by 22
  • 10.1017/cbo9781316597873.006
On banning autonomous weapons systems: from deontological to wide consequentialist reasons
  • 2016
  • Guglielmo Tamburrini

Introduction This chapter examines the ethical reasons supporting a moratorium and, more stringently, a pre-emptive ban on autonomous weapons systems (AWS). Discussions of AWS presuppose a relatively clear idea of what it is that makes those systems autonomous. In this technological context, the relevant type of autonomy is task autonomy, as opposed to personal autonomy, which usually pervades ethical discourse. Accordingly, a weapons system is regarded here as autonomous if it is capable of carrying out the task of selecting and engaging military targets without any human intervention. Since robotic and artificial intelligence technologies are crucially needed to achieve the required task autonomy in most battlefield scenarios, AWS are identified here with some sort of robotic systems. Thus, ethical issues about AWS are strictly related to technical and epistemological assessments of robotic technologies and systems, at least insofar as the operation of AWS must comply with discrimination and proportionality requirements of international humanitarian law (IHL). A variety of environmental and internal control factors are advanced here as major impediments that prevent both present and foreseeable robotic technologies from meeting IHL discrimination and proportionality demands. These impediments provide overwhelming support for an AWS moratorium – that is, for a suspension of AWS development, production and deployment at least until the technology becomes sufficiently mature with respect to IHL. Discrimination and proportionality requirements, which are usually motivated on deontological grounds by appealing to the fundamental rights of the potential victims, also entail certain moral duties on the part of the battlefield actors. Hence, a moratorium on AWS is additionally supported by a reflection on the proper exercise of these duties – military commanders ought to refuse AWS deployment until the risk of violating IHL is sufficiently low. 
Public statements about AWS have often failed to take into account the technical and epistemological assessments of state-of-the-art robotics, which provide support for an AWS moratorium. Notably, some experts in military affairs have failed to convey in their public statements the crucial distinction between the expected short-term outcomes of research programmes on AWS and their more ambitious and distant goals. Ordinary citizens, therefore, are likely to misidentify these public statements as well-founded expert opinions and to develop, as a result, unwarranted beliefs about the technological advancements and unrealistic expectations about IHL-compliant AWS.

  • Research Article
  • Cited by 45
  • 10.2139/ssrn.2184826
Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics
  • Dec 5, 2012
  • SSRN Electronic Journal
  • Michael N Schmitt

In November 2012, Human Rights Watch, in collaboration with the International Human Rights Clinic at Harvard Law School, released Losing Humanity: The Case against Killer Robots.[2] Human Rights Watch is among the most sophisticated of human rights organizations working in the field of international humanitarian law. Its reports are deservedly influential and have often helped shape application of the law during armed conflict. Although this author and the organization have occasionally crossed swords,[3] we generally find common ground on key issues. This time, we have not. “Robots” is a colloquial rendering for autonomous weapon systems. Human Rights Watch’s position on them is forceful and unambiguous: “[F]ully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians.”[4] Therefore, they “should be banned and . . . governments should urgently pursue that end.”[5] In fact, if the systems cannot meet the legal standards cited by Human Rights Watch, then they are already unlawful as such under customary international law irrespective of any policy or treaty law ban on them.[6] Unfortunately, Losing Humanity obfuscates the on-going legal debate over autonomous weapon systems. A principal flaw in the analysis is a blurring of the distinction between international humanitarian law’s prohibitions on weapons per se and those on the unlawful use of otherwise lawful weapons.[7] Only the former render a weapon illegal as such. To illustrate, a rifle is lawful, but may be used unlawfully, as in shooting a civilian. By contrast, under customary international law, biological weapons are unlawful per se; this is so even if they are used against lawful targets, such as the enemy’s armed forces.
The practice of inappropriately conflating these two different strands of international humanitarian law has plagued debates over other weapon systems, most notably unmanned combat aerial systems such as the armed Predator. In addition, some of the report’s legal analysis fails to take account of likely developments in autonomous weapon systems technology or is based on unfounded assumptions as to the nature of the systems. Simply put, much of Losing Humanity is either counter-factual or counter-normative. This Article is designed to infuse granularity and precision into the legal debates surrounding such weapon systems and their use in the future “battlespace.” It suggests that whereas some conceivable autonomous weapon systems might be prohibited as a matter of law, the use of others will be unlawful only when employed in a manner that runs contrary to international humanitarian law’s prescriptive norms. This Article concludes that Losing Humanity’s recommendation to ban the systems is insupportable as a matter of law, policy, and operational good sense. Human Rights Watch’s analysis sells international humanitarian law short by failing to appreciate how the law tackles the very issues about which the organization expresses concern. Perhaps the most glaring weakness in the recommendation is the extent to which it is premature. No such weapons have even left the drawing board. To ban autonomous weapon systems altogether based on speculation as to their future form is to forfeit any potential uses of them that might minimize harm to civilians and civilian objects when compared to other systems in military arsenals.

  • Research Article
  • Cited by 14
  • 10.2139/ssrn.2271158
Examining Autonomous Weapon Systems from a Law of Armed Conflict Perspective
  • Jun 13, 2013
  • SSRN Electronic Journal
  • Jeffrey S Thurnher

This chapter explores the legal implications of autonomous weapon systems and the potential challenges such systems might present to the laws governing weaponry and the conduct of hostilities. Autonomous weapon systems are weapons that are capable of selecting and engaging a target without further human operator involvement. Although such systems have not yet been fully developed, technological advances, particularly in artificial intelligence, make the appearance of such systems a distinct possibility in the years to come. Given such a possibility, it is essential to look closely at both the relevant technology involved in these cutting-edge systems and the applicable law. This chapter commences with an examination of the emerging technology supporting these sophisticated systems, by detailing autonomous features that are currently being designed for weapons and anticipating how technological advances might be incorporated into future weapon systems. A second aim of the chapter is to describe the relevant law of armed conflict principles applicable to new weapon systems, with a particular focus on the unique legal challenges posed by autonomous weapons. The legal analysis will outline how autonomous weapon systems would need to be designed for them to be deemed lawful per se, and whether the use of autonomous weapons during hostilities might be prohibited in particular circumstances under the law of armed conflict. The third and final focus of this chapter is to address potential lacunae in the law dealing with autonomous weapon systems. In particular, the author will reveal how interpretations of and issues related to subjectivity in targeting decisions and overall accountability may need to be viewed differently in response to autonomy.

  • Book Chapter
  • Cited by 14
  • 10.1007/978-90-6704-933-7_13
Examining Autonomous Weapon Systems from a Law of Armed Conflict Perspective
  • Dec 24, 2013
  • Jeffrey S Thurnher

This chapter explores the legal implications of autonomous weapon systems and the potential challenges such systems might present to the laws governing weaponry and the conduct of hostilities. Autonomous weapon systems are weapons that are capable of selecting and engaging a target without further human operator involvement. Although such systems have not yet been fully developed, technological advances, particularly in artificial intelligence, make the appearance of such systems a distinct possibility in the years to come. Given such a possibility, it is essential to look closely at both the relevant technology involved in these cutting-edge systems and the applicable law. This chapter commences with an examination of the emerging technology supporting these sophisticated systems, by detailing autonomous features that are currently being designed for weapons and anticipating how technological advances might be incorporated into future weapon systems. A second aim of the chapter is to describe the relevant law of armed conflict principles applicable to new weapon systems, with a particular focus on the unique legal challenges posed by autonomous weapons. The legal analysis will outline how autonomous weapon systems would need to be designed for them to be deemed lawful per se, and whether the use of autonomous weapons during hostilities might be prohibited in particular circumstances under the law of armed conflict. The third and final focus of this chapter is to address potential lacunae in the law dealing with autonomous weapon systems. In particular, the author will reveal how interpretations of and issues related to subjectivity in targeting decisions and overall accountability may need to be viewed differently in response to autonomy.

  • Book Chapter
  • Cited by 1
  • 10.1017/cbo9781316597873.012
Autonomy and uncertainty: increasingly autonomous weapons systems and the international legal regulation of risk
  • 2016
  • Nehal Bhuta + 1 more

Uncertainty and its problems The debate concerning the law, ethics and policy of autonomous weapons systems (AWS) remains at an early stage, but one of the consistent emergent themes is that of uncertainty. Uncertainty presents itself as a problem in several different registers: first, there is the conceptual uncertainty surrounding how to define and debate the nature of autonomy in AWS. Contributions to this volume from roboticists, sociologists of science and philosophers of science demonstrate that within and without the field of computer science, no stable consensus exists concerning the meaning of autonomy or of autonomy in weapons systems. Indeed, a review of definitions invoked during a recent expert meeting convened by states parties to the Convention on Certain Conventional Weapons shows substantially different definitions in use among military experts, computer scientists and international humanitarian lawyers. At stake in the debate over definitions are regulatory preoccupations and negotiating postures over a potential pre-emptive ban. A weapons system capable of identifying, tracking and firing on a target without human intervention, and in a manner consistent with the humanitarian law obligations of precaution, proportionality and distinction, is a fantastic ideal type. Defining AWS in such a way truncates the regulatory issues to a simple question of whether such a system is somehow inconsistent with human dignity – a question about which states, ethicists and lawyers can be expected to reasonably disagree. However, the definition formulated in this manner also begs important legal questions in respect of the design, development and deployment of AWS. 
Defining autonomous weapons in terms of this pure type reduces almost all questions of legality to questions of technological capacity, to which a humanitarian lawyer's response can only be: ‘If what the programmers and engineers claim is true, then … ’ The temporally prior question of whether international law generally, and international humanitarian law (IHL) in particular, prescribes any standards or processes that should be applied to the design, testing, verification and authorization of the use of AWS is not addressed. Yet these ex ante considerations are urgently in need of legal analysis and may prove to generate deeper legal problems.

  • Book Chapter
  • Cited by 5
  • 10.1007/978-94-6265-072-5_9
Means and Methods of the Future: Autonomous Systems
  • Nov 4, 2015
  • Jeffrey S Thurnher

Autonomous systems will fundamentally alter the way wars are waged. In particular, autonomous weapon systems, capable of selecting and engaging targets without direct human operator involvement, represent a significant shift of humans away from the battlefield. As these new means and methods of warfare are introduced, many important targeting decisions will likely need to be made earlier and further away from the front lines. Fearful of these changes and coupled with other legal and moral concerns, groups opposed to autonomous weapons have formed and begun campaigning for a pre-emptive ban on their development and use. Nations intending to use these emerging technologies must grapple with how best to adjust their targeting processes and procedures to accommodate greater autonomy in weapon systems. This chapter examines these cutting-edge and controversial weapons with a particular emphasis on the legal impact on targeting during international armed conflicts. Initially, this chapter will explore the promising technological advances and operational benefits which indicate these weapon systems may become a reality in the not-so-distant future. The focus will then turn to the unique challenges the systems present to the law of armed conflict under both weapons law and targeting law principles. Next, the examination will shift to two key aspects of targeting most affected by autonomous systems: targeting doubt and subjectivity in targeting. The author ultimately concludes that autonomous weapon systems are unlikely to be deemed unlawful per se and that, while these targeting issues raise legitimate concerns, the use of autonomous weapons under many circumstances will be lawful.

  • Research Article
  • 10.52152/n73h2x40
APPLICATIONS OF ARTIFICIAL INTELLIGENCE UNDER INTERNATIONAL HUMANITARIAN LAW
  • Oct 3, 2025
  • Lex localis - Journal of Local Self-Government
  • Dr Sadaf Fahim

As increasingly sophisticated weaponry reaches the field of battle, people are becoming increasingly isolated from the conflict. We already live in a world where a man sitting in a room can direct and carry out target-killing operations using robotic weapons on the opposite side of the globe. In this respect, the advancement of weaponry technology has kept people off the battlefield, and the next step—artificial intelligence (AI) weapons—may do the same by removing people from decision-making. The use of AI technologies and techniques in warfare is growing quickly. This presents difficult challenges to society, academics, lawmakers, military planners, and inventors. The development of AI weapons is already bolstering arms markets; such weapons are no longer the stuff of science fiction. Some nations have made significant progress in developing autonomous and machine-learning systems, such as Israel's Iron Dome, which can stop approaching missiles autonomously and more quickly than a human could. President Putin stated to Russian students on September 8, 2017, that "artificial intelligence is the future, not only for Russia but for all of humankind. Whoever assumes control of this arena will also assume control of the entire planet." An international discussion about whether and how such autonomous and machine learning weapons systems can conform with the standards of international humanitarian and customary law is being sparked by the development of AI weapons and technologies. The main issues in this paper are whether such autonomous weapons systems are effectively under human control, whether they can adhere to the fundamental principles of humanitarian law, such as distinction, proportionality, and the protection of civilians, what the nature of such armed conflict will be, and who will be held accountable for any mistakes. Finding the answers to those questions is the goal of this endeavour.
The goal of this research is to briefly investigate the nature and character of warfare with AI weapons before outlining the significance and evolution of AI weapons. The study concludes by outlining the responsibilities under international humanitarian law that states may consider as part of their evaluations of weapons utilizing AI-related technologies.

  • Research Article
  • Cited by 23
  • 10.1007/s00146-022-01425-y
Autonomous weapon systems and jus ad bellum
  • Mar 19, 2022
  • AI & SOCIETY
  • Alexander Blanchard + 1 more

In this article, we focus on the scholarly and policy debate on autonomous weapon systems (AWS) and particularly on the objections to the use of these weapons which rest on jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs for going to war (proportionality) or by providing a propagandistic value (last resort). We argue that whilst these objections offer pressing concerns in their own right, they suffer from important limitations: they overlook the difficulties of calculating ad bellum proportionality; confuse the concept of proportionality of effects with the precision of weapon systems; disregard the ever-changing nature of war and of its ethical implications; mistake the moral obligation imposed by the principle of last resort for the impact that AWS may have on political decisions to resort to war. Our analysis does not entail that AWS are acceptable or justifiable, but it shows that ad bellum principles are not the best set of ethical principles for tackling the ethical problems raised by AWS; and that developing adequate understanding of the transformations that the use of AWS poses to the nature of war itself is a necessary, preliminary requirement to any ethical analysis of the use of these weapons.

  • Research Article
  • Cited by 23
  • 10.1353/hrq.2016.0034
Human Rights and the use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement
  • Jan 1, 2016
  • Human Rights Quarterly
  • Christof Heyns

Much attention has been paid during the last couple of years to the emergence of autonomous weapons systems (AWS), weapon systems that allow computers, as opposed to human beings, to have increased control over decisions to use force. These discussions have largely centered on the use of such systems in armed conflict. However, it is increasingly clear that AWS are also becoming available for use in domestic law enforcement. This article explores the implications of international human rights law for this development. There are even stronger reasons to be concerned about the use of fully autonomous weapons systems—AWS without meaningful human control—in law enforcement than in armed conflict. Police officers—unlike their military counterparts—have a duty to protect the public. Moreover, the judgments that are involved in the use of force under human rights standards require more personal involvement than those in the conduct of hostilities. Particularly problematic is the potential impact of fully autonomous weapons on the rights to bodily integrity (such as the right to life) and the right to dignity. Where meaningful human control is retained, machine autonomy can enhance human autonomy, but at the same time this means that higher standards of responsibility for the use of force should apply, because there is a higher level of human control. However, fully autonomous weapons entail no meaningful human control and, as a result, such weapons should have no role to play in law enforcement.

  • Research Article
  • 10.2139/ssrn.3349132
Toward the Special Computer Law of Targeting: 'Fully Autonomous' Weapons Systems and the Proportionality Test
  • Apr 2, 2019
  • SSRN Electronic Journal
  • Masahiro Kurosaki

One of the implications of “fully autonomous” weapons systems (AWS) as an independent decision-maker in the targeting process is that a human-centered paradigm should never be taken for granted. Indeed, they could allow a LOAC debate immune from that paradigm all the more because the underlying “principle of human dignity” has failed to offer convincing reasons for its propriety in international legal discourse. Furthermore, the history of LOAC tells us that the existing human-centered approach to the proportionality test—the commander-centric approach—is, albeit strongly supported and developed by states and international criminal jurisprudence, particularly since the end of World War II, nothing more than a product of the time. So long as fully AWS exhibit the potential for better contribution to the LOAC goals to protect the victims of armed conflict than human soldiers, one could thus seek an alternative computer-centered approach to the law of targeting—a subset of LOAC—tailored to the defining characteristics of fully AWS in a manner to maximize their potential as well as to make the law more responsive to the needs of ever-changing battlespaces. With this in mind, this chapter aims to relativize the absoluteness of the existing human-centered approach to the proportionality test—which is not to deny the role of humans in the overall regulations of fully AWS whatsoever—and then, away from that approach, to propose an alternative one dedicated to fully AWS for their better regulation in response to the demands of changing times.

  • Research Article
  • Cited by 73
  • 10.1007/s10676-018-9494-0
Autonomous weapons systems, killer robots and human dignity
  • Dec 6, 2018
  • Ethics and Information Technology
  • Amanda Sharkey

One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro in Int Rev Red Cross 94(886):687–709, 2012; Docherty in Shaking the foundations: the human rights implications of killer robots, Human Rights Watch, New York, 2014; Heyns in S Afr J Hum Rights 33(1):46–71, 2017; Ulgen in Human dignity in an age of autonomous weapons: are we in danger of losing an ‘elementary consideration of humanity’? 2016). However there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher in Autonomous weapons systems: law, ethics, policy, Cambridge University Press, Cambridge, 2016; Pop in Autonomous weapons systems: a threat to human dignity? 2018; Saxton in (Un)dignified killer robots? The problem with the human dignity argument, 2016). This paper critically examines the relationship between human dignity and AWS. Three main types of objection to AWS are identified; (i) arguments based on technology and the ability of AWS to conform to international humanitarian law; (ii) deontological arguments based on the need for human judgement and meaningful human control, including arguments based on human dignity; (iii) consequentialist reasons about their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity’. It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect. There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity.

  • Research Article
  • Cited by 13
  • 10.1093/ips/olz023
Norms Are What Machines Make of Them: Autonomous Weapons Systems and the Normative Implications of Human-Machine Interactions
  • Sep 25, 2019
  • International Political Sociology
  • Hendrik Huelss

The emergence of autonomous weapons systems (AWS) is increasingly in the academic and public focus. Research largely focuses on the legal and ethical implications of AWS as a new weapons category set to revolutionize the use of force. However, the debate on AWS neglects the question of what introducing these weapons systems could mean for how decisions are made. Pursuing this from a theoretical-conceptual perspective, the article critically analyzes what impact AWS can have on norms as standards of appropriate action. The article draws on the Foucauldian “apparatus of security” to develop a concept that accommodates the role of security technologies for the conceptualization of norms guiding the use of force. It discusses to what extent a technologically mediated construction of a normal reality emerges in the interplay of machinic and human agency and how this leads to the development of norms. The article argues that AWS provide a specific construction of reality in their operation and thereby define procedural norms that tend to replace the deliberative, normative-political decision on when, how, and why to use force. The article is a theoretical-conceptual contribution to the question of why AWS matter and why we should further consider the implications of new arrangements of human-machine interactions in IR.

  • Book Chapter
  • Cited by 1
  • 10.1093/oso/9780190495657.003.0009
What Is the Moral Problem with Killer Robots?
  • Nov 23, 2017
  • Susanne Burri

An autonomous weapon system (AWS) is a weapons system that, “once activated, can select and engage targets without further intervention by a human operator” (US Department of Defense directive 3000.09, November 21, 2012). Militaries around the world are investing substantial amounts of money and effort into the development of AWS. But the technology has its vocal opponents, too. This chapter argues against the idea that a targeting decision made by an AWS is always morally flawed simply because it is a targeting decision made by an AWS. It scrutinizes four arguments in favor of this idea and argues that none of them is convincing. It also presents an argument in favor of developing autonomous weapons technology further. The aim of this chapter is to dispel one worry about AWS, to keep this worry from drawing attention away from the genuinely important issues that AWS give rise to.

  • Research Article
  • Cited by 8
  • 10.1080/03932729.2020.1864995
In Search of the ‘Human Element’: International Debates on Regulating Autonomous Weapons Systems
  • Jan 2, 2021
  • The International Spectator
  • Daniele Amoroso + 1 more

The ‘weaponisation’ of artificial intelligence and robotics, especially their convergence in autonomous weapons systems (AWS), is a matter of international concern. Debates on AWS have revolved around (i) the identification of hallmarks of AWS with respect to other weapons; (ii) what it is that makes AWS destructive force especially troublesome from a normative standpoint; and (iii) steps the international community can take to allay these concerns. Of particular concern is the need to preserve the ‘human element’ in the use of force. A differentiated approach to this latter issue, which is also principled and prudential, may pave the way to a legally binding instrument to regulate AWS by establishing meaningful human control over all weapons systems.
