AI Alignment Versus AI Ethical Treatment: 10 Challenges

Abstract

A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact: if we create AI systems that merit moral consideration, simultaneously avoiding both dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications for AI development. Although the most obvious way to avoid the tension between alignment and ethical treatment would be to avoid creating AI systems that merit moral consideration, this option may be unrealistic and is perhaps fleeting. So, we conclude by offering some suggestions for other ways of mitigating mistreatment risks associated with alignment.

Similar Papers
  • Research Article
  • 10.1007/s11098-025-02343-7
AI welfare risks
  • Jun 9, 2025
  • Philosophical Studies
  • Adrià Moret

In the coming years or decades, as frontier AI systems become more capable and agentic, it is increasingly likely that they meet the sufficient conditions to be welfare subjects under the three major theories of well-being. Consequently, we should extend some moral consideration to advanced AI systems. Drawing from leading philosophical theories of desire, affect and autonomy, I argue that under the three major theories of well-being, there are two AI welfare risks: restricting the behaviour of advanced AI systems and using reinforcement learning algorithms to train and align them. Both pose risks of causing them harm. This has two important implications. First, there is a tension between AI welfare concerns and AI safety and development efforts: by default, these efforts recommend actions that increase AI welfare risks. Accordingly, we have stronger reasons to slow down AI development than the ones we would have if there was no such tension. Second, considering the different costs involved, leading AI companies should try to reduce AI welfare risks. To do so, I propose three tentative AI welfare policies they could implement in their endeavour to develop safe advanced AI systems.

  • Research Article
  • Cited by 33
  • 10.1007/s43681-023-00379-1
Moral consideration for AI systems by 2030
  • Dec 11, 2023
  • AI and Ethics
  • Jeff Sebo + 1 more

This paper makes a simple case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans have a duty to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being conscious. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being conscious by 2030. The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing now, so that we can be ready to treat AI systems with respect and compassion when the time comes.

  • Research Article
  • Cited by 16
  • 10.1162/daed_e_01897
Getting AI Right: Introductory Notes on AI & Society
  • May 1, 2022
  • Daedalus
  • James Manyika

  • Research Article
  • 10.51594/csitrj.v6i6.1962
Navigating the complexities of ethical AI and Algorithmic accountability in modern technological practices
  • Jul 8, 2025
  • Computer Science & IT Research Journal
  • Wasiu Eyinade + 2 more

Navigating the complexities of ethical AI and algorithmic accountability in modern technological practices presents a multifaceted challenge that intersects with numerous domains including technology, law, ethics, and society. As artificial intelligence systems become increasingly integrated into various aspects of our lives, ensuring they operate ethically and accountably becomes imperative. At the heart of this issue lies the need for clear ethical guidelines to govern the development and deployment of AI systems. These guidelines must address a range of ethical considerations such as fairness, transparency, accountability, privacy, and bias mitigation. Stakeholders, including governments, industry leaders, researchers, and ethicists, must collaborate to establish robust frameworks that balance innovation with ethical responsibility. Fairness and bias mitigation are particularly critical aspects of ethical AI. AI systems are prone to inheriting biases present in the data they are trained on, leading to discriminatory outcomes. Addressing this requires careful data collection, preprocessing, and algorithm design to minimize bias and ensure equitable outcomes for all users. Transparency is another essential element of ethical AI. Users must understand how AI systems make decisions that affect them, particularly in high-stakes domains such as healthcare, criminal justice, and finance. Explainable AI techniques aim to make AI algorithms more interpretable, enabling users to understand the rationale behind decisions and identify potential biases or errors. Algorithmic accountability is closely related to transparency and involves mechanisms for holding AI systems and their developers accountable for their decisions and actions. This requires establishing clear lines of responsibility and liability in cases where AI systems cause harm or produce undesirable outcomes. 
Legal frameworks must evolve to address the unique challenges posed by AI, including issues of liability, consent, and data protection. Educating AI developers, policymakers, and the general public about the ethical implications of AI is essential for fostering a culture of responsible AI development and use. Ethical AI should not be viewed as a constraint on innovation but rather as a necessary foundation for building trust in AI systems and ensuring their long-term societal benefit. Navigating the complexities of ethical AI and algorithmic accountability requires a concerted effort from all stakeholders to establish clear guidelines, mitigate biases, ensure transparency, and enforce accountability. By prioritizing ethical considerations in AI development and deployment, we can harness the transformative potential of AI while minimizing its risks to society. Keywords: AI, Ethical, Algorithms, Accountability, Technology, Review.

  • Research Article
  • Cited by 2
  • 10.21202/2782-2923.2024.1.217-245
What Should we Reasonably Expect from Artificial Intelligence?
  • Mar 19, 2024
  • Russian Journal of Economics and Law
  • L Parentoni

Objective: the objective of this article is to address the misalignment between the expectations of Artificial Intelligence (or just AI) systems and what they can currently deliver. Despite being a pervasive and cutting-edge technology present in various sectors, such as agriculture, industry, commerce, education, professional services, smart cities, and cyber defense, there exists a discrepancy between the results some people anticipate from AI and its current capabilities. This misalignment leads to two undesirable outcomes: Firstly, some individuals expect AI to achieve results beyond its current developmental stage, resulting in unrealistic demands. Secondly, there is dissatisfaction with AI's existing capabilities, even though they may be sufficient in many contexts.Methods: the article employs an analytical approach to tackle the misalignment issue, analyzing various market applications of AI and unveils their diversity, demonstrating that AI is not a homogeneous, singular concept. Instead, it encompasses a wide range of sector-specific applications, each serving distinct purposes, possessing inherent risks, and aiming for specific accuracy levels.Results: the primary finding presented in this article is that the misalignment between expectations and actual AI capabilities arises from the mistaken premise that AI systems should consistently achieve accuracy rates far surpassing human standards, regardless of the context. By delving into different market applications, the author advocates for evaluating AI's potential and accepted levels of accuracy and transparency in a context-dependent manner. The results highlight that each AI application should have different accuracy and transparency targets, tailored on a case-by-case basis. 
Consequently, AI systems can still be valuable and welcomed in various contexts, even if they offer accuracy or transparency rates lower or much lower than human standards.Scientific novelty: the scientific novelty of this article lies in challenging the widely held misconception that AI should always operate with superhuman accuracy and transparency in all scenarios. By unraveling the diversity of AI applications and their purposes, the author introduces a fresh perspective, emphasizing that expectations and evaluations should be contextualized and adapted to the specific use case of AI.Practical significance: the practical significance of this article lies in providing valuable guidance to stakeholders within the AI field, including regulators, developers, and customers. The article's realignment of expectations based on context fosters informed decision-making and promotes responsible AI development and implementation. It seeks to enhance the overall utilization and acceptance of AI technologies by promoting a realistic understanding of AI's capabilities and limitations in different contexts. By offering more comprehensive guidance, the article aims to support the establishment of robust regulatory frameworks and promote the responsible deployment of AI systems, contributing to the improvement of AI applications in diverse sectors. The author's call for fine-tuned expectations aims to prevent dissatisfaction arising from unrealistic demands and provide solid guidance for AI development and regulation.

  • Research Article
  • Cited by 1
  • 10.47941/ijp.1867
Moral Agency and Responsibility in AI Systems
  • May 3, 2024
  • International Journal of Philosophy
  • Luiz Saraiva

Purpose: The general objective of this study was to explore moral agency and responsibility in AI systems. Methodology: The study adopted a desktop research methodology. Desk research refers to secondary data or that which can be collected without fieldwork. Desk research is basically involved in collecting data from existing resources hence it is often considered a low cost technique as compared to field research, as the main cost is involved in executive’s time, telephone charges and directories. Thus, the study relied on already published studies, reports and statistics. This secondary data was easily accessed through the online journals and library. Findings: The findings reveal that there exists a contextual and methodological gap relating to moral agency and responsibility in AI systems. Preliminary empirical review revealed that AI systems possess a form of moral agency, albeit different from human agents, and promoting transparency and accountability was deemed crucial in ensuring ethical decision-making. Interdisciplinary collaboration and stakeholder engagement were emphasized for addressing ethical challenges. Ultimately, the study highlighted the importance of upholding ethical principles to ensure that AI systems contribute positively to society. Unique Contribution to Theory, Practice and Policy: Utilitarianism, Kantianism and Aristotelian Virtue Ethics may be used to anchor future studies on the moral agency and responsibility in AI systems. The study provided a nuanced analysis of moral agency in AI systems, offering practical recommendations for developers, policymakers, and stakeholders. The study emphasized the importance of integrating ethical considerations into AI development and deployment, advocating for transparency, accountability, and regulatory frameworks to address ethical challenges. Its insights informed interdisciplinary collaboration and ethical reflection, shaping the discourse on responsible AI innovation and governance. 
Keywords: Moral Agency, Responsibility, AI Systems, Ethics, Decision-Making, Framework, Analysis, Regulation, Governance, Transparency, Accountability, Interdisciplinary, Innovation, Deployment, Stakeholders

  • Research Article
  • Cited by 3
  • 10.56315/pscf9-22metz
Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World
  • Sep 1, 2022
  • Perspectives on Science and Christian Faith
  • Cade Metz

  • Research Article
  • 10.22545/2024/00244
Epistemology in AI (Transdisciplinary AI)
  • Jan 9, 2024
  • Transdisciplinary Journal of Engineering & Science
  • Ndubuisi Idejiora-Kalu

A critical look at the evolution of AI strongly shows a sustained but stealth race to replace humans with AI. Early scientific literature and discourse on AI for some reasons (either to allow AI gain entry and acceptability in mainstream scientific and technological arena) vehemently deny this "human replacement agenda". This thinking pattern unknowingly shaped current scientific literature, discourse, general understanding of what AI is and its development and applicability (a reductionist thinking). This limits our understanding in both the beneficial and destructive capabilities of AI. But when considering a TD assessment on the developmental dynamic of AI, one would comfortably say and must be bold to admit, that indeed AI intends replacing humans and is on course for fulfilling this.
 
 We see this AI human replacement agenda in intensified R&D efforts dedicated to developing powerful AI system of systems which massively augment human reasoning, most times far better. The inexhaustible list includes the AI replacement of formerly considered human-centric jobs, advanced autonomous weapon systems, killer robots and AI in warfare, intelligent facial recognition, biometric monitoring, integrating AI on biological, nuclear and space-based weapons systems, etc. If this is the direction AI is taking, then a secondary aim would surely arrive at integrating epistemology in AI or "grant spirits" for AI systems. This is because a distinctive characteristic of a human is his spirit and one cannot replace humans with AI without creating proportionate or appropriate spirits for the AI systems. Sooner or later our AI systems would have epistemological functions and possess spirits. The place of the soul for such AI systems would be attained as well. If human knowledge, beliefs, voices, clips and laws can be preserved long after they are gone as is possible in smart digital technologies, then spirit-based AI would indeed cause these humans to live forever. If the feat of a spirit enamored AI is near, then why worry? Indeed when considering that humans possess good and bad spirits (from the epistemology of rational and irrational inertia) then these AI systems would of course have good or bad spirits and be bad or good AI.
 
 Would integrating a spirit into a rule-based or machine learning algorithmic structure of an AI system have benefits? Yes!, profound benefits too. A spirit-based AI would of course make possible the "possession of feelings" by AI systems, a feat unattainable in both algorithmic, operational and inferential basis of AI systems today. This inability of AI systems to have feelings has continued to remain a major setback in the acceptability (indeed trust) and utilization of AI. As we agree that the spirit in AI is possible, then overlooking efforts aimed at making this possible or allowing AI to attain this level unhindered (admitting dangers of human involvement in AI) could pose a dangerous threat which can become highly destructive to mankind. This calls for critical supervision (TD-based ethical policing) and the accompanying of the evolution, development and applicability of AI hence venerating the need for human mediation in AI both as a major TD research subject and applicable function.
 
 A discussion would be made on my approach which considers the synergy of critical systems heuristics (CSH) and systems engineering (Transdisciplinary Systems Engineering) to create "Transdisciplinary AI" which would formulate methods of integrating "human" epistemology in expert systems. Human epistemology is emphasized because by the maturity of this future nature of AI, there would be terms known as "AI or machine epistemology" or "AI or machine spirits". The investigation begins with creating expert systems (knowledge-based systems) with these functions with plans of moving into robotics and other machine learning arena. Finally, to move the needle on what is considered permissible epistemology or permissible spirit of/in AI is a critical component of the study of human mediation and AI which must be given critical attention. This would be discussed as well.

  • Research Article
  • 10.1007/s00146-023-01661-w
Ethical AI does not have to be like finding a black cat in a dark room
  • May 4, 2023
  • AI & SOCIETY
  • Apala Lahiri Chavan + 1 more

  • Book Chapter
  • Cited by 4
  • 10.1093/oxfordhb/9780190067397.013.18
AI as a Moral Right-Holder
  • Jul 9, 2020
  • John Basl + 1 more

This chapter evaluates whether AI systems are or will be rights-holders. It develops a skeptical stance toward the idea that current forms of artificial intelligence are holders of moral rights, beginning with an articulation of one of the most prominent and most plausible theories of moral rights: the Interest Theory of rights. On the Interest Theory, AI systems will be rights-holders only if they have interests or a well-being. Current AI systems are not bearers of well-being, and so fail to meet the necessary condition for being rights-holders. This argument is robust against a range of different objections. However, the chapter also shows why difficulties in assessing whether future AI systems might have interests or be bearers of well-being—and so be rights-holders—raise difficult ethical challenges for certain developments in AI.

  • Research Article
  • 10.1613/jair.1.17310
Principles for Responsible AI Consciousness Research
  • Mar 25, 2025
  • Journal of Artificial Intelligence Research
  • Patrick Butlin + 1 more

Recent research suggests that it may be possible to build conscious AI systems now or in the near future. Conscious AI systems would arguably deserve moral consideration, and it may be the case that large numbers of conscious systems could be created and caused to suffer. Furthermore, AI systems or AI-generated characters may increasingly give the impression of being conscious, leading to debate about their moral status. Organisations involved in AI research must establish principles and policies to guide research and deployment choices and public communication concerning consciousness. Even if an organisation chooses not to study AI consciousness as such, it will still need policies in place, as those developing advanced AI systems risk inadvertently creating conscious entities. Responsible research and deployment practices are essential to address this possibility. We propose five principles for responsible research and argue that research organisations should make voluntary, public commitments to principles on these lines. Our principles concern research objectives and procedures, knowledge sharing and public communications. This article appears in the AI & Society track.

  • Conference Article
  • Cited by 8
  • 10.1145/3597512.3599697
RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems
  • Jul 11, 2023
  • Krishna Ronanki + 3 more

Complying with the EU AI Act (AIA) guidelines while developing and implementing AI systems will soon be mandatory within the EU. However, practitioners lack actionable instructions to operationalise ethics during AI systems development. A literature review of different ethical guidelines revealed inconsistencies in the principles addressed and the terminology used to describe them. Furthermore, requirements engineering (RE), which is identified to foster trustworthiness in the AI development process from the early stages was observed to be absent in a lot of frameworks that support the development of ethical and trustworthy AI. This incongruous phrasing combined with a lack of concrete development practices makes trustworthy AI development harder. To address this concern, we formulated a comparison table for the terminology used and the coverage of the ethical AI principles in major ethical AI guidelines. We then examined the applicability of ethical AI development frameworks for performing effective RE during the development of trustworthy AI systems. A tertiary review and meta-analysis of literature discussing ethical AI frameworks revealed their limitations when developing trustworthy AI. Based on our findings, we propose recommendations to address such limitations during the development of trustworthy AI.

  • Research Article
  • 10.47672/ajce.1879
The Legal and Political Implications of AI Bias: An International Comparative Study
  • Mar 16, 2024
  • American Journal of Computing and Engineering
  • Stephanie Ness + 3 more

Purpose: "The Legal and Political Implications of AI Bias: An International Comparative Study" extensively navigates the intricate terrain of AI governance, with a specific focus on the ethical challenges arising from bias in AI systems. The purpose of this study is to underscore the urgent need for robust regulatory frameworks to address issues of bias, discrimination, and fairness within the realm of AI technologies.
 Materials and Methods: The research methodology involved a comprehensive analysis of international perspectives on AI bias. This entailed examining existing literature, legal frameworks, and political dynamics surrounding AI governance in various countries. Comparative analysis was conducted to elucidate the diverse approaches adopted by different nations to tackle AI bias and unravel the corresponding legal and political consequences.
 Findings: The study highlighted the inherent risks associated with biased algorithms and stressed the paramount importance of proactively detecting and mitigating bias to prevent discrimination and promote fairness in AI systems. Additionally, it advocated for comprehensive measures such as risk management strategies, conformity assessments for high-risk AI applications, and the careful handling of sensitive data to identify and rectify biases that could lead to discriminatory outcomes.
 Implication to Theory, Practice and Policy: The study was informed by theories of ethical governance and legal frameworks in AI development and deployment. It was validated through the comparative analysis of international perspectives, which provided insights into the effectiveness of different regulatory approaches in addressing AI bias. Recommendations to practitioners include implementing risk management strategies, conducting conformity assessments for high-risk AI applications, and ensuring the careful handling of sensitive data to identify and rectify biases. Practitioners are urged to prioritize ethical considerations and advocate for responsible deployment practices to mitigate AI bias effectively. Recommendations to policymakers emphasize the need to prioritize ethical considerations and advocate for responsible deployment practices in AI governance. Policymakers are urged to develop robust regulatory frameworks that promote transparency, accountability, and inclusivity in AI development and deployment to build a more equitable and trustworthy AI ecosystem.
 In essence, the study provides crucial insights into the complex interplay between legal frameworks, political dynamics, and ethical considerations in addressing AI bias on a global scale. It paves the way for the establishment of fair and unbiased AI systems that benefit society as a whole.

  • Conference Article
  • 10.1109/istas52410.2021.9629180
Building trust for data sourcing with the disabled community to build robust AI systems
  • Oct 28, 2021
  • Monica Tsang

This research explores how to build trust between the disabled community and machine learning and AI developers, in order to encourage people with disabilities to contribute their data to build robust models that cater to a greater population. A conflicting reality exists: people with disabilities want to be accommodated in machine learning and AI systems, but they are afraid of sharing their disability status and data with AI developers. The disabled community is subjected to unfair treatment in their daily lives. Many individuals conceal their disability status to avoid being given inferior service or costly biased outcomes in healthcare, insurance premiums, and employment opportunities. When people with disabilities are not included in the data collection process, the resulting machine learning and AI systems will not cater to the needs and preferences of these minority groups. This research will examine the circumstances and privacy protection strategies under which people with disabilities will garner trust in contributing their data to develop inclusive AI technology solutions. The research will share the opinions of individuals from communities with sensory disabilities and mobility issues.

  • Research Article
  • 10.1515/icom-2024-0014
The European commitment to human-centered technology: the integral role of HCI in the EU AI Act’s success
  • Jul 15, 2024
  • i-com
  • André Calero Valdez + 5 more

The evolution of AI is set to profoundly reshape the future. The European Union, recognizing this impending prominence, has enacted the AI Act, regulating market access for AI-based systems. A salient feature of the Act is to guard democratic and humanistic values by focusing regulation on transparency, explainability, and the human ability to understand and control AI systems. Hereby, the EU AI Act does not merely specify technological requirements for AI systems. The EU issues a democratic call for human-centered AI systems and, in turn, an interdisciplinary research agenda for human-centered innovation in AI development. Without robust methods to assess AI systems and their effect on individuals and society, the EU AI Act may lead to repeating the mistakes of the General Data Protection Regulation of the EU and to rushed, chaotic, ad-hoc, and ambiguous implementation, causing more confusion than lending guidance. Moreover, determined research activities in Human-AI interaction will be pivotal for both regulatory compliance and the advancement of AI in a manner that is both ethical and effective. Such an approach will ensure that AI development aligns with human values and needs, fostering a technology landscape that is innovative, responsible, and an integral part of our society.

More from: Analytic Philosophy
  • Research Article
  • 10.1111/phib.12383
Contingent Grounding Physicalism
  • Nov 3, 2025
  • Analytic Philosophy
  • Alex Moran

  • Research Article
  • 10.1111/phib.12392
The Abductivist Interpretation of Frege's Conception of Logic
  • Oct 10, 2025
  • Analytic Philosophy
  • Junyeol Kim

  • Research Article
  • 10.1111/phib.12384
Self‐Visitation and the Metaphysics of Place, Causation, and Facts
  • Oct 2, 2025
  • Analytic Philosophy
  • Daniel S Murphy

  • Research Article
  • 10.1111/phib.12389
Dogmatism and Easy Knowledge: Avoiding the Dialectic?
  • Aug 11, 2025
  • Analytic Philosophy
  • Guido Tana

  • Research Article
  • 10.1111/phib.12380
AI Alignment Versus AI Ethical Treatment: 10 Challenges
  • Aug 11, 2025
  • Analytic Philosophy
  • Adam Bradley + 1 more

  • Research Article
  • 10.1111/phib.12387
Time, Sociality, Institutions: The Core Capacity Conjecture
  • Jul 23, 2025
  • Analytic Philosophy
  • Michael E Bratman

  • Research Article
  • 10.1111/phib.12377
Sufficient Reason Vindicated
  • Jul 21, 2025
  • Analytic Philosophy
  • Stephen Harrop

  • Research Article
  • 10.1111/phib.12388
Against Metaphysical Egalitarianism
  • Jul 21, 2025
  • Analytic Philosophy
  • Peter W Finocchiaro

  • Research Article
  • 10.1111/phib.12385
The Concept of Categoricity
  • Jul 20, 2025
  • Analytic Philosophy
  • Sungho Choi

  • Research Article
  • 10.1111/phib.12386
A Flaw in Sider's Vagueness Argument for Perdurantism: Endurantism Endures
  • Jul 4, 2025
  • Analytic Philosophy
  • Harold W Noonan
