Articles published on Central Claims
2163 Search results
Sort by Recency
- Research Article
- 10.1016/j.biosystems.2026.105755
- Mar 12, 2026
- Bio Systems
- Ian Todd
Coherence time in biological oscillator assemblies bounds the rate of state registration.
- Research Article
- 10.1177/14407833261423232
- Mar 10, 2026
- Journal of Sociology
- Yannick Rumpala
The rise of technocapitalism profoundly reshapes societal structures, sparking critical debate and demanding new analytical frameworks to grasp its multifaceted implications. This article argues that cyberpunk, far from mere speculative fiction, can function as a retrospective experimental laboratory for social theory. Its central claim is not that sociological theory simply enriches literary interpretation, but that literature – in this case, the foundational cyberpunk works of the 1980s – provides conceptual resources for apprehending technocapitalism in its current forms. Rather than focusing on detailed case studies of individual texts, the article develops a synoptic framework based on four recurring ‘refractions’ – existential, social, cultural and moral – that consistently structure the genre's portrayal of ‘high tech, low life’. By situating these fictional worlds alongside contemporary debates (from Kellner's work on technocapitalism to the effects of an era dominated by widespread uncertainty and systematic precarity, as analysed by Bauman), the article demonstrates how cyberpunk illuminates the paradoxes of technological advancement accompanied by social disintegration. Its contribution is twofold: first, to show how cyberpunk fiction remains heuristically valuable for understanding precariousness, marginality and moral deregulation under technocapitalism today; and second, to position literature as a resource for social theory, offering imaginative yet analytically illuminating insights into contemporary civilizational challenges.
- Research Article
- 10.1038/s41598-026-38164-9
- Mar 10, 2026
- Scientific reports
- Hsiao-Ya Tsai + 7 more
Several observational studies have indicated a greater risk of cardiovascular diseases in cold weather. Cold-inducible RNA-binding protein (CIRP) is highly conserved and upregulated in response to various cellular stressors. Although CIRP provides protective effects against cold exposure, extracellular CIRP (eCIRP) is a danger-associated molecular pattern that can trigger an inflammatory response. We aimed to explore the association between cold temperature and aortic dissection (AD) development, decipher the mechanistic links between cold stress and vascular cell injury, and provide a new therapeutic approach for cold stress-induced vascular emergencies. We used daily meteorological records obtained from the Taiwan Central Weather Administration and health insurance claims from the National Health Research Institute to examine the association between cold temperature and AD development. A murine AD model was established via treatment with the irreversible lysyl oxidase inhibitor 3-aminopropionitrile fumarate (BAPN). The mice were subjected to acute cold exposure (ACE) at 4 ± 1°C. We found that cold stress and exogenous CIRP induced vascular inflammation and the overexpression of matrix metalloproteinase-2 through the Toll-like receptor 4 (TLR4) pathway in endothelial cells in vitro. There was a greater risk of AD at cold temperatures. ACE increased the aortic arch diameter and circulating CIRP and interleukin-6 levels in BAPN-fed, AD-susceptible mice. Exogenous recombinant CIRP exacerbated AD in BAPN-treated mice. C23, a competitive CIRP antagonist, ameliorated ACE-exacerbated AD in BAPN-treated mice. In conclusion, cold temperatures are associated with AD development in a subtropical/tropical monsoon climate. Cold stress could exacerbate AD development through the eCIRP/TLR4 pathway. Blocking eCIRP prevented cold-induced exacerbation of AD in AD-susceptible mice.
- Research Article
- 10.55041/ijsrem57331
- Mar 9, 2026
- International Journal of Scientific Research in Engineering and Management
- Dr Pr Chandra Reddy + 1 more
Abstract: In Existentialism Is a Humanism, Jean‑Paul Sartre outlines the fundamental principles, the alpha and omega, of existentialist philosophy. He begins his iconic lecture with the central claim that existence precedes essence, asserting that human beings first exist and only later define their essence through their choices, attitudes, and actions. According to Sartre, individuals must therefore assume full responsibility for the decisions they make, leaving no room for predetermined values or moral codes imposed by society or religion. In this sense, existentialism emphasizes subjective freedom and moral autonomy. Sartre further argues that existentialism does not lead to despair or nihilism. On the contrary, it empowers individuals by making them the creators of their own destiny. By declaring that life has no inherent or pre-given meaning, Sartre does not suggest that life is futile; rather, he insists that each person must create meaning through authentic commitment and action. Freedom thus becomes the central principle of existentialism, yet this freedom is inseparable from immense responsibility. Every individual, through their choices, implicitly affirms values not only for themselves but for humanity as a whole. Responding to critics who accuse existentialism of promoting moral chaos, Sartre defends the deeply humanistic character of his philosophy. He emphasizes existentialism’s concern with human reality, personal freedom, and ethical responsibility. Rather than endorsing moral nihilism, Sartre argues that existentialism calls for a profound ethical awareness grounded in individual responsibility rather than externally imposed moral systems. Consequently, it encourages individuals to develop an authentic moral code through conscious choice and responsible action. 
Key words: Existentialism, Humanism, Existence precedes essence, Freedom, Responsibility, Anguish, Abandonment, Despair, Subjectivity, Authenticity, Self-creation, Individuality, Moral responsibility, Consciousness, Nothingness
- Research Article
- 10.63283/irj.04.01/03
- Mar 1, 2026
- AL-ĪMĀN Research Journal
- Zarqa Zulifqar + 1 more
This research paper explores the revolutionary yet controversial hypothesis proposed by Japanese researcher Dr. Masaru Emoto, which suggests that water possesses the ability to respond to human consciousness, intentions, words, and vibrational frequencies. Dr. Emoto’s central claim is that water molecules act as a "biological computer," capable of storing and reflecting environmental energy. By utilizing high-speed microscopic photography of frozen water crystals, he demonstrated that positive stimuli such as the words "love" and "gratitude" or classical music result in aesthetically symmetrical hexagonal structures. Conversely, negative stimuli like hateful words or heavy metal music produce chaotic and fragmented formations. This study adopts a dual-perspective approach to evaluate Emoto’s work. On one hand, supporters from the fields of metaphysics and alternative medicine utilize principles of quantum mechanics, such as the "Observer Effect" and the concept of "Hado" (vibrational energy), to validate the possibility of a mind-matter interaction. On the other hand, the mainstream scientific community critiqued his methodology, citing a lack of double-blind protocols and "selection bias," where specific images were allegedly chosen to support a predetermined narrative. Special attention is given to Emoto’s experiments on Zamzam water, where he observed a unique double-hexagonal crystalline structure that remained stable even under significant dilution. This finding is analyzed within a spiritual framework, suggesting a scientific basis for the effects of Quranic recitation and prayer. Through an examination of 30 scholarly and metaphysical references, this paper concludes that while Emoto’s theory remains outside the boundaries of classical molecular biology, it has pioneered a significant dialogue on "Water Memory."
It suggests that water may serve as a bridge between the physical and the metaphysical, urging a deeper exploration of the subtle energies that govern our reality.
- Research Article
- 10.3138/uhr-2024-0029
- Mar 1, 2026
- Urban History Review
- Frédéric Mercure Jolette
A colorful figure, Ernest Crépeault won five elections—two of them by acclamation—and ruled the city of Anjou from its founding in 1956 until 1973, a period during which the population grew from just over 2,000 people to more than 34,000. Ernest Crépeault’s regime consolidated power in a monopolistic manner by allying itself with a range of actors and actively discouraging political participation. This article examines the mechanisms through which political participation was curtailed and identifies the actors who benefited from this process. Revisiting the history of Anjou, it shows how a form of civic incapacity took shape—understood here as the erosion of citizen participation and civic vigilance in municipal affairs. This case study speaks directly to current debates in urban history on local democracy, municipal apoliticism, and political participation in suburban contexts. Drawing on existing scholarship, I argue that recently developed, rapidly growing suburbs constitute a particularly favourable context for depoliticization. However, I also contend that this depoliticization was neither natural nor inevitable, but rather the result of actions and discourses advanced by elites with a stake in political quiescence. The article’s central claim is that rapid suburban growth was decisive in enabling the formation of Ernest Crépeault’s political monopoly. The Crépeault regime mobilized a rhetoric rooted in the ideal of a peaceful and autonomous suburb in order to discourage political engagement, while exploiting rapid land development to secure alliances with actors both within and beyond Anjou, thereby sustaining its hold on power for seventeen years.
- Research Article
- 10.54103/2036-461x/30760
- Feb 27, 2026
- Cinéma & Cie. Film and Media Studies Journal
- Giorgio Avezzù + 2 more
In this article, based on evidence from TRAFFIC - Tracing American and Foreign Funds in Italian Cinema (1945-1962), a research project focused on the Italian case in which we have been involved over the last few years, we advance a set of methodological and operational hypotheses for the study of international relations in post-war cinema, developed in dialogue with the contributions collected in this issue. Our central claim is that a multipolar model of productive, distributive, and cultural relations was in place, in the years surrounding the Second World War, with timelines and durations that varied across geographical contexts and in relation to the specific challenges faced by different film industries. This model places under strain a fixed conception of centre-periphery relations, already questioned in transnational approaches to European cinema, as well as a monolithic understanding of the cultural and industrial dominance of American cinema, which nevertheless remained the key reference point in the global system. After summarizing the scientific discussion on transnational cinemas to determine which elements can be retained and what new tools are needed to outline and study what we consider a multipolar system, in the following section we delve into the Italian case, focusing particularly on the trade association ANICA. During the post-war period, ANICA was structured as an interface between different systems, managing current practices such as export and co-production instructions and film credit guarantees, as well as strategic actions such as defining agreements and conducting periodic revisions. Two specific examples relating to the definition of exchange and co-production agreements with the film industries of Mexico and Yugoslavia illustrate ANICA’s concrete functioning in relation to other national and foreign entities, including public and private stakeholders.
Finally, we reflect on the concept of borders as a key element in the relationship between film systems and infrastructures.
- Research Article
- 10.1080/02680939.2026.2634024
- Feb 22, 2026
- Journal of Education Policy
- Sandra Leaton Gray
ABSTRACT This article revisits David Hargreaves’ influential 1996 critique of educational research as fragmented, arguing that while fragmentation has diminished, a new form of institutional constraint has emerged. His central claim concerned the absence of a national machinery to coordinate research priorities, a gap that contributed to fragmentation and limited circulation between research and practice. Drawing on Bernstein’s concepts of symbolic control and the recontextualisation field, the paper analyses how justice-oriented research, centred on themes such as equity, decolonisation, and sustainability, has subsequently been absorbed into academic and policy infrastructures through genres marked by procedural defensibility and institutional alignment. I explore how citation economies, funding regimes, and audit logics reward a stylised criticality that is symbolically potent yet analytically constrained. Revisiting the legacy of initiatives such as the UK’s Teaching and Learning Research Programme (TLRP) and, later, England’s Education Endowment Foundation (EEF), and situating recent trends within broader regimes of epistemic governance, I argue for a reinvigoration of structural critique and epistemic pluralism in the constitution of educational research. Justice, I contend, must function as a site of inquiry rather than a genre of institutional performance.
- Research Article
- 10.46222/pharosjot.107.214
- Feb 13, 2026
- Pharos Journal of Theology
- Fernanda Putra Adela + 1 more
This article examines the distinct theological frameworks of Al-Farabi and Al-Ghazali, whose arguments significantly contributed to the evolution of Islamic philosophy. Employing a library-based qualitative comparative research lens, the study uncovers their differing conceptions of God, Al-Farabi’s rationalist God of Necessary Existence and Al-Ghazali’s voluntarist God of Absolute Will, and how each thinker conceived of creation, causality, knowledge, and eschatology. Al-Farabi, drawing from Aristotle and Neoplatonism, envisioned a cosmos rationally organised and emanating from the self-contemplation of God, where reason and revelation meet harmoniously as expressions of a singular truth. Al-Ghazali, following Ash’arite and Sufi teachings, argues for the divine’s unbounded power and freedom in creation ex nihilo, an occasionalist account of causation, and the dominance of revelation over reason; his critique of the falasifa in Tahafut al-Falasifah drew a new boundary between philosophical thought and Islamic orthodoxy. The central claim of this study is that the conflict between the two thinkers’ ideas is not simply a clash of doctrines, but a deeper conflict between two opposing metaphysical worldviews—rational necessity and divine volition. These worldviews profoundly shaped the course of Islamic philosophy, kalam, and mysticism, and continue to inform contemporary debates in Islamic philosophy on the integration of reason and revelation. This article contributes originally by reframing the al-Fārābī–al-Ghazālī debate as a paradigmatic metaphysical conflict between rational necessity and divine volition, rather than a merely doctrinal disagreement.
- Research Article
- 10.1177/17416590261417217
- Feb 11, 2026
- Crime, Media, Culture: An International Journal
- Andy Bennett
Since its initial publication in 1976, Resistance Through Rituals has become one of the most influential – and criticised – texts in the academic study of youth culture. Criticisms fall into two broad camps: first, the perceived shortcomings of the study itself, notably its lack of attention to issues of gender and ethnicity, its Anglo-centric focus and the absence of empirical data to support its central claims vis-à-vis the significance of subculture as a site of working-class youth resistance; second, criticisms of the concept of subculture itself and the proposal of new conceptual frameworks, notably scene, lifestyle and neo-tribe. Beyond such criticism, however, there are also other questions to consider in assessing the 50-year legacy of Resistance Through Rituals. How has the concept of youth culture itself changed over the last 50 years? How have such changes been influenced by factors such as post-industrialisation, digital technologies and shifts in understandings about age and age-appropriate behaviour? Taking such considerations into account, this article discusses the extent to which Resistance Through Rituals reads in the current context as a largely historical text and, conversely, what aspects of the work continue to have relevance (or perhaps revived relevance) for the study of youth in a contemporary context.
- Research Article
- 10.1111/nin.70083
- Feb 6, 2026
- Nursing inquiry
- Junguo Zhang
The rapid expansion of digital health technologies, including telemedicine, wearable devices, and AI-driven diagnostics, has transformed healthcare practices by mediating how care is delivered, perceived, and experienced. While ethical debates in digital health often focus on data governance, privacy, and equitable access, less attention has been paid to digitally mediated healthcare as an embodied practice, particularly in nursing contexts. This article develops a nursing-phenomenological framework for digital health ethics by focusing on how digital technologies mediate patients' embodied experiences of illness and care. Drawing on Merleau-Ponty's phenomenology of the lived body, the analysis examines how digitally mediated care reshapes bodily schema, perception, and presence in clinical encounters. Integrating Don Ihde's postphenomenology, the article further shows how technologies actively mediate perception, co-constitute embodiment, and generate multiple stabilities that shape clinical interpretation, patient engagement, and nursing practice. The central claim is that digital health ethics must attend not only to abstract bioethical principles but also to the embodied and technologically mediated conditions under which care is experienced and enacted. By clarifying the experiential and technological dimensions of digitally mediated healthcare, this approach offers a situated ethical lens with practical implications for nursing practice, technological design, and health policy.
- Research Article
- 10.5325/complitstudies.63.1.0122
- Feb 3, 2026
- Comparative Literature Studies
- Georg Wink
ABSTRACT While Bolsonaro’s presidency (2019–2022) has received much scholarly attention, almost nobody has addressed this exceptional period in a work of fiction. The writer and political scientist B. Kucinski is an exception. His novels A nova ordem (The New Order), published in June 2019, six months into Bolsonaro’s presidency, and O colapso da nova ordem (The Collapse of the New Order), published in August 2022, three months before Bolsonaro’s failed re-election, offer literary assessments of this fraught moment in Brazil’s history. This article examines how this fictionalization perceives and treats the phenomenon of the Brazilian Far-Right shift. It argues that perception is determined by the experience of twentieth-century forms of Latin American state authoritarianism (“military dictatorship”), while literary treatment follows the scheme of a classical totalitarian dystopia. This fictionalization ignores or dismisses central claims of the global New-Right’s ideology that are shared by the Brazilian movement. These are inspired by antimodernist ideologies and skepticism toward the state, as expressed, for example, in diagnoses like “cultural Marxist control” and “death culture of abortionism.” This discussion explores how Kucinski’s perspective affects and possibly limits the political function of dystopian fiction.
- Research Article
- 10.1037/amp0001676
- Feb 1, 2026
- The American psychologist
- Fabian Hutmacher
In our recent article, we argued that psychological concepts are inherently vague and that this vagueness cannot be circumvented. While the commentary by Ng and Litson (2026) raises important issues, it rests on a misinterpretation of our central claim. Here, we clarify our position by distinguishing vagueness from arbitrariness, imprecision, and ambiguity and explain why there is no contradiction between accepting the vagueness of psychological concepts and striving for greater conceptual clarity. (PsycInfo Database Record (c) 2026 APA, all rights reserved).
- Research Article
- 10.37547/tajssei/volume08issue02-13
- Feb 1, 2026
- The American Journal of Social Science and Education Innovations
- Platova Kseniia Sergeevna
This study examines a set of theoretical–methodological and practice-oriented foundations of correctional and pedagogical activity within the framework of inclusive education. A central claim is the principled necessity to reconsider and update educational approaches to teaching learners with hearing and speech impairments, taking into account the combined pressure of a systemic humanitarian crisis and the accelerated digital transformation of the educational environment. The aim of the research is to identify, organize, and conceptually systematize the most effective strategies of multidisciplinary support for persons with special educational needs, with an emphasis on reducing the severity of communicative limitations and removing factors that hinder full participation and interaction in the learning process. The methodological framework is built through an integrated design: a systematized review of current academic sources is combined with a comparative analysis of statistical datasets published by the Ministry of Education and Science of Ukraine, and supplemented by a case-study approach implemented on the materials of leading rehabilitation institutions in Dnipropetrovsk oblast. The results indicate a meaningful effectiveness of introducing Ukrainian Sign Language and assistive digital solutions into general education practices, which becomes visible in improved academic outcomes and in strengthened mechanisms of social adaptation. The concluding section formulates applied recommendations aimed at deepening the inclusive competence of teaching staff and at institutional expansion of cross-sector collaboration. The substantive findings and analytical generalizations carry high practical value for specialists in special education, speech and language practice, educational management, and for the research community working on inclusive didactics and social rehabilitation.
- Research Article
- 10.36131/cnfioritieditore20260111
- Feb 1, 2026
- Clinical neuropsychiatry
- Stephen W Porges
A recent critique advanced by Grossman et al. (2026, this issue) argues that Polyvagal Theory is scientifically untenable, asserting that its core claims regarding autonomic organization, respiratory sinus arrhythmia (RSA), and evolutionary framing are inconsistent with established neurophysiology. The present paper evaluates these assertions not by disputing individual claims in isolation, but by examining whether the critique engages Polyvagal Theory as it is articulated in the peer-reviewed literature and whether it meets the epistemic standards required for scientific refutation. Rather than responding sequentially to individual objections, the analysis clarifies the theory's conceptual foundations, scope, and explicit conditions of falsifiability as a systems-level, pathway-specific framework of autonomic state regulation. It demonstrates that the critique repeatedly evaluates a reconstructed proxy of the theory shaped by persistent category errors, including conflation of neuroanatomy with neurophysiology, reduction of theory to measurement, and substitution of phylogenetic continuity for functional organization. These structural misrepresentations propagate across methodological, neurophysiological, evolutionary, and developmental domains, precluding meaningful empirical adjudication. Across these domains, the paper shows that disagreements concerning RSA metrics, comparative anatomy, or evolutionary framing do not engage the theory's specified mechanisms or demonstrate conditions under which its predictions would fail. Where disagreement exists, it reflects differences in measurement preference, level of analysis, or theoretical framing rather than evidence against the theory's organizing principles. An appendix presents a historical audit showing that several central claims reiterated in the critique were identified in the literature nearly two decades earlier as mischaracterizations of Polyvagal Theory. 
Their continued repetition without substantive modification reflects a persistent failure of representational uptake rather than unresolved empirical controversy. It is concluded that the charge of scientific untenability does not apply to Polyvagal Theory as formulated, but instead reflects a critique that fails to engage the theory on its own terms. Productive scientific discourse requires representational fidelity, appropriate alignment of levels of analysis, and responsiveness to theoretical and empirical clarification ‒ criteria essential to theory evaluation but not met in the critique under review.
- Research Article
- 10.1007/s10677-025-10533-9
- Jan 21, 2026
- Ethical Theory and Moral Practice
- Adam John Andreotta
Abstract Debates about the nature of free will, and whether human beings have it, are some of the most famous and longstanding in philosophy. In this paper, I argue that the term “free will” does not serve these debates well. This is not to say that the debates themselves are unimportant; on the contrary, they are some of the most interesting and personal in philosophy. My central claim is that the term “free will” is associated with problematic connotations that complicate these debates, which concern important issues in ethics, metaphysics, epistemology and several other fields. Fortunately, the term is unnecessary: we already possess the vocabulary needed to discuss these important issues. Moreover, I argue that a set of replacement questions can be formulated to permit a more precise treatment of these issues.
- Research Article
- 10.1365/s43439-025-00167-z
- Jan 19, 2026
- International Cybersecurity Law Review
- Fabian Teichmann
Abstract Ransomware has matured into a transnational, profit-driven threat that exploits asymmetries in cyber resilience and legal frameworks. This article evaluates whether moving toward a legal prohibition on ransom payments is a viable and proportionate strategy. Using a comparative analysis of developments between 2023 and 2025 in the United States, United Kingdom, European Union and selected third countries (notably Australia), it maps emerging policy instruments—targeted payment bans, payment-preclearance/approval regimes, mandatory incident and payment reporting, and sanctions-based constraints—and assesses their interaction with existing international law, including the Budapest Convention and the draft UN Cybercrime Convention. The analysis highlights the deterrent rationale of “starving the business model” through reduced payment flows, while scrutinizing material risks: displacement effects across jurisdictions, potential under-reporting, operational harm to essential services, and disproportionate burdens on SMEs. It further addresses human-rights and due-process concerns (necessity, proportionality, narrowly tailored exemptions), conflicts of laws in multinational incidents, and the role of crypto-asset controls and coordinated law-enforcement disruption. The article proposes an incremental, internationally coordinated roadmap: (i) harmonized bans for governments and critical infrastructure; (ii) universal, time-bound reporting of incidents and payments; (iii) strengthened cross-border MLA/extradition and asset-freezing for ransomware proceeds; (iv) victim-support mechanisms (decryption sharing, recovery funding) to avoid perverse incentives; and (v) norms addressing safe havens. 
The central claim is that a payment ban can be effective only as part of a comprehensive framework that couples legal constraints with resilience, transparency, and multilateral enforcement; under those conditions, a converging international norm of refusing to pay cyber ransoms is both legally tenable and strategically advantageous.
- Research Article
- 10.1177/03010066251410899
- Jan 14, 2026
- Perception
- Keiyu Niikuni + 1 more
Previous research has demonstrated that words associated with brightness (e.g., "sun") elicit smaller pupil diameters than those related to darkness (e.g., "night"). The present study aimed to determine whether these language-induced pupillary responses are driven by the luminance of the mentally simulated content (referred to here as sensory interpretation) or by the conceptual brightness linked to the words' emotional valence (termed emotional interpretation). To address this question, we utilized the Japanese adjectives akarui and kurai, which can denote both luminance, as in the noun phrase akarui/kurai gamen ("bright/dark screen"), and emotional valence, as in akarui/kurai seikaku ("cheerful/gloomy personality"). Participants were presented with noun phrases composed of these adjectives and various nouns (akarui/kurai + noun). A significant main effect of the adjective indicated that phrases containing akarui yielded smaller pupil diameters than those containing kurai. Furthermore, although the interaction effect did not reach significance, the adjective effect was observed only when the adjectives conveyed luminance, not when they conveyed emotional valence. These findings suggest that sensory, rather than emotional, interpretation better explains language-induced changes in pupil size. The use of pupillometry as a measure of perceptual simulation offers more direct and compelling evidence in support of the central claim of embodied language theories: that during language comprehension, readers and listeners spontaneously generate sensorimotor simulations of the described content. Future studies are warranted to examine whether these findings extend to sentence- and discourse-level processing, as well as to simulations of information conveyed implicitly or indirectly through language.
- Research Article
- 10.52096/usbd.10.42.09
- Jan 12, 2026
- International Journal of Social Sciences
- Fikret Erkan
The increasing integration of artificial intelligence (AI)–based decision-support systems into judicial processes compels a fundamental re-examination of judicial impartiality and constitutional legitimacy. While existing scholarship has primarily addressed either human cognitive bias or algorithmic discrimination as separate phenomena, this article argues that judicial bias in the digital age must be understood as a structurally layered constitutional problem emerging from the interaction between human cognition and algorithmic authority. First, the study systematizes major types of cognitive bias affecting judicial reasoning, demonstrating that impartiality is not a natural attribute of adjudication but a fragile institutional ideal requiring continuous protection. Second, it analyzes algorithmic bias as a socio-technical phenomenon rooted in historical data inequalities, model architecture, and optimization objectives. Contrary to the widespread narrative of computational neutrality, algorithmic systems embed normative choices that may conflict with equality, due process, and non-discrimination principles. The article advances a central theoretical claim: the rise of algorithmic decision-support systems does not merely introduce new technical risks but transforms the source of judicial authority itself. When statistical accuracy begins to compete with legal justification as a legitimacy foundation, the rational-legal structure of adjudication is normatively destabilized. Statistical prediction cannot substitute constitutional reasoning. To address this transformation, the article proposes an original four-layered framework—the Erkan Constitutional Algorithmic Safeguard Model (ECASM). 
The model conceptualizes algorithmic systems not as neutral administrative tools but as constitutional risk-producing instruments subject to multi-layered oversight: normative compatibility review, transparency and explainability requirements, institutional accountability safeguards, and behavioral integrity controls. By integrating constitutional law, human rights doctrine, judicial theory, behavioral science, and algorithmic governance, the model provides a structured safeguard architecture against the delegation of judicial authority to statistical systems.
- Research Article
- 10.14573/altex.2601011
- Jan 1, 2026
- ALTEX
- Ronit Mohapatra + 1 more
Reporting standards have proliferated across biomedicine, yet incomplete methods reporting remains routine, less because the community doubts the value of transparency than because compliance checking is tedious, inconsistently enforced, and poorly integrated into everyday writing and review. As a sequel to the Good In Vitro Reporting Standards (GIVReSt) argument that better reporting is essential infrastructure, this article explores a pragmatic next step: translating standards from static checklists into interactive, always-on guidance. We describe the development of three specialized "compliance copilots" built as custom GPT-based assistants: one aligned with the emerging GIVReSt, one reflecting the established ToxRTool reliability framework, and one mapped to ARRIVE for animal studies. The tools are designed to point to specific text evidence, flag missing essential information, and provide actionable suggestions while the manuscript is being written. Early benchmarking against expert assessments suggests that this approach can approximate human judgments for many checklist items in a fraction of the time and with high consistency. We also highlight why "strict" versus "lenient" interpretations matter, and why these systems should be framed as decision-support, not decision-makers. The central claim is cultural, not technical: artificial intelligence (AI) will matter most when it makes rigorous reporting the path of least resistance, turning standards into routine practice rather than aspirational add-ons.