
Related Topics

  • Artificial Intelligence Technology
  • Artificial Intelligence Learning
  • Artificial Intelligence

Articles published on Artificial Intelligence Systems

12,206 search results, sorted by recency
  • New
  • Research Article
  • 10.1080/1478601x.2026.2624489
The effect of prompt framing on AI-generated sentencing recommendations: a research note
  • Feb 5, 2026
  • Criminal Justice Studies
  • Gustavo S Mesch

ABSTRACT The rapid integration of artificial intelligence (AI) systems into societal domains, particularly legal and criminal justice decision-making, demands scrutiny of potential biases in outputs. AI tools assist predictive policing, risk assessment, sentencing recommendations, and legal research. This requires an examination of potential sources of bias in AI systems’ responses and recommendations. This study investigates prompt framing’s impact on AI sentencing recommendations and on perceptions of the threat offenders pose to the community. We systematically tested six leading AI models – Copilot, Gemini, GPT, Grok, Mistral, and Perplexity – using identical case scenarios of second-degree aggravated assault in a domestic violence context, one featuring a male offender and one a female offender. The findings reveal that prompt framing shapes AI outputs. Notably, we observed differential treatment based on offender gender, with female offenders consistently receiving lower sentencing recommendations and threat ratings despite the scenarios being factually identical. We discuss these findings in terms of the implications for the relevance of framing and the potential perpetuation of gender bias within AI systems.

  • New
  • Research Article
  • 10.2196/67717
Message Humanness as a Predictor of AI's Perception as Human: Secondary Data Analysis of the HeartBot Study.
  • Feb 3, 2026
  • JMIR AI
  • Haruno Suzuki + 5 more

Artificial intelligence (AI) chatbots have become prominent tools in health care to enhance health knowledge and promote healthy behaviors across diverse populations. However, factors influencing the perception of AI chatbots and human-AI interaction are largely unknown. This study aimed to identify interaction characteristics associated with the perception of an AI chatbot identity as a human versus an artificial agent, adjusting for sociodemographic status and previous chatbot use in a diverse sample of women. This study was a secondary analysis of data from the HeartBot trial in women aged 25 years or older who were recruited through social media from October 2023 to January 2024. The original goal of the HeartBot trial was to evaluate the change in awareness and knowledge of heart attack after interacting with a fully automated AI HeartBot chatbot. All participants interacted with HeartBot once. At the beginning of the conversation, the chatbot introduced itself as HeartBot. However, it did not explicitly indicate that participants would be interacting with an AI system. The perceived chatbot identity (human vs artificial agent), conversation length with HeartBot, message humanness, message effectiveness, and attitude toward AI were measured at the postchatbot survey. Multivariable logistic regression was conducted to explore factors predicting women's perception of a chatbot's identity as a human, adjusting for age, race or ethnicity, education, previous AI chatbot use, message humanness, message effectiveness, and attitude toward AI. Among 92 women (mean age 45.9, SD 11.9; range 26-70 y), the chatbot identity was correctly identified by two-thirds (n=61, 66%) of the sample, while one-third (n=31, 34%) misidentified the chatbot as a human. Over half (n=53, 58%) had previous AI chatbot experience. On average, participants interacted with the HeartBot for 13.0 (SD 7.8) minutes and entered 82.5 (SD 61.9) words. 
In multivariable analysis, only message humanness was significantly associated with the perception of chatbot identity as a human compared with an artificial agent (adjusted odds ratio 2.37, 95% CI 1.26-4.48; P=.007). To the best of our knowledge, this is the first study to explicitly ask participants whether they perceive an interaction as human or from a chatbot (HeartBot) in the health care field. This study's findings (role and importance of message humanness) provide new insights into designing chatbots. However, the current evidence remains preliminary. Future research is warranted to understand the relationship between chatbot identity, message humanness, and health outcomes in a larger-scale study.
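The adjusted odds ratio reported above comes from multivariable logistic regression, where exponentiating a fitted coefficient gives the multiplicative change in odds per one-unit increase in a predictor. A minimal sketch, back-deriving the coefficient from the reported aOR of 2.37; the baseline odds value is hypothetical and this is an illustration of the arithmetic, not the study's model:

```python
import math

# Illustration only: how an adjusted odds ratio (aOR) relates to predicted
# odds in logistic regression. The coefficient is back-derived from the
# reported aOR of 2.37 for message humanness; the baseline odds are made up.
beta_humanness = math.log(2.37)  # logistic coefficient implied by aOR = 2.37

def odds(base_odds, humanness_delta):
    """Odds of perceiving the chatbot as human after shifting the
    message-humanness score by `humanness_delta` units, with all other
    covariates held fixed."""
    return base_odds * math.exp(beta_humanness * humanness_delta)

base = 0.5  # hypothetical baseline odds (1:2)
# Each 1-unit increase in message humanness multiplies the odds by 2.37:
print(round(odds(base, 1) / base, 2))
```

In a real analysis the coefficient would be estimated from data (e.g., with statsmodels' `Logit`), and the aOR with its confidence interval obtained by exponentiating the coefficient and its CI bounds.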

  • New
  • Research Article
  • 10.62383/majelis.v3i1.1516
Ownership Status of Copyrighted Works Produced Through the Use of Artificial Intelligence (AI) ChatGPT, Reviewed Under Law No. 28 of 2014
  • Feb 3, 2026
  • Majelis: Jurnal Hukum Indonesia
  • Martha Tri Lestari

This study aims to examine the legal certainty of ownership of works produced by artificial intelligence (AI), specifically ChatGPT, from the perspective of Law Number 28 of 2014 concerning Copyright. The main focus of this research is to answer the question of whether works produced by AI can be copyrighted and to identify the legal challenges arising from the absence of explicit regulations in the Indonesian positive legal system. This study uses a normative juridical method with a statute approach and analysis of primary and supplementary legal materials. The study's findings indicate that, to date, there are no national regulations explicitly governing copyright recognition for works produced autonomously by AI systems. Based on the provisions of Article 1 number 3 of Law Number 28 of 2014, works must arise from human intellectual ability, therefore, AI products do not qualify as works potentially entitled to copyright protection. Therefore, legal reformulation through regulatory updates is needed to provide legal certainty and address challenges in the digital era, as well as prevent potential disputes in the national creative industry.

  • New
  • Research Article
  • 10.1108/jsit-07-2025-0306
The nature of agency: designing agentic systems using a biomimetic lens
  • Feb 3, 2026
  • Journal of Systems and Information Technology
  • Tegwen Malik + 5 more

Purpose This paper aims to explore how biomimetic principles can inform governance models for agentic artificial intelligence (AI) systems: autonomous, adaptive entities that challenge traditional oversight frameworks. It argues that nature-inspired governance offers a dynamic alternative to static, compliance-based models. Design/methodology/approach This study adopts a conceptual viewpoint approach. It synthesizes literature on AI governance, systems theory and biomimicry, applying thematic analysis to existing frameworks and mapping identified gaps to five natural principles: symmetry, fractals, cymatic feedback, self-organization and phase transitions. Findings Current governance frameworks lack mechanisms for managing emergent behaviors and distributed agency in agentic AI. The proposed biomimetic lens offers a conceptual scaffold for adaptive, decentralized governance aligned with ethical norms. Research limitations/implications No empirical validation is provided; future research should use simulation or design science to test biomimetic governance in real-world contexts. Practical implications This paper offers actionable guidance for policymakers and system designers seeking to embed adaptive, resilient governance mechanisms into agentic AI architectures. Originality/value Introduces “Biomimic AI” as a novel paradigm for governing agentic systems, extending systems theory and responsible AI discourse through nature-inspired design logic.

  • New
  • Research Article
  • 10.2196/69985
Explainable AI Approaches in Federated Learning: Systematic Review.
  • Feb 3, 2026
  • JMIR AI
  • Titus Tunduny + 1 more

Artificial intelligence (AI) has, in the recent past, experienced a rebirth with the growth of generative AI systems such as ChatGPT and Bard. These systems are trained with billions of parameters and have enabled widespread accessibility and understanding of AI among different user groups. Widespread adoption of AI has led to the need for understanding how machine learning (ML) models operate to build trust in them. An understanding of how these models generate their results remains a huge challenge that explainable AI seeks to solve. Federated learning (FL) grew out of the need to have privacy-preserving AI by having ML models that are decentralized but still share model parameters with a global model. This study sought to examine the extent of development of the explainable AI field within the FL environment in relation to the main contributions made, the types of FL, the sectors it is applied to, the models used, the methods applied by each study, and the databases from which sources are obtained. A systematic search in 8 electronic databases, namely, Web of Science Core Collection, Scopus, PubMed, ACM Digital Library, IEEE Xplore, Mendeley, BASE, and Google Scholar, was undertaken. A review of 26 studies revealed that research on explainable FL is steadily growing despite being concentrated in Europe and Asia. The key determinants of FL use were data privacy and limited training data. Horizontal FL remains the preferred approach for federated ML, whereas post hoc explainability techniques were preferred. There is potential for development of novel approaches and improvement of existing approaches in the explainable FL field, especially for critical areas. OSF Registries 10.17605/OSF.IO/Y85WA; https://osf.io/y85wa.
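The decentralized parameter sharing the abstract describes can be sketched as federated averaging, the core aggregation step of horizontal FL: clients train locally and share only model parameters, which a server combines into a global model. The client parameter vectors and dataset sizes below are hypothetical; production frameworks (e.g., Flower or TensorFlow Federated) add client sampling, secure aggregation, and more:

```python
# Minimal FedAvg-style aggregation sketch (hypothetical data, plain Python).
# Raw training data never leaves a client; only parameters are shared.

def federated_average(client_params, client_sizes):
    """Weighted average of per-client parameter vectors, weighted by each
    client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# Three hypothetical clients with locally trained 2-parameter models:
params = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sizes = [100, 100, 200]  # local dataset sizes
global_model = federated_average(params, sizes)
print([round(w, 2) for w in global_model])
```

Explainability methods then face an extra wrinkle in this setting: a post hoc explanation must be produced either per client (on private local data) or against the aggregated global model, which is part of what the reviewed studies examine.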

  • New
  • Research Article
  • 10.1097/js9.0000000000004585
TabPFN-driven ternary classification of stage IA lung adenocarcinoma subtypes using AI-derived histogram features: a retrospective multicenter cohort study.
  • Feb 3, 2026
  • International journal of surgery (London, England)
  • Guotian Pei + 9 more

Preoperative differentiation of precursor glandular lesions (PGL), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IAC) in stage IA lung adenocarcinoma (LUAD) is critical for surgical planning but remains challenging due to overlapping CT features and interobserver variability. While existing artificial intelligence (AI) models focus predominantly on binary classification with limited multicenter validation, this study developed and validated a ternary classification framework using pretrained TabPFN and traditional machine learning (ML) algorithms based on AI-derived histogram features, benchmarking against intraoperative frozen section analysis. This multicenter retrospective study utilized preoperative CT scans from three institutions between September 2014 and October 2023. Data were divided into training, internal validation, and external test sets. Histogram features (n = 26) were automatically extracted using a commercial AI system (InferRead CT Lung). TabPFN and five ML algorithms were trained with selected clinical and histogram features. Performance was evaluated by accuracy, macro-AUC, sensitivity, specificity, and Cohen's Kappa. Statistical comparisons included DeLong tests for AUC and chi-square tests for categorical variables. The cohort comprised 584 stage IA LUAD patients (mean age 57.9 ± 11.0 years; 386 female), divided into training/validation sets (n = 412, center 1) and external test sets (n = 114, center 2; n = 58, center 3). TabPFN achieved macro-AUC of 0.781-0.911 and accuracy of 67.2-78.9% across external test sets, outperforming other ML algorithms. Of note, TabPFN achieved overall better prediction accuracy than frozen section analysis on all test sets (internal: 92.3% vs 84.6%, P = 0.503; external 1: 87.5% vs 75%, P = 1.000; external 2: 67.2% vs 43.1%, P < 0.001). Subgroup analysis revealed superior performance for mGGN lesions (85%) on both external test sets.
TabPFN enables robust, generalizable ternary classification of LUAD subtypes, surpassing conventional ML and frozen section analysis. Its integration with automated histogram analysis offers a scalable solution for preoperative stratification of early-stage lung cancer.
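The macro-AUC metric cited above averages a one-vs-rest ROC AUC over the three classes (PGL, MIA, IAC). A minimal sketch using hypothetical predicted probabilities, not the study's data; in practice scikit-learn's `roc_auc_score` with `multi_class="ovr"` and `average="macro"` computes the same quantity:

```python
# Macro-AUC for a ternary classifier: per-class one-vs-rest AUC, averaged.
# All scores and labels below are hypothetical.

def auc_binary(labels, scores):
    """Probability that a random positive outranks a random negative
    (ties count half) -- the Mann-Whitney form of ROC AUC."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(y_true, prob_matrix, n_classes=3):
    """Average one-vs-rest AUC across classes 0..n_classes-1."""
    aucs = []
    for c in range(n_classes):
        binary = [1 if y == c else 0 for y in y_true]
        scores = [row[c] for row in prob_matrix]
        aucs.append(auc_binary(binary, scores))
    return sum(aucs) / n_classes

# Hypothetical class probabilities for 6 lesions (0=PGL, 1=MIA, 2=IAC):
y = [0, 0, 1, 1, 2, 2]
probs = [[0.8, 0.1, 0.1], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
         [0.5, 0.3, 0.2], [0.1, 0.2, 0.7], [0.2, 0.2, 0.6]]
print(round(macro_auc(y, probs), 3))
```

Macro averaging weights each class equally regardless of prevalence, which matters here because PGL, MIA, and IAC occur at different rates in a typical stage IA cohort.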

  • New
  • Research Article
  • 10.1038/s41467-026-69212-7
An interpretable AI system reduces false-positive MRI diagnoses by stratifying high-risk breast lesions.
  • Feb 2, 2026
  • Nature communications
  • Yanting Liang + 18 more

Breast cancer diagnosis using magnetic resonance imaging remains limited by high false-positive rates and substantial inter-reader variability, especially for lesions classified as Breast Imaging Reporting and Data System (BI-RADS) category 4, often leading to unnecessary biopsies. Here we show that the BI-RADS 4 Lesions Analysis System (BL4AS), an artificial intelligence system powered by foundation models and leveraging the rich spatiotemporal information of dynamic contrast-enhanced MRI, addresses these diagnostic challenges. Developed on a multicenter dataset of 2,803 lesions from 2,686 female patients, BL4AS demonstrates robust performance with areas under the curve of 0.892-0.930 and significantly outperforms radiologists in specificity (0.889 versus 0.491). BL4AS-assisted interpretation significantly improves diagnostic accuracy for both senior and junior radiologists, reducing inter-reader variability by 24.5% and decreasing false-positive rates by 27.3%. BL4AS further stratifies lesions into subcategories (4A, 4B, and 4C) for refined risk assessment, offering a practical tool for precision breast cancer management.

  • New
  • Research Article
  • 10.1080/08989621.2026.2623487
Emerging ethical duties in AI-mediated research: A case of data sovereignty in applying cross-national regulation
  • Feb 2, 2026
  • Accountability in Research
  • Ricardo Ayala + 1 more

ABSTRACT Background Artificial intelligence (AI) is reshaping research practices, yet its ethical implications remain under-examined, particularly in cross-national contexts. Objective To explore how AI integration into environmental science complicates informed consent, privacy, and data sovereignty, and to identify the ethical duties that follow for researchers. Case context Drawing on a Chilean case study that adopts the European Union's General Data Protection Regulation (GDPR) as a normative framework, we focus on everyday AI-mediated tools embedded in research infrastructures (e.g., transcription, cloud services, meeting assistants) and the tensions they introduce. Findings AI intensifies, rather than replaces, ethical accountability, especially where legal protections are weak or infrastructures unequal. Algorithmic opacity constrains researcher autonomy and undermines data sovereignty. Conclusions A governance approach grounded in data sovereignty and researcher autonomy is required to safeguard consent, privacy, and accountability in AI-mediated research. Implications for policy and practice We propose a revised model of ethical governance to support researchers working across fragmented regulations and opaque AI systems.

  • New
  • Research Article
  • 10.1016/j.cmpb.2025.109152
Explainable multimodal fusion for breast carcinoma diagnosis: A systematic review, open problems, and future directions.
  • Feb 1, 2026
  • Computer methods and programs in biomedicine
  • Mohammad Mehedi Hassan + 7 more


  • New
  • Research Article
  • 10.1016/j.legalmed.2025.102764
Artificial intelligence in healthcare: Proposal for a new medico-legal methodology in medical liability.
  • Feb 1, 2026
  • Legal medicine (Tokyo, Japan)
  • Rossana Cecchi + 6 more


  • New
  • Research Article
  • 10.1016/j.artmed.2025.103320
Artificial intelligence in depression diagnostics: A systematic review of methodologies and clinical applications.
  • Feb 1, 2026
  • Artificial intelligence in medicine
  • Mahdi Ghorbankhani + 1 more


  • New
  • Research Article
  • 10.1016/j.bios.2025.118152
Machine learning-driven prediction model of adult body weight based on integrated bioelectrical impedance and infrared imaging.
  • Feb 1, 2026
  • Biosensors & bioelectronics
  • Yinggang Zheng + 8 more


  • New
  • Research Article
  • 10.1016/j.cmpb.2025.109203
INTELLI-PVA: Informative sample annotation-based contrastive active learning for cross-domain patient-ventilator asynchrony detection.
  • Feb 1, 2026
  • Computer methods and programs in biomedicine
  • Lingwei Zhang + 9 more


  • New
  • Research Article
  • 10.1016/j.neubiorev.2025.106524
On biological and artificial consciousness: A case for biological computationalism.
  • Feb 1, 2026
  • Neuroscience and biobehavioral reviews
  • Borjan Milinkovic + 1 more


  • New
  • Research Article
  • 10.1016/j.cmpb.2025.109175
Integration of quantum artificial intelligence in disease diagnosis: A review of methods and applications.
  • Feb 1, 2026
  • Computer methods and programs in biomedicine
  • Shobha Sharma + 2 more


  • New
  • Research Article
  • Cited by 1
  • 10.1016/j.ijmedinf.2025.106141
Artificial intelligence in clinical trials: A comprehensive review of opportunities, challenges, and future directions.
  • Feb 1, 2026
  • International journal of medical informatics
  • David B Olawade + 5 more


  • New
  • Research Article
  • 10.55813/gaea/jessr/v6/n1/229
Legal liability risks in the use of chatbots and AI for financial and tax advisory services in Ecuador
  • Jan 31, 2026
  • Journal of Economic and Social Science Research
  • Norma Del Rocío Toledo-Castillo + 3 more

The use of chatbots and artificial intelligence systems in financial and tax advisory services has increased significantly in recent years, offering operational advantages while raising new legal challenges related to liability. This study aims to systematically analyze the legal liability risks arising from the use of these technologies in professional advisory contexts. The research follows a qualitative approach based on a documentary review and legal doctrinal analysis of academic publications with verifiable digital identifiers, focusing on civil liability, regulation of artificial intelligence, and financial and tax applications. The results show that automated advisory services increase legal risks when implemented without effective human supervision, particularly in situations involving economic harm, regulatory noncompliance, and lack of transparency in decision making processes. The findings also reveal a fragmentation of liability among system developers, implementing entities, and supervising professionals, as well as significant regulatory gaps in the Latin American and Ecuadorian context. The discussion highlights that traditional legal frameworks are insufficient to address the complexity of automated advisory services. The study concludes that the development of specific regulatory models is required, incorporating transparency, human oversight, and clear allocation of responsibilities to ensure legal certainty and user protection.

  • New
  • Research Article
  • 10.58806/ijiissh.2026.v3i1n12
Artificial Intelligence in Islamic Fatwa and Shariah Advisory: Opportunities, Limitations, and the Imperative of Human-in-the-Loop Shariah Governance
  • Jan 31, 2026
  • International Journal of innovative inventions in Social Science and Humanities
  • Mufti Masum Billah + 1 more

The adoption of Artificial Intelligence (AI) and large language models by Islamic financial institutions is accelerating rapidly. While this offers great opportunities, it has also raised serious concerns over Shariah compliance. This paper critically assesses the role of AI in Ifta and Shariah advisory services in Islamic banking and finance, with reference to Bangladesh's regulatory and ethical framework. Drawing on both classical and modern Usul al-Fiqh literature, this study confirms that the institution of ifta is founded on qualities essential to humanity: primarily taqwa (Allah-consciousness), the spiritual eye (firasah), the contextual understanding of urf and maslaha, and accountability before Allah, all of which are essential for achieving the aims of the shari’ah. The view is also upheld that AI is deficient and cannot possess these basic human qualities. The classical conditions for a mufti (being a Muslim, mukallaf, adil, thiqah, and competent in ijtihad) are not satisfied by AI systems. In addition, technical flaws, namely statistical hallucination, training data bias, incomplete digitization of classical sources, and the absence of real-time socio-cultural contextualization, make AI-generated fatwas, produced independently, unreliable. They can also be disastrous if used in sensitive financial contracts. Nonetheless, the use of human-in-the-loop systems in the Ifta (fatwa issuance) process can yield significant ancillary benefits, such as rapid retrieval of classical fatwas, comparative fiqh, drafting preliminary opinions, and improved Shariah audit efficiency.
The author proposes a “Shariah Governance Framework for Artificial Intelligence (AI) Integration” for the Islamic financial institutions (IFIs) of Bangladesh, which mandates (a) human-mufti oversight at the final stage of decision-making by AI software, (b) regular ethical and bias audits, (c) mandatory disclosure when AI tools are used, (d) a national “AI-in-Ifta Oversight Committee” to be formed and operated under the Central Shariah Board (CSB), and (e) continuous AI-literacy training for Shariah advisors without undermining their original capacities of ijtihad. The study takes a step toward a balanced use of technology by proposing an “augmented ifta” model rather than an “automated ifta” model, which aligns with the maqasid al-Shariah. Furthermore, the study contributes directly to the conference themes, namely AI & Fintech Innovations, legal and regulatory innovations, and the ethical foundations of sustainable finance.

  • New
  • Research Article
  • 10.1186/s12909-026-08707-9
Health sciences students' attitudes toward artificial intelligence: predictors of ethical awareness, clinical decision-making, and public health perceptions-a cross-sectional study.
  • Jan 31, 2026
  • BMC medical education
  • Cihan Unal + 1 more

This study investigates health sciences students' attitudes toward artificial intelligence (AI) and the implications for ethical awareness, clinical decision-making, and public health. A cross-sectional survey was conducted between April 27 and May 15, 2025, with 668 students from five departments at Gümüşhane University, employing the validated Artificial Intelligence Attitude Scale, which measures benefits, risks, and use, alongside 12 binary-response items assessing ethical, clinical, and public health judgments. Descriptive statistics, t-tests, ANOVA, and logistic regression analyses were applied. Findings indicate that students perceive AI as highly beneficial (M = 4.05) but also associate it with notable risks (M = 2.52; where lower scores indicate a higher level of perceived risk due to reverse coding). Logistic regression analyses revealed that risk perception (reverse-coded; higher scores indicating lower perceived risk) was the most consistent predictor across all dimensions. Specifically, students with lower perceived risk were significantly more likely to reject concerns regarding patient privacy (OR = 2.55, 95% CI [2.03-3.21], p < 0.001), dismiss the idea that relying on AI instead of human expertise is problematic (OR = 1.57, 95% CI [1.25-1.96], p < 0.001), and reject the notion that AI systems may harm public health (OR = 2.52, 95% CI [1.98-3.20], p < 0.001). While participants endorsed AI's potential in enhancing patient safety, chronic disease management, and preventive care, they expressed significant concerns about privacy, legal responsibility, and a potential weakening of patient-clinician communication. Gender, academic discipline, and prior AI use further differentiated attitudes. The results highlight a dual perception of AI as both an opportunity and a threat, emphasizing that successful integration in healthcare requires not only technical competence but also ethical, legal, and communicative safeguards.

  • New
  • Research Article
  • 10.31436/jop.v6i1.463
Role of Artificial Intelligence and Real-Time Clinical Decision Support System in Enhancing Antimicrobial Stewardship for Pneumonia Management: A Scoping Review
  • Jan 31, 2026
  • Journal of Pharmacy
  • Muhammad Jawad Hassan + 4 more

Antimicrobial resistance (AMR) is a major public health challenge globally, particularly in pneumonia, where inappropriate antibiotic use is common, resulting in increased morbidity and mortality. Artificial intelligence (AI) and clinical decision support systems (CDSS) have emerged as key tools to enhance antimicrobial stewardship (AMS) practices and reduce AMR. This scoping review aims to present and map the current AI and real-time CDSS applications in AMS for pneumonia patients, focusing on the types used and associated outcomes. This scoping review was conducted according to the Arksey and O’Malley methodological framework and reported according to the PRISMA-ScR checklist. Databases, including PubMed, CINAHL, EMBASE, and Scopus, were searched between April and August 2025. Original studies published in English between 2015 and 2025 were included. Out of 505 identified articles, 11 eligible studies were analysed. The findings showed that AI and CDSS tools, when integrated with machine learning (ML) algorithms and large databases, enhance diagnostic accuracy, optimise antibiotic use, improve pathogen identification, enhance AMR detection, promote guideline adherence, and support treatment-related decisions, thereby reducing mortality, healthcare costs, and the overuse of broad-spectrum antibiotics. However, integrating these technologies into clinical workflows remains a challenge due to limited research in low- and middle-income countries, data quality issues, and associated ethical concerns. AI and CDSS are promising technologies to enhance AMS, especially in pneumonia, with improved patient outcomes. Future research to validate these technologies in diverse settings, while addressing barriers to their implementation and ethical concerns, is needed to enhance AMS practices and reduce AMR globally.


Copyright 2026 Cactus Communications. All rights reserved.
