Text Classification Research Articles

Overview
6313 Articles

Published in last 50 years

Related Topics

  • Document Classification
  • Text Categorization
  • Keyword Extraction

Articles published on Text Classification

6199 search results, sorted by recency

Open-world semi-supervised relation extraction.

  • Neural Networks: The Official Journal of the International Neural Network Society
  • Published Jun 1, 2025
  • Diange Zhou + 3

TextBugger: an extended adversarial text attack on NLP-based text classification model

Adversarial inputs have recently raised serious security concerns for deep learning (DL) techniques. A central motivation for strengthening natural language processing (NLP) models is to learn such attacks and defend against adversarial text. Existing antagonistic attack techniques suffer from issues such as high error rates, and traditional prevention approaches cannot reliably secure data against harmful attacks; as a result, some attacks fail to expose further flaws in NLP models, motivating enhanced antagonistic mechanisms. This article introduces an extended adversarial text generation method, TextBugger. First, preprocessing steps such as stop word removal and tokenization are performed to remove noise from the text data. Then, several NLP models, including Bidirectional Encoder Representations from Transformers (BERT), robustly optimized BERT (RoBERTa), and XLNet, are analyzed for producing hostile texts. The simulation is carried out on the Python platform, and a publicly available text classification attack database is used for training. Evaluation measures including success rate, time consumption, positive predictive value (PPV), Kappa coefficient (KC), and F-measure are analyzed for the different TextBugger models. The overall success rates achieved with BERT, RoBERTa, and XLNet are about 98.6%, 99.7%, and 96.8%, respectively.

  • Indonesian Journal of Electrical Engineering and Computer Science
  • Published Jun 1, 2025
  • Sanjaikanth E Vadakkethil Somanathan Pillai + 4
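
The abstract gives no implementation details, so the following is only a minimal, self-contained sketch of the character-level "bugs" that TextBugger-style attacks apply to important words; the specific bug operations, the perturbation rate, and the example sentence are illustrative assumptions, not the authors' code.

```python
import random

def insert_bug(word):
    # insert a space inside the word so it tokenizes into two pieces
    i = random.randrange(1, len(word))
    return word[:i] + " " + word[i:]

def swap_bug(word):
    # swap two adjacent characters
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def sub_bug(word):
    # substitute characters with visually similar ones
    lookalike = {"o": "0", "l": "1", "a": "@", "e": "3", "i": "1"}
    return "".join(lookalike.get(c, c) for c in word)

def perturb(sentence, rate=0.3):
    """Apply a random bug to a fraction of the longer words in a sentence."""
    words = sentence.split()
    for idx, w in enumerate(words):
        if len(w) > 3 and random.random() < rate:
            words[idx] = random.choice([insert_bug, swap_bug, sub_bug])(w)
    return " ".join(words)

print(perturb("the service at this restaurant was absolutely terrible"))
```

A full attack would additionally query the victim model to choose which words to perturb and to check whether the prediction flips.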

Bayesian Q-learning in multi-objective reward model for homophobic and transphobic text classification in low-resource languages: A hypothesis testing framework in multi-objective setting

  • Natural Language Processing Journal
  • Published Jun 1, 2025
  • Vivek Suresh Raj + 3

A text dataset of fire door defects for pre-delivery inspections of apartments during the construction stage.

  • Data in Brief
  • Published Jun 1, 2025
  • Seunghyeon Wang + 4

Text classification of public online messages in civil aviation: A N-BM25 weighted word vectors method

  • Information Sciences
  • Published Jun 1, 2025
  • Sheng-Hua Xiong + 4

Enhancing cross-lingual text classification through linguistic and interpretability-guided attack strategies

  • Information Systems
  • Published Jun 1, 2025
  • Abdelmounaim Kerkri + 3

Analysis of Multimodal Social Media Data Utilizing VIT Base 16 and GPT-2 for Disaster Response

Multimedia systems such as social media platforms play a crucial role in disseminating vital information during calamities. This information is shared in various formats such as images, text, videos, and audio, so a system that can identify multimodal data and classify relevant information becomes important. This paper proposes a new classification method for multimodal data using advanced transformer models, the Vision Transformer and Generative Pre-trained Transformer 2 (GPT-2), for image and text classification, respectively. These models were combined using an ensemble model (a Random Forest classifier), achieving an accuracy of 84.66% on the multimodal data. Furthermore, the proposed model demonstrates higher prediction accuracy than traditional Convolutional Neural Network (CNN) models, which achieve 71.43%, exceeding them by 13.23 percentage points. A comparison with convolutional models is conducted to underscore the advantages of transformer models and to substantiate the necessity of the experiment. Our proposed classification model using the Vision Transformer and GPT-2, along with an ensemble model, can be replicated by researchers in disaster management, humanitarian aid organizations, and social media platforms looking to filter and prioritize information during emergencies.

  • Arabian Journal for Science and Engineering
  • Published May 31, 2025
  • Shilpa Gite + 8
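
As a rough illustration of the late-fusion step described above, the sketch below concatenates the two models' class-probability outputs and trains a Random Forest on them; the random placeholder arrays stand in for the actual ViT and GPT-2 outputs, which are not provided in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder per-sample class-probability vectors from the image (ViT) and
# text (GPT-2) models; random here so the sketch runs end to end.
n_samples, n_classes = 500, 2
rng = np.random.default_rng(0)
vit_probs = rng.random((n_samples, n_classes))
gpt2_probs = rng.random((n_samples, n_classes))
labels = rng.integers(0, n_classes, size=n_samples)

# Late fusion: concatenate both outputs into one feature vector per post and
# let the Random Forest learn how to combine them.
features = np.hstack([vit_probs, gpt2_probs])
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

ensemble = RandomForestClassifier(n_estimators=200, random_state=0)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```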

Deep Learning Architectures for Text Classification

Text classification is crucial in natural language processing applications such as sentiment analysis, topic tagging, and news categorization. This paper presents a comparative analysis of three deep learning architectures (LSTM, Bidirectional LSTM, and character-level Convolutional Neural Networks, Char-CNN) for the task of news categorization using the AG News dataset. The models were trained using a unified preprocessing pipeline, including tokenization, padding, and label encoding. Performance was evaluated based on classification accuracy, training time, and learning stability across epochs. The results show that the Bidirectional LSTM outperforms the standard LSTM in capturing long-range dependencies by leveraging both past and future context. The character-level CNN demonstrates robust performance by learning morphological patterns directly from raw text, making it resilient to misspellings and out-of-vocabulary words. The trade-offs between model complexity, training time, and interpretability have also been analyzed. This study offers practical insights into model selection for real-world NLP applications and highlights the importance of architectural choices in deep learning-based text classification.

  • International Journal of Innovative Science and Research Technology
  • Published May 31, 2025
  • Chitra Desai
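
For readers comparing these architectures, a minimal Bidirectional LSTM classifier in Keras might look like the sketch below; the vocabulary size, sequence length, and layer sizes are illustrative assumptions, and the AG News loading step is only indicated in comments.

```python
import tensorflow as tf

max_tokens, seq_len, n_classes = 20000, 128, 4

# Map raw strings to integer token sequences inside the model.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=max_tokens, output_sequence_length=seq_len)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,), dtype=tf.string),
    vectorize,
    tf.keras.layers.Embedding(max_tokens, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With the AG News training split loaded as `texts` and `labels`:
# vectorize.adapt(texts)
# model.fit(texts, labels, validation_split=0.1, epochs=5)
```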

Digital Signature Tool: Build an Application to Sign and Verify Documents Using Cryptographic Algorithms

Abstract: The rise of offensive content on social media, encompassing both abusive language and inappropriate images, poses a significant threat to individuals and communities, often resulting in bullying or emotional harm. To address this challenge, researchers have explored supervised approaches and curated datasets to enable automatic detection of such content. This study proposes a comprehensive model that integrates both text and image classification techniques. For text, the model incorporates a modular cleaning phase, tokenization, three embedding methods, and eight classifiers. For image detection, computer vision techniques such as convolutional neural networks (CNNs) are employed to identify harmful or offensive visual content. Experimental results on a Twitter dataset demonstrate promising outcomes, with AdaBoost, SVM, and MLP achieving the highest F1-scores using the popular TF-IDF embedding method for text, while pre-trained CNN models like ResNet and EfficientNet show high accuracy in identifying offensive images. These findings highlight the effectiveness of combining advanced NLP and computer vision techniques for detecting offensive content on social media.

  • International Journal for Research in Applied Science and Engineering Technology
  • Published May 31, 2025
  • Dr B Narsimha
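
A compact sketch of the text branch (TF-IDF features feeding an SVM-style classifier) is shown below; the toy tweets and labels are placeholders rather than the study's Twitter dataset, and the exact embedding and classifier settings used in the paper are not specified in the abstract.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

tweets = [
    "you are a wonderful person",
    "I will hurt you, watch out",
    "great game last night",
    "nobody likes you, get lost",
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = offensive

# TF-IDF over word unigrams and bigrams, then a linear SVM.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LinearSVC(),
)
clf.fit(tweets, labels)
print(clf.predict(["watch out, I will get you"]))
```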

Multi-dimensional Constraint-based Test Case Generation and Evaluation Framework for Large Language Models

Addressing the challenges of complex test case design and insufficient coverage in functional testing of large language models, this paper presents a multi-dimensional constraint-based test case generation framework. The framework defines constraint rules across four dimensions: syntactic correctness, semantic consistency, task relevance, and boundary conditions, employing reinforcement learning methods to optimize the test case generation process. Through the design of reward function-based generation strategies, the system can automatically produce high-quality functional test samples covering core tasks including text classification, sentiment analysis, and machine translation. Experimental results demonstrate that test cases generated by this method achieve a 42% improvement in functional coverage compared to random generation methods and a 28% increase in defect detection rate. Further ablation experiments validate the effectiveness of each dimensional constraint, providing a systematic solution for large language model quality assurance.

  • International Journal of Emerging Technologies and Advanced Applications
  • Published May 30, 2025
  • Xuebing Wang
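
To make the idea of multi-dimensional constraints concrete, the sketch below combines per-dimension scores into a single scalar reward; the four weights and scoring values are illustrative stand-ins, since the paper's actual constraint rules and reinforcement-learning setup are not given in the abstract.

```python
# Per-dimension weights for the four constraint families named in the abstract.
WEIGHTS = {"syntax": 0.25, "semantics": 0.25, "task": 0.3, "boundary": 0.2}

def reward(test_case, scores):
    """Combine per-dimension constraint scores (each in [0, 1]) into one scalar
    reward used to update the test-case generation policy."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

example = reward(
    "Classify the sentiment of: 'The battery life is dreadful.'",
    {"syntax": 1.0, "semantics": 0.9, "task": 0.8, "boundary": 0.5},
)
print(example)  # 0.815
```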

Krajšavar—an Algorithm for Recognizing English Abbreviations in Texts Related to Criminal Justice and Security

In this paper, we try to classify texts from the criminal justice and security field according to the classification for LSP (Language for Specific Purpose) texts prepared by Mikolič (2007) for the typology of tourism texts. Within that classification, we outline the current position held by the LSP field of criminal justice and security in Slovenia and the development of field-specific terminology. The classification of texts allows us to manually collect samples of English text types with respect to subcategories of criminal justice and security texts. From the texts obtained, we automatically extract abbreviations and expansions from the field of criminal justice and security. The scope of the paper is to provide insights into abbreviations from the field of criminal justice and security. Texts are filtered using an algorithm for the automatic recognition of abbreviations, Krajšavar, and a list of English abbreviations and expansions from the field of criminal justice and security is obtained and published in the Termania dictionary editing mask.

  • International Journal of Lexicography
  • Published May 28, 2025
  • Mojca Kompara Lukančič + 1
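
The abstract does not describe how Krajšavar works internally; as a point of reference, the sketch below shows the simplest pattern such an extractor could start from, pairing a capitalised long form with the abbreviation that follows it in parentheses. The regular expression and initial-matching check are assumptions for illustration only.

```python
import re

# Long form of 2-7 capitalised words followed by an all-caps abbreviation.
PATTERN = re.compile(r"((?:[A-Z][a-z]+\s+){1,6}[A-Z][a-z]+)\s+\(([A-Z]{2,})\)")

def extract_pairs(text):
    pairs = []
    for long_form, abbr in PATTERN.findall(text):
        # Keep the pair only if the abbreviation's letters match the initials
        # of the candidate long form's last len(abbr) words.
        words = long_form.split()[-len(abbr):]
        if [w[0].upper() for w in words] == list(abbr):
            pairs.append((abbr, " ".join(words)))
    return pairs

sample = ("The suspect was referred to the Criminal Justice System (CJS) "
          "after a report by the National Crime Agency (NCA).")
print(extract_pairs(sample))
```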

Legal text classification in Korean sexual offense cases: from traditional machine learning to large language models with XAI insights

  • Artificial Intelligence and Law
  • Published May 28, 2025
  • Jeongmin Lee

Telecom Fraud Recognition Based on Large Language Model Neuron Selection

In the realm of natural language processing (NLP), text classification constitutes a task of paramount significance for large language models (LLMs). Nevertheless, extant methodologies predominantly depend on the output generated by the final layer of LLMs, thereby neglecting the wealth of information encapsulated within neurons residing in intermediate layers. To surmount this shortcoming, we introduce LENS (Linear Exploration and Neuron Selection), an innovative technique designed to identify and sparsely integrate salient neurons from intermediate layers via a process of linear exploration. Subsequently, these neurons are transmitted to downstream modules dedicated to text classification. This strategy effectively mitigates noise originating from non-pertinent neurons, thereby enhancing both the accuracy and computational efficiency of the model. The detection of telecommunication fraud text represents a formidable challenge within NLP, primarily attributed to its increasingly covert nature and the inherent limitations of current detection algorithms. In an effort to tackle the challenges of data scarcity and suboptimal classification accuracy, we have developed the LENS-RMHR (Linear Exploration and Neuron Selection with RoBERTa, Multi-head Mechanism, and Residual Connections) model, which extends the LENS framework. By incorporating RoBERTa, a multi-head attention mechanism, and residual connections, the LENS-RMHR model augments the feature representation capabilities and improves training efficiency. Utilizing the CCL2023 telecommunications fraud dataset as a foundation, we have constructed an expanded dataset encompassing eight distinct categories that encapsulate a diverse array of fraud types. Furthermore, a dual-loss function has been employed to bolster the model’s performance in multi-class classification scenarios. Experimental results reveal that LENS-RMHR demonstrates superior performance across multiple benchmark datasets, underscoring its extensive potential for application in the domains of text classification and telecommunications fraud detection.

  • Mathematics
  • Published May 27, 2025
  • Lanlan Jiang + 6
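
The abstract describes taking neurons from intermediate layers rather than only the final layer; a rough sketch of that idea with a Hugging Face encoder is shown below. The model name, the choice of layer, and the magnitude-based top-k selection are illustrative assumptions; the paper's linear-exploration procedure for scoring neurons is not reproduced.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

inputs = tokenizer("Your account is frozen, transfer funds to verify.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

layer8 = outputs.hidden_states[8][:, 0, :]   # first-token vector from layer 8
scores = layer8.abs().squeeze(0)             # toy saliency: activation magnitude
top_idx = torch.topk(scores, k=64).indices   # keep 64 "salient" neurons
sparse_features = layer8[:, top_idx]         # features for a fraud classifier
print(sparse_features.shape)                 # torch.Size([1, 64])
```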

On the Significance of Graph Neural Networks With Pretrained Transformers in Content‐Based Recommender Systems for Academic Article Classification

Recommender systems are tools for interacting with large and complex information spaces by providing a personalised view of such spaces, prioritising items that are likely to be of interest to the user. They also serve as a significant tool in academic research, helping authors select the most appropriate journals for their articles. This paper presents a comprehensive study of various journal recommender systems, focusing on the synergy of graph neural networks (GNNs) with pretrained transformers for enhanced text classification. We propose a content-based journal recommender system that combines a pretrained Transformer with a Graph Attention Network (GAT), using title, abstract, and keywords as input data. The proposed architecture enhances text representation by forming graphs from the Transformer's hidden states and attention matrices, excluding padding tokens. Our findings show that this integration improves the accuracy of the journal recommendations and reduces the transformer oversmoothing problem, with RoBERTa outperforming BERT models. Excluding padding tokens from graph construction also reduces training time by 8%–15%. In addition, we offer a publicly available dataset comprising 830,978 articles.

  • Expert Systems
  • Published May 27, 2025
  • Jiayun Liu + 2
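
A minimal sketch of the graph-attention step, using PyTorch Geometric's GATConv over token nodes, is shown below; the random node features and edges stand in for the transformer hidden states and attention matrix described in the abstract, and the layer sizes are assumptions.

```python
import torch
from torch_geometric.nn import GATConv

num_tokens, hidden_dim, num_journals = 12, 768, 5
x = torch.randn(num_tokens, hidden_dim)             # one node per token
edge_index = torch.randint(0, num_tokens, (2, 40))  # edges, e.g. from attention

gat = GATConv(hidden_dim, 128, heads=4, concat=False)
classifier = torch.nn.Linear(128, num_journals)

node_repr = gat(x, edge_index)
doc_repr = node_repr.mean(dim=0)                    # pool tokens into a document vector
print(classifier(doc_repr).shape)                   # torch.Size([5])
```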

Khmer News Classification in Low-Resource Settings: A comparative Analysis of Embedding Method

Text classification in low-resource languages like Khmer remains challenging due to linguistic complexity, limited annotated data, and noise from real-world applications. This study addresses these challenges by systematically comparing text embedding techniques for Khmer news classification. We evaluate traditional methods (TF-IDF with SVM) against state-of-the-art multilingual transformers (XLM-RoBERTa, LaBSE) using a self-collected dataset of 7,344 Khmer news articles across six categories—political, economic, entertainment, sport, technology, and life. The dataset intentionally retains noise (e.g., mixed-language text, unstructured formatting) to reflect practical scenarios. To address Khmer's lack of word boundaries, we employ word segmentation via khmer-nltk for traditional models, while transformer models leverage their inherent subword tokenization. Experiments reveal that transformer-based embeddings achieve superior performance, with XLM-RoBERTa and LaBSE attaining F1 scores of 94.2% and 94.3%, respectively, outperforming TF-IDF (93.3%). However, the "life" category proves challenging across all models (F1: 85.5–88.1%), likely due to semantic overlap with other categories. Our findings underscore the effectiveness of transformer architectures in capturing contextual nuances for low-resource languages, even with noisy data. This work offers insights for NLP researchers and practitioners, emphasizing the need for domain-specific adaptations and expanded datasets to improve performance in underrepresented languages.

  • Journal on Information Technologies & Communications
  • Published May 26, 2025
  • Natt Korat + 2

Evaluating rule-based and generative data augmentation techniques for legal document classification

Automated text classification is a fundamental research topic within the legal domain as it is the foundation for building many intelligent legal solutions. There is a scarcity of publicly available legal training data and these classification algorithms struggle to perform in low data scenarios. Text augmentation techniques have been proposed to enhance classifiers through artificially synthesised training data. In this paper we present and evaluate a combination of rule-based and advanced generative text augmentation methods designed to create additional training data for the task of classification of legal contracts. We introduce a repurposed CUAD contract dataset, modified for the task of document level classification, and compare a deep learning distilBERT model with an optimised support vector machine baseline for useful comparison of shallow and deep strategies. The deep learning model significantly outperformed the shallow model on the full training data (F1-score of 0.9738 compared to 0.599). We achieved promising improvements when evaluating the combined augmentation techniques on three reduced datasets. Augmentation caused the F1-score performance to increase by 66.6%, 17.5% and 2.6% for the 25%, 50% and 75% reduced datasets respectively, compared to the non-augmented baseline. We discuss the benefits augmentation can bring to low data regimes and the need to extend augmentation techniques to preserve key terms in specialised domains such as law.

  • Knowledge and Information Systems
  • Published May 26, 2025
  • William Duffy + 7
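
The rule-based half of such an augmentation pipeline can be sketched with simple word-level edits, as below; the synonym table, edit operations, and example clause are illustrative assumptions, and the generative augmentation the paper also evaluates is not shown.

```python
import random

SYNONYMS = {
    "agreement": ["contract", "arrangement"],
    "terminate": ["end", "cancel"],
    "party": ["signatory"],
}

def synonym_replace(tokens):
    # swap known words for a synonym
    return [random.choice(SYNONYMS[t]) if t in SYNONYMS else t for t in tokens]

def random_swap(tokens):
    # exchange the positions of two random words
    i, j = random.sample(range(len(tokens)), 2)
    tokens = tokens[:]
    tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def augment(sentence, n_variants=3):
    tokens = sentence.lower().split()
    ops = [synonym_replace, random_swap]
    return [" ".join(random.choice(ops)(tokens)) for _ in range(n_variants)]

for variant in augment("Either party may terminate this agreement with notice"):
    print(variant)
```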

Optimizing n-gram lengths for cross-linguistic text classification: A comparative analysis of English and Arabic morphosyntactic structures

This paper investigates the impact of n-gram length on text classification in English and Arabic, two languages with different writing systems. The study aims to examine how language characteristics influence the optimal n-gram length for text classification. The English dataset comprises 4,450 articles categorized into business, technology, entertainment, sports, and politics, with 2,225 records used for training and 2,225 for testing. The Arabic dataset includes 5,000 randomly selected documents from a total of 111,728 documents. The findings indicate that for English text classification, 2-grams provide the best performance with a precision of 0.482, recall of 0.489, and F1 score of 0.472. In contrast, Arabic text classification achieves optimal performance with 6-grams, reaching an F1 score close to 0.85. These results highlight that language-dependent morphological and syntactic features can significantly affect the performance of n-gram-based models. This study provides valuable insights for enhancing language-sensitive text classification techniques, particularly for accurately and efficiently categorizing documents in different languages.

  • International Journal of Advanced and Applied Sciences
  • Published May 25, 2025
  • Boumedyen Shannaq
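
The n-gram length comparison can be reproduced in outline by sweeping the n-gram range and cross-validating each setting, as in the sketch below; the four toy documents stand in for the English and Arabic corpora, and character n-grams with TF-IDF are an assumption, since the abstract does not fully specify the feature pipeline.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

docs = ["stocks rallied after the earnings report",
        "the striker scored twice in the final",
        "central bank raises interest rates again",
        "the midfielder was injured before kickoff"]
labels = ["business", "sports", "business", "sports"]

for n in (2, 3, 4, 6):
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(n, n)),
        LogisticRegression(max_iter=1000),
    )
    score = cross_val_score(model, docs, labels, cv=2).mean()
    print(f"{n}-grams: mean accuracy {score:.2f}")
```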

Balanced Knowledge Transfer in MTTL-ClinicalBERT: A Symmetrical Multi-Task Learning Framework for Clinical Text Classification

Clinical text classification presents significant challenges in healthcare informatics due to inherent asymmetries in domain-specific terminology, knowledge distribution across specialties, and imbalanced data availability. We introduce MTTL-ClinicalBERT, a symmetrical multi-task transfer learning framework that harmonizes knowledge sharing across diverse medical specialties while maintaining balanced performance. Our approach addresses the fundamental problem of symmetry in knowledge transfer through three innovative components: (1) an adaptive knowledge distillation mechanism that creates symmetrical information flow between related medical domains while preventing negative transfer; (2) a bidirectional hierarchical attention architecture that establishes symmetry between local terminology analysis and global contextual understanding; and (3) a dynamic task-weighting strategy that maintains equilibrium in the learning process across asymmetrically distributed medical specialties. Extensive experiments on the MTSamples dataset demonstrate that our symmetrical approach consistently outperforms asymmetric baselines, achieving average improvements of 7.2% in accuracy and 6.8% in F1-score across five major specialties. The framework’s knowledge transfer patterns reveal a symmetric similarity matrix between specialties, with strongest bidirectional connections between cardiovascular/pulmonary and surgical domains (similarity score 0.83). Our model demonstrates remarkable stability and balance in low-resource scenarios, maintaining over 85% classification accuracy with only 30% of training data. The proposed framework not only advances clinical text classification through its symmetrical design but also provides valuable insights into balanced information sharing between different medical domains, with broader implications for symmetrical knowledge transfer in multi-domain machine learning systems.

  • Symmetry
  • Published May 25, 2025
  • Qun Zhang + 2

A Deep Learning Approach to Classify AI-Generated and Human-Written Texts

The rapid advancement of artificial intelligence (AI) has introduced new challenges, particularly in the generation of AI-written content that closely resembles human-authored text. This poses a significant risk for misinformation, digital fraud, and academic dishonesty. While large language models (LLM) have demonstrated impressive capabilities across various languages, there remains a critical gap in evaluating and detecting AI-generated content in under-resourced languages such as Turkish. To address this, our study investigates the effectiveness of long short-term memory (LSTM) networks—a computationally efficient and interpretable architecture—for distinguishing AI-generated Turkish texts produced by ChatGPT from human-written content. LSTM was selected due to its lower hardware requirements and its proven strength in sequential text classification, especially under limited computational resources. Four experiments were conducted, varying hyperparameters such as dropout rate, number of epochs, embedding size, and patch size. The model trained over 20 epochs achieved the best results, with a classification accuracy of 97.28% and an F1 score of 0.97 for both classes. The confusion matrix confirmed high precision, with only 19 misclassified instances out of 698. These findings highlight the potential of LSTM-based approaches for AI-generated text detection in the Turkish language context. This study not only contributes a practical method for Turkish NLP applications but also underlines the necessity of tailored AI detection tools for low-resource languages. Future work will focus on expanding the dataset, incorporating other architectures, and applying the model across different domains to enhance generalizability and robustness.

  • Applied Sciences
  • Published May 15, 2025
  • Ayla Kayabas + 3

Using Large Language Models for Advanced and Flexible Labelling of Protocol Deviations in Clinical Development.

As described in ICH E3 Q&A R1 (International Council for Harmonisation. E3: Structure and content of clinical study reports-questions and answers (R1). 6 July 2012. Available from: https://database.ich.org/sites/default/files/E3_Q%26As_R1_Q%26As.pdf ): "A protocol deviation (PD) is any change, divergence, or departure from the study design or procedures defined in the protocol". A problematic area in human subject protection is the wide divergence among institutions, sponsors, investigators and IRBs regarding the definition of and the procedures for reviewing PDs. Despite industry initiatives like TransCelerate's holistic approach [Galuchie et al. in Ther Innov Regul Sci 55:733-742, 2021], systematic trending and identification of impactful PDs remains limited. Traditional Natural Language Processing (NLP) methods are often cumbersome to implement, requiring extensive feature engineering and model tuning. However, the rise of Large Language Models (LLMs) has revolutionised text classification, enabling more accurate, nuanced, and context-aware solutions [Nguyen P. Test classification in the age of LLMs. 2024. Available from: https://blog.redsift.com/author/phong/ ]. An automated classification solution that enables efficient, flexible, and targeted PD classification is currently lacking. We developed a novel approach using a large language model (LLM), Meta Llama2 [Meta. Llama 2: Open source, free for research and commercial use. 2023. Available from: https://www.llama.com/llama2/ ] with a tailored prompt to classify free-text PDs from Roche's PD management system. The model outputs were analysed to identify trends and assess risks across clinical programs, supporting human decision-making. This method offers a generalisable framework for developing prompts and integrating data to address similar challenges in clinical development. This approach flagged over 80% of PDs potentially affecting disease progression assessment, enabling expert review. Compared to months of manual analysis, this automated method produced actionable insights in minutes. The solution also highlighted gaps in first-line controls, supporting process improvement and better accuracy in disease progression handling during trials.

  • Therapeutic Innovation & Regulatory Science
  • Published May 13, 2025
  • Min Zou + 2
  • Open Access
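
The labelling pattern described above (wrap each free-text protocol deviation in a tailored prompt and ask an LLM for a category) can be outlined as below; the category list, prompt wording, and call_llm stub are placeholders, not the authors' actual Llama 2 prompt or deployment.

```python
CATEGORIES = ["affects disease progression assessment",
              "informed consent issue",
              "dosing error",
              "other"]

PROMPT_TEMPLATE = (
    "You are reviewing clinical trial protocol deviations.\n"
    "Classify the deviation below into exactly one category from this list:\n"
    "{categories}\n\nDeviation: {deviation}\n\nCategory:"
)

def build_prompt(deviation_text):
    return PROMPT_TEMPLATE.format(
        categories="; ".join(CATEGORIES), deviation=deviation_text)

def classify(deviation_text, call_llm):
    """call_llm is any function that sends a prompt to an LLM and returns text."""
    answer = call_llm(build_prompt(deviation_text)).strip().lower()
    return next((c for c in CATEGORIES if c in answer), "other")

print(build_prompt("Tumour assessment scan performed 3 weeks late."))
```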
