Sentence Semantic Similarity Research Articles

Overview: 34 articles published in the last 50 years.

Related Topics

  • Sentence Similarity
  • Semantic Similarity
  • Text Similarity
  • Similar Words
  • Semantic Text

Articles published on Sentence Semantic Similarity

34 search results, sorted by recency
Design of judicial public opinion supervision and intelligent decision-making model based on Bi-LSTM.

Fuzzy preference modeling in intelligent decision support systems aims to improve the efficiency and accuracy of decision-making processes by incorporating fuzzy logic and preference modeling techniques. While network public opinion (NPO) has the potential to drive judicial reform and progress, it also poses challenges to the independence of the judiciary due to the negative impact of malicious public opinion. To tackle this issue within the context of intelligent decision support systems, this study provides an insightful overview of current NPO monitoring technologies. Recognizing the complexities associated with handling large-scale NPO data and mitigating significant interference, a novel judicial domain NPO monitoring model is proposed, which centers around semantic feature analysis. This model takes into account time series characteristics, binary semantic fitting, and public sentiment intensity. Notably, it leverages a bidirectional long short-term memory (Bi-LSTM) network (S-Bi-LSTM) to construct a judicial domain semantic similarity calculation model. The semantic similarity values between sentences are obtained through the utilization of a fully connected layer. Empirical evaluations demonstrate the remarkable performance of the proposed model, achieving an accuracy rate of 85.9% and an F1 value of 87.1 on the test set, surpassing existing sentence semantic similarity models. Ultimately, the proposed model significantly enhances the monitoring capabilities of judicial authorities over NPO, thereby alleviating the burden on public relations faced by judicial institutions and fostering a more equitable execution of judicial power.

  • PeerJ. Computer science
  • Nov 13, 2024
  • Heng Guo
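To make the core modeling idea concrete, the sketch below shows a generic siamese Bi-LSTM similarity scorer in PyTorch: two sentences are encoded by a shared bidirectional LSTM, and a fully connected layer maps the pair representation to a similarity score. This is a minimal illustration rather than the authors' S-Bi-LSTM; the vocabulary size, dimensions, and mean-pooling choice are assumptions.

```python
# Minimal siamese Bi-LSTM sentence-similarity sketch (illustrative only, not the
# paper's S-Bi-LSTM). Hyperparameters and the toy inputs are assumptions.
import torch
import torch.nn as nn

class SiameseBiLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Fully connected layer maps the concatenated pair encoding to a score.
        self.fc = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def encode(self, ids):
        out, _ = self.bilstm(self.embed(ids))   # (batch, seq, 2 * hidden)
        return out.mean(dim=1)                  # mean-pool over time steps

    def forward(self, ids_a, ids_b):
        a, b = self.encode(ids_a), self.encode(ids_b)
        return self.fc(torch.cat([a, b], dim=-1)).squeeze(-1)

model = SiameseBiLSTM(vocab_size=10000)
s1 = torch.randint(1, 10000, (2, 12))   # two toy sentence pairs, length 12
s2 = torch.randint(1, 10000, (2, 12))
print(model(s1, s2))                    # similarity scores in [0, 1]
```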

Scan pattern similarity predicts the semantic similarity of sentences across languages above and beyond their syntactic structures.

  • Journal of Vision
  • Sep 15, 2024
  • Moreno I Coco + 3
Open Access

A BERT-GRU Model for Measuring the Similarity of Arabic Text

Semantic Textual Similarity (STS) aims to assess the semantic similarity between two pieces of text. STS is a challenging task in natural language processing, and various approaches have been proposed for high-resource languages such as English. In this paper, we are concerned with STS in low-resource languages such as Arabic. A baseline approach for STS is based on vector embedding of the input text and the application of a similarity metric in the embedding space. In this contribution, we propose a cross-encoder neural network (Cross-BERT-GRU) to handle the semantic similarity of Arabic sentences that benefits from both the strong contextual understanding of BERT and the sequential modeling capabilities of GRU. The architecture begins by inputting the BERT word embeddings for each word into a GRU cell to model long-term dependencies. Then, max pooling and average pooling are applied to the hidden outputs of the GRU cell, serving as the sentence-pair encoder. Finally, a softmax layer is utilized to predict the degree of similarity. The experimental results show a Spearman correlation coefficient of around 0.9 and that Cross-BERT-GRU outperforms the other BERT models in predicting the semantic textual similarity of Arabic sentences. The results also indicate that performance improves when data augmentation techniques are integrated.

  • JUCS - Journal of Universal Computer Science
  • Jun 28, 2024
  • Rakia Saidi + 2
Open Access
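A rough sketch of the cross-encoder idea described above (BERT token states fed through a GRU, then max- and average-pooled before a softmax layer) is shown below. It uses a multilingual BERT checkpoint as a stand-in; the actual Arabic model, label set, and layer sizes used by the authors are assumptions here.

```python
# Rough sketch in the spirit of Cross-BERT-GRU: BERT token states pass through a
# GRU, are max- and average-pooled, and classified with a softmax layer. The
# multilingual checkpoint, 6-way label set, and sizes are assumptions, not the
# authors' Arabic setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

CHECKPOINT = "bert-base-multilingual-cased"   # stand-in; an Arabic BERT would be used

class CrossBertGru(nn.Module):
    def __init__(self, n_labels=6):
        super().__init__()
        self.bert = AutoModel.from_pretrained(CHECKPOINT)
        hidden = self.bert.config.hidden_size
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_labels)   # max-pool + avg-pool concat

    def forward(self, enc):
        token_states = self.bert(**enc).last_hidden_state   # (B, T, H)
        gru_out, _ = self.gru(token_states)                  # (B, T, H)
        pooled = torch.cat([gru_out.max(dim=1).values,
                            gru_out.mean(dim=1)], dim=-1)    # sentence-pair encoding
        return self.classifier(pooled).softmax(dim=-1)       # similarity "degrees"

tok = AutoTokenizer.from_pretrained(CHECKPOINT)
enc = tok("first sentence of the pair", "second sentence of the pair",
          return_tensors="pt")                               # cross-encoder: one joint input
print(CrossBertGru()(enc).shape)                             # torch.Size([1, 6])
```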

Editorial

Dear Readers, It gives me great pleasure to announce the sixth regular issue of 2024. In this issue, 6 papers by 19 authors from 7 countries (Brazil, Chile, France, India, Saudi Arabia, Tunisia, and Turkey) cover various topical aspects of computer science. In an ongoing effort to further strengthen our journal, I would like to expand the editorial board: if you are a tenured associate professor or above with a strong publication record, you are welcome to apply to join our editorial board. We are also interested in high-quality proposals for special issues on new topics and trends. As always, I would like to thank all authors for their sound research, and the editorial board and our guest reviewers for their extremely valuable review effort and suggestions for improvement. These contributions, together with the generous support of the consortium members, help to maintain the quality of our journal. In this regular issue, I am very pleased to introduce the following 6 accepted articles: George Marsicano, Edna Dias Canedo, Glauco V. Pedrosa, Cristiane S. Ramos, and Rejane M. da C. Figueiredo from Brazil study the digital transformation of public services in a startup-based environment, drawing on 23 focus groups with 175 participants in total. Mauricio Solar and Pablo Aguirre from Chile discuss their research on 3D chest CT processing, applying a ResNet-50 model to which a new dimension of information, namely a simple autoencoder, has been added. In a collaborative work between researchers from Tunisia and France, Rakia Saidi, Fethi Jarray, and Didier Schwab propose in their article a cross-encoder neural network (Cross-BERT-GRU) that benefits from both the strong contextual understanding of BERT and the sequential modeling capabilities of GRU to deal with the semantic similarity of Arabic sentences. Also in a collaboration between institutions from Tunisia and Saudi Arabia, Nozha Jlidi, Sameh Kouni, Olfa Jemai, and Tahani Bouchrika present their research on MediaPipe with GNN for human activity recognition. G. V. Vidya Lakshmi and S. Gopikrishnan from India look into missing-value research for the IoT domain and in particular present the IMD-MP technique, which improves imputation accuracy for big data analysis in IoT applications based on spatial-temporal correlations. Last but not least, F. Didem Alay, Nagehan İlhan, and M. Tahir Güllüoğlu address in their article a comparative study of data mining methods for solar radiation and temperature forecasting models. Enjoy reading! Cordially, Christian Gütl, Managing Editor, Graz University of Technology, Graz, Austria

  • JUCS - Journal of Universal Computer Science
  • Jun 28, 2024
  • Christian Gütl
Open Access

What Do Self-Supervised Speech Models Know About Words?

Many self-supervised speech models (S3Ms) have been introduced over the last few years, improving performance and data efficiency on various speech tasks. However, these empirical successes alone do not give a complete picture of what is learned during pre-training. Recent work has begun analyzing how S3Ms encode certain properties, such as phonetic and speaker information, but we still lack a proper understanding of knowledge encoded at the word level and beyond. In this work, we use lightweight analysis methods to study segment-level linguistic properties (word identity, boundaries, pronunciation, syntactic features, and semantic features) encoded in S3Ms. We present a comparative study of layer-wise representations from ten S3Ms and find that (i) the frame-level representations within each word segment are not all equally informative, and (ii) the pre-training objective and model size heavily influence the accessibility and distribution of linguistic information across layers. We also find that on several tasks (word discrimination, word segmentation, and semantic sentence similarity), S3Ms trained with visual grounding outperform their speech-only counterparts. Finally, our task-based analyses demonstrate improved performance on word segmentation and acoustic word discrimination while using simpler methods than prior work.

  • Transactions of the Association for Computational Linguistics
  • Apr 12, 2024
  • Ankita Pasad + 3
Open Access
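The "lightweight analysis methods" mentioned in this abstract are essentially probing: pool the frame-level features inside a word segment and fit a small supervised model on top. The sketch below only shows the shape of such an experiment, with random arrays standing in for real S3M activations, so the printed accuracy is meaningless; it illustrates the pooling-plus-linear-probe recipe, not the paper's actual analyses.

```python
# Toy illustration of a lightweight linear probe on pooled frame-level features.
# The random "S3M features" stand in for real model activations and carry no
# linguistic content; only the pipeline shape is meaningful.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, frames_per_word, dim = 500, 20, 64
frame_feats = rng.normal(size=(n_words, frames_per_word, dim))  # fake activations
labels = rng.integers(0, 10, size=n_words)                      # fake word identities

segment_feats = frame_feats.mean(axis=1)   # mean-pool frames within each word segment
X_tr, X_te, y_tr, y_te = train_test_split(segment_feats, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))   # ~chance here, since data is random
```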

A Comparative Analysis of Japanese Learners’ Translation Bias Using Neurosemantic Analysis

In today’s increasingly frequent cultural exchanges between China and Japan, accurate and error-free Japanese translation has become an inevitable choice for cross-cultural communication. In this paper, based on a twin neural network and an attention mechanism, a BiLSTM model is combined with a sentence semantic similarity matching algorithm to construct a semantic similarity model for Japanese translation bias. The Japanese corpus data were collected and preprocessed with Python, and the Japanese translation corpus was searched and counted using the Wordsmith and AntConc tools. The Japanese learners’ translation bias was analyzed comparatively in several aspects, such as end-of-sentence modal expressions, consecutive translations, and word frequency effects. The results show that the difference in the frequency distribution of Japanese learners’ modal expressions is only 4.66% compared with that of native speakers of Japanese, yet the difference between the two is significant at the 1% level, and the difference in the frequency of Japanese learners’ use of the modal expression “yes” is 56 sentences per 10,000 sentences. The frequency of Japanese learners’ use of successive expressions was 30.1 percentage points higher than that of native speakers. The neural semantic analysis method combined with the Japanese translation corpus can clarify the translation bias of Japanese learners in the process of Japanese translation, which can provide a reference for improving the translation quality of Japanese learners.

  • Applied Mathematics and Nonlinear Sciences
  • Jan 1, 2024
  • Zheng Cao
Open Access
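The frequency comparisons reported here (occurrences per 10,000 sentences, significance at the 1% level) follow a standard corpus-linguistics recipe, sketched below with invented counts and a chi-square test; the actual corpora, counts, and statistical test used in the paper are not reproduced.

```python
# Illustrative corpus-frequency comparison with a chi-square test, mirroring the
# "per 10,000 sentences" normalization in the abstract; all counts are invented.
from scipy.stats import chi2_contingency

learner = {"modal_hits": 356, "sentences": 25_000}   # hypothetical learner corpus counts
native  = {"modal_hits": 300, "sentences": 25_000}   # hypothetical native corpus counts

per_10k = lambda c: 10_000 * c["modal_hits"] / c["sentences"]
print("learner per 10k:", per_10k(learner), " native per 10k:", per_10k(native))

table = [[learner["modal_hits"], learner["sentences"] - learner["modal_hits"]],
         [native["modal_hits"],  native["sentences"]  - native["modal_hits"]]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}  (significant at the 1% level if p < 0.01)")
```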

Summarization of Software Bug Report based on Sentence Semantic Similarity (SSBRSSS) Technique

In this work, the similarity between sentences of a software bug report is computed. For this, two methods are utilized: Latent Semantic Analysis and TextRank. Latent Semantic Analysis is used to compute the semantic similarity between sentences of bug reports, which infers deeper and hidden relations between words. From this, pairs of sentences with semantic similarity above a set threshold are identified, and only one sentence from each pair is retained. The remaining sentences are passed into the TextRank algorithm, and sentences with high similarity are further selected to generate a coherent summary. The proposed approach is evaluated on a newly constructed Apache Project Bug Report Corpus and an existing Bug Report Corpus. The proposed approach is also compared with baseline approaches that mainly focus on lexical similarity. When evaluated on the Apache Project Bug Report Corpus, the results attain average values of 80%, 72.57%, 76.05% and 76.57% in terms of precision, recall, F-score and pyramid precision, respectively.

  • Procedia Computer Science
  • Jan 1, 2024
  • Shubhra Goyal + 1
Open Access
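The two-stage recipe described in this abstract (LSA similarity for near-duplicate filtering, then TextRank over a sentence-similarity graph) can be sketched with off-the-shelf components as below; the threshold, component count, and the toy "bug report" sentences are assumptions rather than the SSBRSSS settings.

```python
# Rough LSA + TextRank pipeline sketch: TF-IDF + truncated SVD gives latent
# semantic vectors, near-duplicates are dropped by a similarity threshold, and
# PageRank over the similarity graph ranks the remaining sentences.
import networkx as nx
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "App crashes when uploading a large attachment.",
    "The application crashes on large file uploads.",
    "Crash logs show an out-of-memory error in the upload handler.",
    "Workaround: split the attachment into smaller files.",
]

# Step 1: LSA vectors and pairwise semantic similarity.
tfidf = TfidfVectorizer().fit_transform(sentences)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
sim = cosine_similarity(lsa)

# Step 2: keep only one sentence from any pair above the similarity threshold.
keep, threshold = [], 0.95
for i in range(len(sentences)):
    if all(sim[i, j] < threshold for j in keep):
        keep.append(i)

# Step 3: TextRank-style ranking via PageRank over the similarity graph.
graph = nx.from_numpy_array(sim[np.ix_(keep, keep)])
scores = nx.pagerank(graph)
summary = [sentences[keep[i]] for i, _ in
           sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:2]]
print(summary)
```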

Sentence Semantic Similarity based Complex Network approach for Word Sense Disambiguation

Word Sense Disambiguation is a branch of Natural Language Processing (NLP) that deals with multi-sense words. Such multi-sense words are referred to as polysemous words, and they introduce lexical ambiguity. Existing sense disambiguation modules work effectively for single sentences when context information is available. Word embeddings play a vital role in the disambiguation process, and a context-dependent word embedding model is used here. The main goal of this paper is to disambiguate polysemous words by considering the available context information; the main identified challenge is the ambiguous word without context information. The discussed complex network approach disambiguates ambiguous sentences by considering semantic similarities: a sentence semantic similarity-based network is constructed for disambiguating ambiguous sentences. The proposed methodology is trained on the SemCor, Adaptive-Lex, and OMSTI standard lexical resources. The findings show that the methodology works well for disambiguating large documents, where the sense of an ambiguous sentence depends on the adjacent sentences.

  • International Journal on Recent and Innovation Trends in Computing and Communication
  • Nov 2, 2023
  • Gopal Mohadikar et al.
Open Access

A quantum-like text representation based on syntax tree for fuzzy semantic analysis

To mine more semantic information between words, it is important to utilize the different semantic correlations between words. Focusing on the different degrees of modifying relations between words, this article provides a quantum-like text representation based on syntax tree for fuzzy semantic analysis. Firstly, a quantum-like text representation based on density matrix of individual words is generalized to represent the relationship of modification between words. Secondly, a fuzzy semantic membership function is constructed to discuss the different degrees of modifying relationships between words based on syntax tree. Thirdly, the tensor dot product is defined as the sentence semantic similarity by combining the operation rules of the tensor to effectively exploit the semantic information of all elements in the quantum-like sentence representation. Finally, extensive experiments on STS’12, STS’14, STS’15, STS’16 and SICK show that the provided model outperforms the baselines, especially for the data set containing multiple long-sentence pairs, which confirms there are fuzzy semantic associations between words.

  • Journal of Intelligent & Fuzzy Systems
  • Jun 1, 2023
  • Yan Yu + 2
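The density-matrix view of sentence meaning can be illustrated with plain numpy: each word contributes a rank-1 density matrix and sentence similarity is a trace inner product. This simplified sketch averages words uniformly and uses random toy embeddings, whereas the paper weights words by fuzzy degrees of syntactic modification derived from the syntax tree.

```python
# Simplified numpy illustration of a quantum-like sentence representation: each
# word becomes a rank-1 density matrix and a sentence is their (here unweighted)
# average; the paper's syntax-tree-based fuzzy weighting is omitted.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
vocab = {w: rng.normal(size=dim) for w in
         "a small cat sleeps tiny kitten naps loudly".split()}   # toy embeddings

def density_matrix(sentence):
    rhos = []
    for w in sentence.split():
        v = vocab[w] / np.linalg.norm(vocab[w])   # unit word vector
        rhos.append(np.outer(v, v))               # rank-1 density matrix
    rho = np.mean(rhos, axis=0)
    return rho / np.trace(rho)                    # normalize to trace 1

def similarity(s1, s2):
    return float(np.trace(density_matrix(s1) @ density_matrix(s2)))

print(similarity("a small cat sleeps", "a tiny kitten naps"))
print(similarity("a small cat sleeps", "naps loudly"))
```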

A New Alignment Word-Space Approach for Measuring Semantic Similarity for Arabic Text

This work presents a new alignment word-space approach for measuring the similarity between two text snippets. The approach combines two similarity measurement methods: alignment-based and vector space-based. The vector space-based method depends on a semantic net that represents the meaning of words as vectors. These vectors are lemmatized to enrich the search space. The alignment-based method generates an alignment word space matrix (AWSM) for the text snippets according to the generated semantic word spaces. Finally, the degree of sentence semantic similarity is measured using a set of proposed alignment rules. Four experiments were carried out on two different datasets to evaluate the performance of the proposed approach. The experimental results show that applying the lemmatization process to the input text and the vector model has a positive effect. The degree of correctness of the results reaches 0.7212, which is among the two best published results for Arabic semantic similarity.

  • International Journal on Semantic Web and Information Systems
  • Mar 24, 2022
  • Shimaa Ismail + 2
Open Access
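An alignment-style similarity can be sketched as a word-by-word cosine matrix (an AWSM analogue) from which each word's best match is averaged in both directions; the toy embeddings below stand in for the paper's lemmatized semantic-net vectors, and the simple max-average rule stands in for its alignment rules.

```python
# Toy alignment-based similarity: build a word-by-word cosine matrix and average
# the best alignment score per word in both directions. Embeddings are random
# placeholders, not the paper's semantic-net vectors.
import numpy as np

rng = np.random.default_rng(1)
emb = {w: rng.normal(size=16) for w in
       "the cat sat on mat a feline rested rug".split()}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def alignment_similarity(s1, s2):
    w1, w2 = s1.split(), s2.split()
    matrix = np.array([[cos(emb[a], emb[b]) for b in w2] for a in w1])  # AWSM analogue
    forward = matrix.max(axis=1).mean()    # best match for each word of s1
    backward = matrix.max(axis=0).mean()   # best match for each word of s2
    return (forward + backward) / 2

print(alignment_similarity("the cat sat on the mat", "a feline rested on a rug"))
```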

Semantic Reasoning of Product Biologically Inspired Design Based on BERT

Bionic reasoning is a significant process in product biologically inspired design (BID), in which designers search for creatures and products that match the design task. Several studies have tried to assist designers in bionic reasoning, but limitations remain: designers’ bionic reasoning in product BID is vague, and there is a lack of fuzzy semantic search methods at the sentence level. This study tries to assist designers’ bionic semantic reasoning in product BID. First, experiments were conducted to characterize designers’ bionic reasoning in top-down and bottom-up processes, yielding bionic mapping relationships covering affective perception, form, function, material, and environment. Second, the bidirectional encoder representations from transformers (BERT) pretraining model was used to calculate the semantic similarity of product description sentences and biological sentences, so that designers can choose the highest-ranked results to complete bionic reasoning. Finally, we used a product BID example to show the bionic semantic reasoning process and verify the feasibility of the method.

  • Applied Sciences
  • Dec 18, 2021
  • Ze Bian + 4
Open Access
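In practice, ranking biological descriptions against a product-design need with sentence embeddings looks like the sketch below, which uses the sentence-transformers library with a generic English checkpoint; the checkpoint, example texts, and ranking step are assumptions, not the authors' fine-tuned BERT setup.

```python
# Generic sentence-embedding ranking sketch for matching a product-design need to
# candidate biological descriptions; checkpoint and texts are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
product_need = "A lightweight shell that resists impact from all directions."
biology = [
    "The turtle's domed carapace distributes impact forces across its surface.",
    "Gecko feet adhere to smooth walls using microscopic hair structures.",
    "The woodpecker's skull absorbs repeated high-speed impacts.",
]
scores = util.cos_sim(model.encode(product_need), model.encode(biology))[0]
for text, score in sorted(zip(biology, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {text}")   # a designer would pick the top-ranked analogies
```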

Knowledge-based sentence semantic similarity: algebraical properties

Determining the extent to which two text snippets are semantically equivalent is a well-researched topic in the areas of natural language processing, information retrieval and text summarization. Sentence-to-sentence similarity scoring is extensively used in both generic and query-based summarization of documents as a significance or similarity indicator. Nevertheless, most of these applications utilize the concept of a semantic similarity measure only as a tool, without paying attention to the inherent properties of such tools that ultimately restrict the scope and technical soundness of the underlying applications. This paper aims to contribute to filling this gap. It investigates three popular WordNet hierarchical semantic similarity measures, namely path-length, Wu and Palmer, and Leacock and Chodorow, in terms of both algebraic and intuitive properties, highlighting their inherent limitations and theoretical constraints. We have especially examined properties related to the range and scope of the semantic similarity score, incremental monotonicity evolution, monotonicity with respect to the hyponymy/hypernymy relationship, as well as a set of interactive properties. The extension from word semantic similarity to sentence similarity has also been investigated using a pairwise canonical extension, and the properties of the resulting sentence-to-sentence similarity are examined and scrutinized. Next, to overcome the inherent limitations of WordNet semantic similarity in accounting for the various part-of-speech word categories, a WordNet “All word-To-Noun conversion” that makes use of the Categorial Variation Database (CatVar) is put forward and evaluated on a publicly available dataset, with a comparison against some state-of-the-art methods. The findings demonstrate the feasibility of the proposal and open up new opportunities in information retrieval and natural language processing tasks.

  • Progress in Artificial Intelligence
  • Aug 21, 2021
  • Mourad Oussalah + 1
Open Access
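The three WordNet measures discussed here are available directly in NLTK, and the pairwise "best match, averaged both ways" extension to sentences can be written in a few lines. The sketch below restricts itself to noun synsets and illustrates the idea only, not the paper's exact canonical extension; Leacock-Chodorow is available analogously via lch_similarity.

```python
# Word-level WordNet similarity (path length, Wu-Palmer) via NLTK, plus a simple
# best-match-averaged extension to sentences as an illustration.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def word_sim(w1, w2, measure):
    best = 0.0
    for s1 in wn.synsets(w1, pos=wn.NOUN):
        for s2 in wn.synsets(w2, pos=wn.NOUN):
            score = measure(s1, s2)
            if score is not None:
                best = max(best, score)
    return best

path = lambda a, b: a.path_similarity(b)   # shortest-path measure
wup = lambda a, b: a.wup_similarity(b)     # Wu and Palmer measure

def sentence_sim(sent1, sent2, measure):
    words1, words2 = sent1.split(), sent2.split()
    fwd = sum(max(word_sim(a, b, measure) for b in words2) for a in words1) / len(words1)
    bwd = sum(max(word_sim(b, a, measure) for a in words1) for b in words2) / len(words2)
    return (fwd + bwd) / 2

print(word_sim("car", "automobile", path), word_sim("car", "automobile", wup))
print(sentence_sim("dog chased cat", "puppy followed kitten", wup))
```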

Improved Chinese Sentence Semantic Similarity Calculation Method Based on Multi-Feature Fusion

In this paper, an improved long short-term memory (LSTM)-based deep neural network structure is proposed for learning variable-length Chinese sentence semantic similarities. Siamese LSTM, a sequence-insensitive deep neural network model, has a limited ability to capture the semantics of natural language because it has difficulty explaining semantic differences based on the differences in syntactic structures or word order in a sentence. Therefore, the proposed model integrates the syntactic component features of the words in the sentence into a word vector representation layer to express the syntactic structure information of the sentence and the interdependence between words. Moreover, a relative position embedding layer is introduced into the model, and the relative position of the words in the sentence is mapped to a high-dimensional space to capture the local position information of the words. With this model, a parallel structure is used to map two sentences into the same high-dimensional space to obtain a fixed-length sentence vector representation. After aggregation, the sentence similarity is computed in the output layer. Experiments with Chinese sentences show that the model can achieve good results in the calculation of the semantic similarity.

  • Journal of Advanced Computational Intelligence and Intelligent Informatics
  • Jul 20, 2021
  • Liqi Liu + 2
Open Access
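The key input-layer idea here, concatenating word embeddings with syntactic-component and position embeddings before the LSTM, can be sketched as below; the dimensions, tag inventory, and the use of simple position indices as a stand-in for the paper's relative-position mapping are assumptions.

```python
# Tiny sketch of a fused input layer: word embedding + syntactic (POS) embedding
# + position embedding are concatenated per token. Sizes and the toy inputs are
# assumptions; the paper's relative-position scheme is approximated by indices.
import torch
import torch.nn as nn

class FusedInputLayer(nn.Module):
    def __init__(self, vocab=5000, n_pos_tags=40, max_len=64,
                 d_word=100, d_syn=20, d_pos=20):
        super().__init__()
        self.word = nn.Embedding(vocab, d_word)
        self.syntax = nn.Embedding(n_pos_tags, d_syn)   # syntactic component feature
        self.position = nn.Embedding(max_len, d_pos)    # position of the word in the sentence

    def forward(self, word_ids, pos_ids):
        positions = torch.arange(word_ids.size(1)).expand_as(word_ids)
        return torch.cat([self.word(word_ids),
                          self.syntax(pos_ids),
                          self.position(positions)], dim=-1)   # (B, T, d_word+d_syn+d_pos)

layer = FusedInputLayer()
word_ids = torch.randint(0, 5000, (2, 10))
pos_ids = torch.randint(0, 40, (2, 10))
print(layer(word_ids, pos_ids).shape)   # torch.Size([2, 10, 140])
```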

Interpretable semantic textual similarity of sentences using alignment of chunks with classification and regression

The proposed work is focused on establishing an interpretable Semantic Textual Similarity (iSTS) method for a pair of sentences, which can clarify why two sentences are completely or partially similar or have some variations. The proposed interpretable approach is a pipeline of five modules that begins with the pre-processing and chunking of text. The chunks of the two sentences are then aligned using a one-to-multi (1:M) chunk aligner. Thereafter, support vector, Gaussian Naive Bayes and k-Nearest Neighbours classifiers are used to create a multiclass classification algorithm, and different class labels are used to define an alignment type. Finally, a multivariate regression algorithm is developed to score the semantic equivalence of an alignment on a scale from 0 to 5. The efficiency of the proposed method is verified on three different datasets and compared to other state-of-the-art interpretable STS (iSTS) methods. The evaluation results show that the proposed method performs better than other iSTS methods. Most importantly, the modules of the proposed iSTS method are used to develop a Textual Entailment (TE) method. It is found that, when chunk-level, alignment, and sentence-level features are combined, the entailment results improve significantly.

  • Applied Intelligence
  • Mar 8, 2021
  • Goutam Majumder + 3
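The classify-then-regress stage of such a pipeline has the shape sketched below: one model predicts the alignment type of a chunk pair, another predicts its 0-5 score. The features and labels are random placeholders (the label names in the comment, such as EQUI/OPPO/SPE/SIMI, follow the usual iSTS convention), and the real system combines several classifiers rather than a single SVC.

```python
# Schematic of the classify-then-regress stage on hand-crafted chunk-alignment
# features; features and labels below are random placeholders for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))              # chunk-pair feature vectors
align_type = rng.integers(0, 4, size=200)   # alignment classes, e.g. EQUI/OPPO/SPE/SIMI
score = rng.uniform(0, 5, size=200)         # gold 0-5 similarity of each alignment

type_clf = SVC().fit(X, align_type)             # multiclass alignment-type classifier
score_reg = LinearRegression().fit(X, score)    # regressor for the 0-5 score

new_pair = rng.normal(size=(1, 12))
print("type:", type_clf.predict(new_pair)[0],
      "score:", round(float(score_reg.predict(new_pair)[0]), 2))
```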

Comparative Analysis of Word Embeddings in Assessing Semantic Similarity of Complex Sentences

Semantic textual similarity is one of the open research challenges in the field of Natural Language Processing. Extensive research has been carried out in this field, and near-perfect results are achieved by recent transformer-based models on existing benchmark datasets like the STS dataset and the SICK dataset. In this paper, we study the sentences in these datasets and analyze the sensitivity of various word embeddings with respect to the complexity of the sentences. We build a complex-sentences dataset comprising 50 sentence pairs with associated semantic similarity values provided by 15 human annotators. Readability analysis is performed to highlight the increase in complexity from the sentences in the existing benchmark datasets to those in the proposed dataset. Further, we perform a comparative analysis of the performance of various word embeddings and language models on the existing benchmark datasets and the proposed dataset. The results show that the increase in sentence complexity has a significant impact on the performance of the embedding models, resulting in a 10-20% decrease in Pearson's and Spearman's correlation.

  • IEEE Access
  • Jan 1, 2021
  • Dhivya Chandrasekaran + 1
Open Access
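Evaluation in this setting boils down to correlating model scores with human ratings; the snippet below shows the Pearson and Spearman computation with invented numbers in place of the paper's annotations.

```python
# How embedding models are typically scored against human ratings: Pearson and
# Spearman correlation between predicted and annotated similarity.
# The numbers are invented placeholders, not values from the paper's dataset.
from scipy.stats import pearsonr, spearmanr

human_ratings = [4.8, 1.2, 3.5, 2.9, 0.6]      # e.g. mean of 15 annotators, per sentence pair
model_scores  = [0.92, 0.35, 0.71, 0.55, 0.20] # cosine similarities from some embedding model

print("Pearson:  %.3f" % pearsonr(human_ratings, model_scores)[0])
print("Spearman: %.3f" % spearmanr(human_ratings, model_scores)[0])
```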

AraCap: A hybrid deep learning architecture for Arabic Image Captioning

Automatic captioning of images not only enriches multimedia content with descriptive features but also helps in detecting patterns, trends, and events of interest. In particular, Arabic image caption generation is a very challenging topic in the machine learning field. This paper presents AraCap, a hybrid object-based, attention-enriched image captioning architecture, with a focus on the Arabic language. Three models are demonstrated; all of them are implemented and trained on the COCO and Flickr30k datasets, and then tested on an Arabic version of a subset of the COCO dataset. The first model is an object-based captioner that can handle one or multiple detected objects. The second is a combined pipeline that uses both an object detector and attention-based captioning, while the third is based on a pure soft attention mechanism. The models are evaluated using multilingual semantic sentence similarity techniques to assess the accuracy of the generated captions against the ground-truth captions. Results show that the similarity scores of the Arabic captions generated by all three proposed models outperform those of the basic captioning technique.

  • Procedia Computer Science
  • Jan 1, 2021
  • Imad Afyouni + 2
Open Access

Similarity of Sentences With Contradiction Using Semantic Similarity Measures

Short text or sentence similarity is crucial in various natural language processing activities. Traditional measures for sentence similarity consider word order, semantic features and role annotations of text to derive the similarity, but these measures do not suit short texts or sentences with negation. Hence, this paper proposes an approach to determine the semantic similarity of sentences and also presents an algorithm to handle negation. Since word-pair similarity plays a significant role in sentence similarity, this paper also discusses the similarity between word pairs. Existing semantic similarity measures do not handle antonyms accurately, so this paper proposes an algorithm to handle antonyms and presents an antonym dataset with 111 word pairs and corresponding expert ratings. The existing semantic similarity measures are tested on this dataset, and the correlation results show that the expert ratings are consistent with the scores obtained from the semantic similarity measures. Sentence similarity is handled by proposing two algorithms: the first deals with typical sentences, and the second deals with contradiction in the sentences. The SICK dataset, which has sentences with negation, is used for the sentence similarity experiments. The proposed algorithms improve the sentence similarity results.

  • The Computer Journal
  • Aug 19, 2020
  • M Krishna Siva Prasad + 1
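One simple way to make word similarity antonym-aware is to check WordNet's antonym lemmas before falling back to a hierarchy-based score. The sketch below does exactly that with an arbitrary zero penalty; it is only an illustration of the idea, not the algorithm proposed in the paper.

```python
# Minimal antonym-aware word similarity using WordNet antonym lemmas: if one word
# appears among the other's antonyms, pin the similarity to zero, otherwise fall
# back to the best Wu-Palmer score. The zero penalty is an arbitrary choice.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def antonyms(word):
    return {ant.name()
            for syn in wn.synsets(word)
            for lemma in syn.lemmas()
            for ant in lemma.antonyms()}

def antonym_aware_sim(w1, w2):
    if w2 in antonyms(w1) or w1 in antonyms(w2):
        return 0.0                                   # contradiction: force a low score
    best = 0.0
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            score = s1.wup_similarity(s2)
            if score:
                best = max(best, score)
    return best

print(antonym_aware_sim("hot", "cold"))   # direct WordNet antonyms -> forced to 0.0
print(antonym_aware_sim("dog", "cat"))    # not antonyms -> falls back to Wu-Palmer score
```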

Emu: Enhancing Multilingual Sentence Embeddings with Semantic Specialization

We present Emu, a system that semantically enhances multilingual sentence embeddings. Our framework fine-tunes pre-trained multilingual sentence embeddings using two main components: a semantic classifier and a language discriminator. The semantic classifier improves the semantic similarity of related sentences, whereas the language discriminator enhances the multilinguality of the embeddings via multilingual adversarial training. Our experimental results based on several language pairs show that our specialized embeddings outperform the state-of-the-art multilingual sentence embedding model on the task of cross-lingual intent classification using only monolingual labeled data.

  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Apr 3, 2020
  • Wataru Hirota + 3
Open Access

Sentence Similarity Calculating Method Based on Word2Vec and Clustering

With the rapid development and widespread adoption of the Internet, more and more data is stored as text on network platforms. This massive volume of data leads to redundant text information, so it is very important to use text similarity techniques to remove duplicate data. Therefore, how to effectively improve the accuracy and precision of text similarity calculation is an urgent problem. In this paper, we propose an improved method to calculate sentence semantic similarity. The method uses a word2vec model to obtain the semantic information of the text, uses the k-means algorithm to cluster the results, then retrains the word2vec model, and finally obtains the sentence similarity. Experimental results indicate that the performance of our algorithm is improved compared with the traditional word2vec algorithm.

  • DEStech Transactions on Engineering and Technology Research
  • Jan 10, 2020
  • Wan-Li Song
Open Access
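The embed-cluster-compare skeleton behind this kind of method looks like the sketch below: gensim word2vec vectors are averaged per sentence, clustered with k-means, and compared by cosine similarity. The retraining step described in the abstract is omitted, and the four-sentence corpus is far too small to learn meaningful vectors, so the output is purely illustrative.

```python
# Skeleton of the embed -> cluster -> compare idea; corpus and parameters are toy
# values, and the paper's retraining step after clustering is not reproduced.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the cat sat on the mat".split(),
    "a cat rested on a rug".split(),
    "stock prices fell sharply today".split(),
    "markets dropped again this morning".split(),
]
w2v = Word2Vec(corpus, vector_size=50, min_count=1, seed=0, workers=1)

def sent_vec(tokens):
    return np.mean([w2v.wv[t] for t in tokens], axis=0)   # average word vectors

vecs = np.array([sent_vec(s) for s in corpus])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)
print("clusters:", clusters)
print("sim(0,1) = %.3f, sim(0,2) = %.3f" % (
    cosine_similarity(vecs[[0]], vecs[[1]])[0, 0],
    cosine_similarity(vecs[[0]], vecs[[2]])[0, 0]))
```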

A New Context-Based Sentence Embedding and the Semantic Similarity of Sentences

  • International Journal of Knowledge Engineering
  • Jan 1, 2020
  • Dinh-Minh Vu
Open Access
