Related Topics

  • Speech Enhancement Algorithm
  • Speech Enhancement System

Articles published on Speech enhancement

2806 search results, sorted by recency
  • Research Article
  • 10.36948/ijfmr.2026.v08i01.67991
Whisper-Aware Spectro-Transformer U-Net for Emotion-Preserving Multilingual Speech Enhancement
  • Feb 4, 2026
  • International Journal For Multidisciplinary Research
  • Raghu M + 1 more

We present the Whisper-Aware Spectro-Transformer U-Net (WAST-U-Net), a multilingual, emotion-preserving speech enhancement model optimized for automatic speech recognition (ASR). Extending the U-Former backbone, our architecture integrates Transformer blocks at skip connections, emotion and language embeddings at the bottleneck, and a novel Whisper-WER loss that directly optimizes ASR intelligibility. Unlike traditional models that prioritize noise suppression at the cost of expressiveness, WAST-U-Net enhances speech while preserving speaker emotion and linguistic identity. Evaluated on VoiceBank-DEMAND and a Kannada-English code-mixed dataset, our model achieves state-of-the-art performance across PESQ, STOI, SI-SNR, Whisper-WER, and emotion accuracy. Ablation studies confirm the synergistic contribution of each component. This framework sets a new benchmark for multilingual, emotionally intelligent speech enhancement, paving the way for accessible ASR in noisy, real-world environments.
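
Of the metrics reported above, SI-SNR has a standard scale-invariant definition that is easy to state in code. A minimal NumPy sketch of that common definition (variable names are ours, not the paper's):

```python
import numpy as np

def si_snr(estimate: np.ndarray, target: np.ndarray) -> float:
    """Scale-invariant SNR in dB between an enhanced signal and the clean target."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target so any rescaling of the output
    # leaves the metric unchanged (hence "scale-invariant").
    s_target = (np.dot(estimate, target) / np.dot(target, target)) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.sum(s_target**2) / np.sum(e_noise**2))
```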

  • Research Article
  • 10.1016/j.dsp.2026.105987
Lightweight Speech Enhancement with State-Space Model and Depthwise Separable Convolution
  • Feb 1, 2026
  • Digital Signal Processing
  • Chen Jiang + 4 more

  • Research Article
  • 10.3390/app16031439
Parallel Enhancement and Bandwidth Extension of Coded Speech
  • Jan 30, 2026
  • Applied Sciences
  • Jongwook Chae + 3 more

An important use case of speech bandwidth extension (BWE) is generating high-frequency components from band-limited speech processed by a speech codec. Recent works on BWE have demonstrated remarkable capabilities in generating high-quality, high-band components using deep learning techniques. Among them, Streaming SEANet (StrmSEANet) has been shown to be effective for BWE with reduced delay and computational complexity, making it suitable for real-time speech processing. However, the effect of coding artifacts in the lower band of the input signal has not been sufficiently considered in many deep learning-based BWE methods. In this work, we propose Parallel Enhancement and Bandwidth Extension of coded speech (PEBE), where two lightweight networks, referred to as Compact Streaming SEANet (CompSEANet), for coded speech enhancement (CSE) and BWE are configured in parallel. The CSE and BWE models are trained separately with task-specific training settings, thereby effectively improving the reconstruction quality of band-limited speech signals degraded by coding artifacts. Experimental results demonstrate that the proposed PEBE significantly outperforms the baseline AP-BWE, StrmSEANet, and standalone CompSEANet in reconstructing wideband (WB) and fullband speech from Opus-coded narrowband and WB signals. The proposed method achieves the highest scores in the subjective MUSHRA test while providing the fastest inference among all compared methods, with real-time factors (RTF) of 33.95× and 18.38× measured on a Samsung SM-F711 mobile device under single-thread execution.
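
Real-time factors above 1, as quoted here, conventionally mean the ratio of audio duration to wall-clock processing time. A minimal sketch of that measurement (`process_fn` is a hypothetical stand-in for the enhancement/BWE pipeline):

```python
import time
import numpy as np

def real_time_factor(process_fn, audio: np.ndarray, sample_rate: int) -> float:
    """Audio duration divided by processing time; > 1 means faster than real time."""
    start = time.perf_counter()
    process_fn(audio)  # stand-in for the actual enhancement pipeline
    elapsed = time.perf_counter() - start
    return (len(audio) / sample_rate) / elapsed
```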

  • Research Article
  • 10.1177/18758967251413999
Speech Enhancement using Fully Connected Deep Neural Network for Hindi Speech Corrupted by Nonstationary Noises
  • Jan 16, 2026
  • Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology
  • Vijay Kumar Gupta + 4 more

A Fully Connected Deep Neural Network (FCDNN) is used for speech enhancement on Hindi speech databases contaminated by a diverse range of background noises. The database includes both stationary and nonstationary noises such as car noise, factory noise, machine-gun noise, and fighter-plane noise. These noises are added artificially to the clean speech signal at varying input Signal-to-Noise Ratio (SNR) levels of −5, 0, 5, and 10 dB to simulate real-world scenarios with different levels of noise interference. Background noises such as machine-gun and factory noise are more non-stationary than car and fighter-plane noise; this distinction underlines the importance of evaluating speech enhancement systems under diverse noise conditions to assess their robustness in real-world applications. The proposed system demonstrates significant improvements in SNR, PESQ, and STOI for all four noises. Even for speech corrupted by highly nonstationary machine-gun noise at a −5 dB input SNR, an SNR improvement of 13.94 dB with a PESQ of 2.91 and an STOI of 0.94 is observed, showing that the quality and intelligibility of the recovered speech are retained. These findings highlight the effectiveness of FCDNN-based approaches in removing both stationary and nonstationary background noise from corrupted speech. Overall, this research contributes to enhancing the quality and intelligibility of speech signals in noisy environments by leveraging deep learning techniques.
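
Mixing noise into clean speech at a prescribed input SNR, as done here at −5 to 10 dB, follows a standard scaling rule. A sketch of that rule (not the authors' exact pipeline):

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has the requested SNR relative to `clean`."""
    noise = noise[: len(clean)]
    p_clean = np.mean(clean**2)
    p_noise = np.mean(noise**2)
    # Gain that makes 10*log10(p_clean / (gain**2 * p_noise)) equal snr_db.
    gain = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + gain * noise
```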

  • Research Article
  • 10.1186/s13636-025-00443-0
A unified deep learning framework for estimating acoustic context parameters from first order ambisonic speech recordings
  • Jan 13, 2026
  • Journal on Audio, Speech, and Music Processing
  • Hanyu Meng + 4 more

Estimating acoustic context parameters is essential for characterizing acoustic environments, thereby enhancing immersive perception in spatial audio creation and improving speech enhancement and dereverberation algorithms. In this paper, we propose a unified deep learning based framework that estimates various acoustic contexts, including frequency-dependent reverberation time ($T_{30}$), direct-to-reverberant ratio, clarity ($C_{50}$), room geometry, and sound source orientation from first-order Ambisonics (FOA) speech recordings. Our framework employs a novel feature, termed the Spectro-Spatial Covariance Vector (SSCV), which efficiently represents the temporal, spectral, and spatial information of FOA signals. This feature can be effectively utilized by several deep neural networks as back-ends. Experimental results demonstrate that the proposed framework, which incorporates spatial information derived from FOA recordings, significantly outperforms existing methods based solely on spectral information from single-channel audio, achieving more than a 50% reduction in estimation error across all acoustic context estimation tasks. Additionally, we introduce FOA-Conv3D, a novel back-end network that effectively utilizes the SSCV feature through a 3D convolutional encoder. FOA-Conv3D outperforms currently widely applied deep learning frameworks such as convolutional neural network and recurrent convolutional neural network back-end architectures in acoustic parameter and orientation estimation tasks, exhibiting greater robustness under both pink and babble noise conditions. Finally, ablation studies reveal the relative contributions of spectral, interaural level difference, and interaural phase difference cues within the SSCV representation.
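
Of the parameters estimated above, clarity $C_{50}$ has a simple textbook definition when a measured room impulse response is available (the paper instead estimates it blindly from FOA speech). A sketch of that formula:

```python
import numpy as np

def clarity_c50(ir: np.ndarray, fs: int) -> float:
    """C50 in dB: early (<50 ms) to late (>50 ms) energy ratio of an impulse response."""
    onset = np.argmax(np.abs(ir))        # align to the direct-path arrival
    split = onset + int(0.050 * fs)      # 50 ms after the direct sound
    early = np.sum(ir[onset:split] ** 2)
    late = np.sum(ir[split:] ** 2)
    return 10.0 * np.log10(early / late)
```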

  • Research Article
  • 10.3390/electronics15020282
AMUSE++: A Mamba-Enhanced Speech Enhancement Framework with Bi-Directional and Advanced Front-End Modeling
  • Jan 8, 2026
  • Electronics
  • Tsung-Jung Li + 2 more

This study presents AMUSE++, an advanced speech enhancement framework that extends the MUSE++ model by redesigning its core Mamba module with two major improvements. First, the originally unidirectional one-dimensional (1D) Mamba is transformed into a bi-directional architecture to capture temporal dependencies more effectively. Second, this module is extended to a two-dimensional (2D) structure that jointly models both time and frequency dimensions, capturing richer speech features essential for enhancement tasks. In addition to these structural changes, we propose a Preliminary Denoising Module (PDM) as an advanced front-end, composed of multiple cascaded 2D bi-directional Mamba blocks designed to preprocess and denoise input speech features before the main enhancement stage. Extensive experiments on the VoiceBank+DEMAND dataset demonstrate that AMUSE++ significantly outperforms the backbone MUSE++ across a variety of objective speech enhancement metrics, including improvements in perceptual quality and intelligibility. These results confirm that the combination of bi-directionality, two-dimensional modeling, and an enhanced denoising front-end provides a powerful approach to challenging noisy speech scenarios. AMUSE++ thus represents a notable advancement in neural speech enhancement architectures, paving the way for more effective and robust speech enhancement systems in real-world applications.
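
Making a unidirectional sequence model bi-directional is commonly done by running a second copy on the time-reversed signal and merging the two outputs. A generic PyTorch sketch of that flip-and-merge pattern (`core_fwd`/`core_bwd` are placeholders, not the authors' Mamba blocks):

```python
import torch
import torch.nn as nn

class BiDirectional(nn.Module):
    """Wrap two causal sequence modules so the pair sees both time directions.

    Any module mapping (B, T, C) -> (B, T, C) works as a core; the paper's
    Mamba blocks are not reproduced here.
    """
    def __init__(self, core_fwd: nn.Module, core_bwd: nn.Module, channels: int):
        super().__init__()
        self.core_fwd, self.core_bwd = core_fwd, core_bwd
        self.merge = nn.Linear(2 * channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, C)
        fwd = self.core_fwd(x)
        bwd = self.core_bwd(torch.flip(x, dims=[1]))
        bwd = torch.flip(bwd, dims=[1])                   # re-align the time axis
        return self.merge(torch.cat([fwd, bwd], dim=-1))
```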

  • Research Article
  • 10.1088/2631-8695/ae327c
Adaptive attention transformer mechanism with CED model for end to end real-time single channel speech enhancement
  • Jan 1, 2026
  • Engineering Research Express
  • Anil Kumar Prathipati + 1 more

Speech enhancement (SE) plays a vital role in applications such as mobile communication and automatic speech recognition. However, existing attention-based SE models often fail to simultaneously capture channel, frequency, and temporal dependencies, limiting their ability to generalize under diverse noise conditions. To address this, we propose AATCEDNet, a novel Adaptive Attention Transformer integrated with a Convolutional Encoder-Decoder (CED) framework for real-time single-channel speech enhancement. Our design introduces the Adaptive Attention Transformer Network (AATN), composed of an Adaptive Cross-Channel Attention (ACA) module for dynamic channel refinement and dual-domain transformers combining Adaptive Frequency Attention (AFA) and Adaptive Time Attention (ATA) to effectively model long-range time-frequency dependencies. Furthermore, a dense encoder-decoder with D2Net blocks captures multi-scale features, while a subpixel convolution decoder with dual-path masking ensures accurate spectrogram reconstruction at low computational cost. Evaluations on OpenSLR (Telugu), NOIZEUS, and VoiceBank-DEMAND datasets demonstrate that AATCEDNet consistently outperforms state-of-the-art baselines such as MSCUNet, SCU-Net, and SEGAN. Under unseen noise conditions, AATCEDNet achieves average improvements of +1.7% STOI and +0.33 PESQ compared to MSCUNet, while reducing model complexity. Ablation studies further confirm the critical roles of ACA and ATA in improving intelligibility. These results highlight AATCEDNet's ability to deliver real-time, high-quality speech enhancement with strong generalization across diverse noise environments, making it suitable for practical real-time speech-driven systems.
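
For readers unfamiliar with channel attention, the squeeze-and-excitation pattern below illustrates the general idea of reweighting channels from a global summary. It is an illustration only; the paper's ACA module is not reproduced here and may differ substantially:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel reweighting (generic, illustrative)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                     # (B, C, 1, 1) global summary
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                # per-channel gains in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T, F)
        return x * self.gate(x)
```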

  • Research Article
  • 10.1016/j.dsp.2025.105464
Multi-information-aware speech enhancement through self-supervised learning
  • Jan 1, 2026
  • Digital Signal Processing
  • Xiaotong Tu + 5 more

  • Research Article
  • 10.1121/10.0042015
Enhancing binaural speech perception in noise via weighted coherence masking for hearables.
  • Jan 1, 2026
  • The Journal of the Acoustical Society of America
  • Reza Ghanavi + 1 more

A weighted masking method based on the coherent-to-diffuse ratio is presented for robust binaural speech enhancement in real-time hearable devices. The method applies manually tuned weights across custom-defined critical frequency bands to improve the quality and intelligibility of frontal target speech in multi-talker reverberant environments. The algorithm was implemented in real time on a functional hearable prototype and evaluated in a perceptual listening study under realistic binaural hearing conditions. Subjective assessments with normal-hearing participants, including evaluations of audio quality, speech intelligibility, and spatial localization, demonstrated consistent improvements compared to baseline coherence-based filtering methods. Results indicate that the method suppresses diffuse background noise while preserving interaural spatial cues important for listening comfort and spatial awareness in complex acoustic scenes. These findings support the applicability of coherence-weighted masking in real-time binaural enhancement tasks under reverberant, multi-talker conditions, including potential use in hearable and hearing aid technologies. In addition to perceptual listening tests, objective evaluations across multiple reverberant environments demonstrate consistent performance improvements over baseline methods.
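
The underlying cue here is interaural coherence: directional speech stays coherent across the two ears, while diffuse noise does not. A minimal SciPy sketch of a coherence-derived per-frequency gain (illustrative only; the paper weights a coherent-to-diffuse-ratio estimate with manually tuned per-band weights):

```python
import numpy as np
from scipy.signal import coherence

def coherence_gains(left: np.ndarray, right: np.ndarray, fs: int, floor: float = 0.1):
    """Per-frequency gains from the magnitude-squared coherence of two channels.

    High coherence -> likely directional target, keep; low coherence ->
    likely diffuse noise, attenuate. A floor avoids over-suppression.
    """
    freqs, msc = coherence(left, right, fs=fs, nperseg=512)
    return freqs, np.maximum(msc, floor)
```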

  • Research Article
  • 10.1016/j.dsp.2025.105502
MSLD-SENet: Time–frequency multi-dconv channel-shuffled attention with lightweight dilated DenseNet for monaural speech enhancement
  • Jan 1, 2026
  • Digital Signal Processing
  • Yongle Zhang + 4 more

  • Research Article
  • Cited by 1
  • 10.1016/j.apacoust.2025.111050
FNSE-SBGAN: Far-field speech enhancement with Schrödinger bridge and generative adversarial networks
  • Jan 1, 2026
  • Applied Acoustics
  • Tong Lei + 7 more

  • Research Article
  • 10.1504/ijiei.2026.10068454
Real-time Speech Enhancement Using Temporal Envelope Modulation and Hybrid Neural Network Algorithms for Improved Speech-to-Text Conversion
  • Jan 1, 2026
  • International Journal of Intelligent Engineering Informatics
  • B Vijayalakshmi + 1 more

  • Research Article
  • 10.1121/10.0042221
Motion-aware sonar denoising for autonomous underwater vehicles self-noise using a speed-conditioned U-Net-transformer dual-branch conditional generative adversarial network.
  • Jan 1, 2026
  • The Journal of the Acoustical Society of America
  • Yufei Wang + 4 more

Passive sonar surveillance by autonomous underwater vehicles (AUVs) is often hindered by non-stationary, nonlinear speed-dependent self-noise. To address this, we propose Speed-UT2-CGAN, a motion-aware sonar denoising framework utilizing a dual-branch conditional generative adversarial network that combines a U-Net convolutional branch for local feature extraction from time-domain audio sequences and a transformer-based attention branch for long-range temporal dependencies. The architecture incorporates AUV speed as an additional conditioning input to dynamically adapt to speed-dependent noise characteristics, and is trained with a combination of adversarial, time-domain, and frequency-domain loss functions to ensure accurate spectral and temporal reconstruction. Experiments on synthetic mixtures combining real AUV self-noise recordings from lake trials with ShipsEar vessel signals demonstrate that Speed-UT2-CGAN significantly outperforms traditional methods, speech enhancement generative adversarial network, and dual-path recurrent neural network, for a single AUV in shallow-water lake trials at 0, 2, and 3 knots, achieving an output average signal-to-noise ratio of 6.6 at -5 dB input and an average correlation coefficient of 0.87. These results confirm the effectiveness of motion-aware speed conditioning for passive sonar enhancement in single-sensor AUV systems, under controlled synthetic-data conditions representative of AUV constant depth, speed, and heading in shallow-water lake environments.
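
The speed-conditioning idea generalizes: a scalar motion state can be embedded and fused with learned features. A generic PyTorch sketch of one common pattern, channel concatenation (the paper's actual conditioning mechanism may differ):

```python
import torch
import torch.nn as nn

class SpeedConditioner(nn.Module):
    """Inject a scalar speed (e.g., knots) into a feature map as extra channels.

    Generic conditioning pattern, not the authors' exact mechanism: the scalar
    is embedded, broadcast over time, concatenated, and projected back.
    """
    def __init__(self, feat_channels: int, embed_dim: int = 16):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(1, embed_dim), nn.ReLU())
        self.proj = nn.Conv1d(feat_channels + embed_dim, feat_channels, kernel_size=1)

    def forward(self, feats: torch.Tensor, speed: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, T); speed: (B, 1)
        e = self.embed(speed)                                 # (B, E)
        e = e.unsqueeze(-1).expand(-1, -1, feats.shape[-1])   # (B, E, T)
        return self.proj(torch.cat([feats, e], dim=1))        # (B, C, T)
```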

  • Research Article
  • 10.1121/10.0042198
End-to-end audio-visual learning for cochlear implant sound coding simulations in noisy environments.
  • Jan 1, 2026
  • JASA express letters
  • Meng-Ping Lin + 3 more

The cochlear implant (CI) is a successful biomedical device that enables individuals with severe-to-profound hearing loss to perceive sound through electrical stimulation, yet listening in noise remains challenging. Recent deep learning advances offer promising potential for CI sound coding by integrating visual cues. In this study, an audio-visual speech enhancement (AVSE) module is integrated with the ElectrodeNet-CS (ECS) model to form the end-to-end CI system, AVSE-ECS. Simulations show that the AVSE-ECS system with joint training achieves high objective speech intelligibility and improves the signal-to-error ratio by 7.4666 dB compared to the advanced combination encoder strategy. These findings underscore the potential of AVSE-based CI sound coding.
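
The signal-to-error ratio quoted above is conventionally the energy of the reference over the energy of the residual, in dB. A sketch of that standard form (the paper's exact measurement setup may differ):

```python
import numpy as np

def signal_to_error_ratio(reference: np.ndarray, output: np.ndarray) -> float:
    """SER in dB: reference energy over residual-error energy."""
    error = reference - output
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(error**2))
```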

  • Research Article
  • 10.1007/s10772-025-10240-x
Enhancement and reconstruction of dysphonic Kannada speech using LSTM and convolution network
  • Dec 24, 2025
  • International Journal of Speech Technology
  • P Rajeswari + 1 more

  • Research Article
  • 10.1007/s10772-025-10226-9
New independent adaptive-step-size sub-band recursive decorrelation approach for noise reduction and speech enhancement
  • Dec 12, 2025
  • International Journal of Speech Technology
  • Redha Bendoumia + 3 more

  • Research Article
  • 10.1007/s10772-025-10239-4
Transform-based nonlinear speech enhancement for monaural scenarios
  • Dec 12, 2025
  • International Journal of Speech Technology
  • Navneet Upadhyay + 1 more

  • Research Article
  • 10.1145/3770672
MmMUSE: An mmWave-based Motion-resilient Universal Speech Enhancement System
  • Dec 2, 2025
  • Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
  • Lingyu Wang + 8 more

Speech enhancement improves the user interaction experience in voice-based smart systems. While microphone-based speech perception is limited by airborne noise, mmWave is immune to such interference. However, user-device motion hinders mmWave-based vocal extraction, dispersing vocal signals and introducing distortions. In this paper, we propose mmMUSE, an mmWave-based motion-resilient universal speech enhancement system that integrates mmWave and audio. To mitigate motion interference, we propose a two-stage method for robust vocal vibration extraction. Moreover, by proposing the Vocal-Noise-Ratio metric to assess the prominence of the vocal vibration, we enable real-time voice activity detection. We also design a complex-valued network that includes an attention-based fusion network for cross-modal complementing and a time-frequency masking network that corrects the amplitude and phase of speech to isolate noise. Using datasets from 46 participants, mmMUSE outperforms state-of-the-art speech enhancement models by 26% in SI-SDR and 34% in STOI on average. It also achieves SI-SDR improvements of 16.72 dB, 17.93 dB, 14.93 dB, and 18.95 dB in controlled environments involving intense noise, extensive motion, multiple speakers, and various obstructive materials, respectively. Finally, in real-world scenarios, including running, public spaces, and driving, mmMUSE achieves WER below 10%.
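
Correcting both amplitude and phase per time-frequency bin amounts to applying a complex-valued mask to the mixture STFT. The sketch below shows the generic operation (the mask itself would come from a trained network, which is not reproduced here):

```python
import numpy as np

def apply_complex_mask(stft_mix: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Apply a complex ratio mask to a complex STFT (frames x bins).

    Complex multiplication scales each bin's magnitude by |mask| and shifts
    its phase by angle(mask), i.e., it corrects amplitude and phase jointly.
    """
    return stft_mix * mask
```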

  • Research Article
  • 10.1097/aud.0000000000001769
Audiovisual Speech Perception in Aging Cochlear Implant Users and Age-Matched Nonimplanted Adults.
  • Dec 2, 2025
  • Ear and hearing
  • James W Dias + 2 more

Older typical-hearing adults without a cochlear implant (CI) have been found to exhibit greater multisensory benefits when identifying audiovisual speech than younger normal-hearing adults. The greater multisensory benefits demonstrated by older non-CI users can compensate for unisensory auditory and visual speech deficits, allowing them to identify audiovisual speech with accuracy comparable to that of younger normal-hearing adults. Although most new CI recipients are 65 years of age and older, the reliance of older CI users on such multisensory benefits is unknown. The goal of the current investigation was to evaluate age-related differences in cross-sensory and multisensory benefits in audiovisual speech identification in aging CI users and to examine how they differ from age-matched non-CI users. Twenty middle-aged-to-older CI users (50 to 83 years of age) and 35 age-matched non-CI users completed an auditory-visual speech identification task, identifying 288 disyllabic words presented auditory-alone, visual-alone, or audiovisually. CI users identified speech stimuli streamed directly through their CI device in quiet and in Gaussian noise at +10 and +5 dB signal-to-noise ratio (SNR). Non-CI users identified speech stimuli delivered through earphones in noise at -5, 0, and +5 dB SNR. Different noise conditions were used for CI users and non-CI users to avoid ceiling and floor effects. From visual, auditory, and audiovisual performance, psychometrics for the visual enhancement of auditory speech (VE), the auditory enhancement of visual speech (AE), and auditory-visual multisensory enhancement (AVE) were calculated. Group differences (in the overlapping +5 dB SNR condition) and effects of age and noise were tested using linear regression and linear mixed-effects regression models. Both CI users and non-CI users demonstrated canonical differences in visual, auditory, and audiovisual speech identification. VE and AVE were greater for CI users than for non-CI users. AVE increased with the age of older CI users and non-CI users, consistent with age-group differences in AVE observed in a previous study of non-CI users. The results of the current investigation suggest that CI users, like age-matched non-CI users, rely more on multisensory integration as they age. Older CI users may benefit more from audiovisual input than older non-CI users. These perceptual benefits allow older CI users to identify audiovisual speech with accuracy closer to that of older non-CI users, despite deficits in the auditory perception of CI users. As a result, successful use of a CI device may partially depend on the ability of a CI user to integrate what they see with the information available from their device, and older CI users may depend more on visual input to use their CI successfully.

  • Research Article
  • 10.1121/10.0041915
Lightweight speech enhancement via learnable prior and Schrödinger bridge generative adversarial network.
  • Dec 1, 2025
  • The Journal of the Acoustical Society of America
  • Zengqiang Shang + 4 more

This paper introduces a speech enhancement framework that integrates a learnable prior with a Schrödinger bridge generative adversarial network. Although conventional Schrödinger bridge-based speech enhancement methods have shown promising results, they suffer from inefficient transport paths as a result of path crossings, high computational requirements, and degraded speech quality in resource-constrained scenarios. The proposed approach overcomes these limitations by synergistically combining learnable-prior and adversarial modeling paradigms. The learnable prior module effectively captures fundamental speech characteristics and dynamically adjusts the initial distribution during training, thereby minimizing path crossings and facilitating more efficient transport paths. The architecture incorporates adversarial training and multi-scale loss functions to enhance speech quality and naturalness. Extensive experimental evaluations demonstrate that this method outperforms state-of-the-art approaches across various metrics, including overall quality (OVRL), speech quality (SIG), background noise quality (BAK), and P808.MOS, while maintaining computational efficiency. Remarkably, the approach demonstrates robust performance even in resource-constrained environments, with the nano-sized model (0.04 M parameters) delivering competitive results. Comprehensive ablation studies confirm the efficacy of each component and offer valuable insights into the role of learnable priors in speech enhancement applications.

