
Related Topics

  • Data Fidelity Term
  • Data Fidelity
  • Norm Regularization

Articles published on Convex regularization

239 search results, sorted by recency
  • Research Article
  • 10.1088/1361-6420/ae0e48
VARPROX: a primal-dual variable projection method for the minimization of penalized separable non-linear least squares
  • Oct 15, 2025
  • Inverse Problems
  • A Marmin + 1 more

Abstract We present a methodology for solving non-linear least squares problems that extends variable projection. We propose adding a non-smooth convex regularization on the non-linear variable to handle instability when it is high-dimensional. While preserving the variable projection structure, our method relies on a primal-dual proximal algorithm and consequently benefits from full splitting of all the involved operators. We then extend our methodology to the common case where box constraints are imposed on the non-linear variables. We report a numerical application of our method to image texture analysis, where it shows significant improvement over standard variable projection.
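The proximal handling of a non-smooth convex regularizer described above can be illustrated with a minimal proximal-gradient (ISTA) sketch for an ℓ1-penalized linear least-squares problem. This is only the generic building block, assuming a finite-dimensional problem and ℓ1 as the regularizer; the paper's actual primal-dual VARPROX scheme is more involved.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Proximal-gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    # a gradient step on the smooth term, then the prox of the non-smooth term.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Each iteration is guaranteed not to increase the penalized objective, which is the monotonicity that primal-dual variants then trade for full operator splitting.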

  • Research Article
  • 10.3390/info16080676
An Approximate Algorithm for Sparse Distributionally Robust Optimization
  • Aug 7, 2025
  • Information
  • Ruyu Wang + 3 more

In this paper, we propose a sparse distributionally robust optimization (DRO) model incorporating the Conditional Value-at-Risk (CVaR) measure to control tail risks in uncertain environments. The model utilizes sparsity to reduce transaction costs and enhance operational efficiency. We reformulate the problem as a Min-Max-Min optimization and convert it into an equivalent non-smooth minimization problem. To address this computational challenge, we develop an approximate discretization (AD) scheme for the underlying continuous random vector and prove its convergence to the original non-smooth formulation under mild conditions. The resulting problem can be efficiently solved using a subgradient method. While our analysis focuses on the CVaR penalty, this approach is applicable to a broader class of non-smooth convex regularizers. The experimental results on the portfolio selection problem confirm the effectiveness and scalability of the proposed AD algorithm.
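The CVaR measure at the core of the model has a simple empirical form: the average of the worst (1 − α) fraction of observed losses. A minimal sample-based sketch (not the paper's discretization scheme, just the risk measure itself):

```python
import math

def cvar(losses, alpha=0.95):
    # Empirical CVaR_alpha: average of the worst ceil((1 - alpha) * n) losses.
    worst_first = sorted(losses, reverse=True)
    k = max(1, math.ceil((1 - alpha) * len(losses)))
    return sum(worst_first[:k]) / k
```

For example, with alpha=0.8 on ten losses, `cvar` averages the two largest; as alpha grows, the measure concentrates further into the tail, which is exactly the tail-risk control the DRO model exploits.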

  • Research Article
  • 10.1371/journal.pone.0328507
Multi-scale feature pyramid network with bidirectional attention for efficient mural image classification
  • Aug 4, 2025
  • PLOS One
  • Shulan Wang + 3 more

Mural image recognition plays a critical role in the digital preservation of cultural heritage; however, it faces cross-cultural and multi-period style generalization challenges, compounded by limited sample sizes and intricate details, such as losses caused by natural weathering of mural surfaces and complex artistic patterns. This paper proposes a deep learning model based on DenseNet201-FPN, incorporating a Bidirectional Convolutional Block Attention Module (Bi-CBAM), dynamic focal distillation loss, and convex regularization. First, a lightweight Feature Pyramid Network (FPN) is embedded into DenseNet201 to fuse multi-scale texture features (28 × 28 × 256, 14 × 14 × 512, 7 × 7 × 1024). Second, a bidirectional LSTM-driven attention module iteratively optimizes channel and spatial weights, enhancing detail perception for low-frequency categories. Third, a dynamic temperature distillation strategy (T = 3 → 1) balances supervision from teacher models (ResNeXt101) and ground truth, improving the F1-score of rare classes by 6.1%. Experimental results on a self-constructed mural dataset (2,000 images, 26 subcategories) demonstrate 87.9% accuracy (+3.7% over DenseNet201) and real-time inference on edge devices (63 ms/frame at 8.1 W on Jetson TX2). This study provides a cost-effective solution for large-scale mural digitization in resource-constrained environments.

  • Research Article
  • Cited: 1
  • 10.1007/s13398-025-01750-z
Convex regularization and subdifferential calculus
  • Jul 2, 2025
  • Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas
  • Rafael Correa + 2 more

This paper deals with the regularization of the sum of functions defined on a locally convex space through their closed-convex hulls in the bidual space. Different conditions guaranteeing that the closed-convex hull of the sum is the sum of the corresponding closed-convex hulls are provided. These conditions are expressed in terms of some ε-subdifferential calculus rules for the sum. The case of convex functions is also studied, and exact calculus rules are given under additional continuity/qualification conditions. As an illustration, a variant of the proof of the classical Rockafellar theorem on convex integration is proposed.

  • Research Article
  • 10.1287/moor.2023.0377
Lower Complexity Bounds of First-Order Methods for Affinely Constrained Composite Nonconvex Problems
  • Jun 12, 2025
  • Mathematics of Operations Research
  • Wei Liu + 2 more

Many recent studies on first-order methods (FOMs) focus on composite nonconvex nonsmooth optimization with linear and/or nonlinear function constraints. Upper (or worst-case) complexity bounds have been established for these methods. However, little can be claimed about their optimality, as no lower bound is known except for a few special smooth nonconvex cases. In this paper, we make the first attempt to establish lower complexity bounds of FOMs for solving a class of composite nonconvex nonsmooth optimization problems with linear constraints. Assuming two different first-order oracles, we establish lower complexity bounds of FOMs to produce a (near) ε-stationary point of a problem (and its reformulation) in the considered problem class for any given tolerance ε > 0. Our lower bounds indicate that the existence of a nonsmooth convex regularizer can evidently increase the difficulty of an affinely constrained regularized problem over its nonregularized counterpart. In addition, we show that our lower bound of FOMs with the second oracle is tight, with a difference of up to a logarithmic factor from an upper complexity bound established in a longer arXiv version of this work. Funding: This work was partly supported by the Office of Naval Research [Grant N00014-22-1-2573] and the National Science Foundation [Grants DMS-2053493, DMS-2406896, and IIS-2147253].

  • Research Article
  • Cited: 1
  • 10.1080/00273171.2025.2503833
Missing Data Handling via EM and Multiple Imputation in Network Analysis using Glasso and Atan Regularization
  • May 14, 2025
  • Multivariate Behavioral Research
  • Kai Jannik Nehler + 1 more

The existing literature on missing data handling in psychological network analysis using cross-sectional data is currently limited to likelihood-based approaches. In addition, there is a focus on convex regularization, with missing-data handling implemented using different calculations in model selection across various packages. Our work aims to contribute to the literature by implementing a missing-data handling approach based on multiple imputation, specifically stacking the imputations, and evaluating it against direct and two-step EM methods. Standardized model selection across the multiple imputation and EM methods is ensured, and the comparative evaluation between the missing-data handling methods is performed separately for convex regularization (glasso) and nonconvex regularization (atan). Simulated conditions vary network size, number of observations, and amount of missingness. Evaluation criteria encompass edge set recovery, partial correlation bias, and correlation of network statistics. Overall, the missing-data handling approaches exhibit similar performance under many conditions. Using glasso with EBIC model selection, the two-step EM method performs best overall, closely followed by stacked multiple imputation. For atan regularization using BIC model selection, stacked multiple imputation proves most consistent across all conditions and evaluation criteria.

  • Research Article
  • 10.1002/for.3277
Hierarchical Regularizers for Reverse Unrestricted Mixed Data Sampling Regressions
  • Apr 11, 2025
  • Journal of Forecasting
  • Alain Hecq + 2 more

Abstract Reverse Unrestricted MIxed DAta Sampling (RU‐MIDAS) regressions are used to model high‐frequency responses by means of low‐frequency variables. However, due to the periodic structure of RU‐MIDAS regressions, the dimensionality grows quickly if the frequency mismatch between the high‐ and low‐frequency variables is large. Additionally, the number of high‐frequency observations available for estimation decreases. We propose to counteract this reduction in sample size by pooling the high‐frequency coefficients and further reducing the dimensionality through a sparsity‐inducing convex regularizer that accounts for the temporal ordering among the different lags. To this end, the regularizer prioritizes the inclusion of lagged coefficients according to the recency of the information they contain. We demonstrate the proposed method on two empirical applications, one on realized volatility forecasting with macroeconomic data and another on demand forecasting for a bicycle‐sharing system with ridership data on other transportation types.

  • Research Article
  • Cited: 1
  • 10.1137/24m167192x
Error Estimates for Weakly Convex Frame-Based Regularization Including Learned Filters
  • Apr 7, 2025
  • SIAM Journal on Imaging Sciences
  • Andrea Ebner + 2 more

  • Research Article
  • 10.1080/02331888.2025.2482070
Tensor train regression with convex regularization
  • Mar 25, 2025
  • Statistics
  • Jiahao Peng + 2 more

Tensor regression has received increasing attention in recent years since more and more data sets are naturally represented in tensor structures. In the regression setting, most existing works are concerned with tensors in CP or Tucker formats. In particular, the statistical rates for regression problems with a tensor coefficient that has a low-rank tensor train (TT) format have not been established. In this paper, we study tensor train regression (with tensor predictors and scalar responses) using convex regularization. The statistical rates of the estimators are established in the high-dimensional setting, for both mean regression and quantile regression. Numerical experiments and an empirical application are presented to show the estimators' finite-sample performance. This work provides theoretical guarantees for a stable and efficient alternative for high-dimensional tensor regression based on the TT format, which scales well for high-order tensors.
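The TT format behind this estimator can be computed by the standard TT-SVD procedure: successive reshapes and truncated SVDs peel off one "core" per tensor mode. The sketch below is the generic decomposition with illustrative rank caps, not the paper's regularized regression estimator.

```python
import numpy as np

def tt_svd(T, ranks):
    # TT-SVD: factor a d-way tensor into a train of 3-way cores
    # (r_{k-1}, n_k, r_k) via successive reshape + truncated SVD.
    shape = T.shape
    cores, r_prev = [], 1
    M = T.reshape(1, -1)
    for k, n in enumerate(shape[:-1]):
        M = M.reshape(r_prev * n, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(ranks[k], len(s))              # truncate to the rank cap
        cores.append(U[:, :r].reshape(r_prev, n, r))
        M = s[:r, None] * Vt[:r]               # carry the remainder forward
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores
```

With ranks large enough the contraction of the cores reproduces the tensor exactly; truncating the ranks gives the low-parameter TT coefficient that makes high-order regression tractable.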

  • Open Access
  • Research Article
  • Cited: 1
  • 10.1088/1751-8121/adb8ad
Observable asymptotics of regularized Cox regression models with standard Gaussian designs: a statistical mechanics approach
  • Mar 4, 2025
  • Journal of Physics A: Mathematical and Theoretical
  • Emanuele Massa + 1 more

Abstract We study the asymptotic behaviour of the regularized maximum partial likelihood estimator (RMPLE) in the proportional limit, considering an arbitrary convex regularizer and assuming that the covariates X_i ∈ R^p follow a multivariate Gaussian law with covariance I_p/p for each i = 1, …, n. In order to efficiently compute the estimator under investigation, we propose a modified approximate message passing (AMP) algorithm, which we name COX-AMP, and compare its performance with the coordinate-wise descent (CD) algorithm, which is taken as reference. By means of the Replica method, we derive a set of six replica symmetric (RS) equations that we show to correctly describe the average behaviour of the estimators when the sample size and the number of covariates are large and commensurate. These equations cannot be solved in practice, as the data generating process (that we are trying to estimate) is not known. However, the update equations of COX-AMP suggest the construction of a local field that can in turn be used to accurately estimate all the RS order parameters of the theory solely from the data, without actually solving the RS equations. We emphasize that this approach can be applied when the estimator is computed via any method and is not restricted to COX-AMP. Once the RS order parameters are estimated, we have access to the amount of signal and noise in the RMPLE, and also to its generalization error, directly from the data. Although we focus on the partial likelihood objective, we envisage broader application of the methodology proposed here, for instance to GLMs with nuisance parameters, which include some non-proportional hazards models, e.g. Accelerated Failure Time models.

  • Open Access
  • Research Article
  • Cited: 1
  • 10.1088/1361-6420/adb780
Randomized block coordinate descent method for linear ill-posed problems
  • Feb 27, 2025
  • Inverse Problems
  • Qinian Jin + 1 more

Abstract Consider linear ill-posed problems of the form ∑_{i=1}^b A_i x_i = y, where, for each i, A_i is a bounded linear operator between two Hilbert spaces X_i and Y. When b is huge, solving the problem by an iterative method using the full gradient at each iteration step is both time-consuming and memory-demanding. Although the randomized block coordinate descent (RBCD) method has been shown to be efficient for well-posed large-scale optimization problems with a small memory footprint, a convergence analysis of the RBCD method for solving ill-posed problems has been lacking. In this paper, we investigate the convergence of the RBCD method with noisy data under either a priori or a posteriori stopping rules. We prove that the RBCD method combined with an a priori stopping rule yields a sequence that converges weakly to a solution of the problem almost surely. We also consider early stopping of the RBCD method and demonstrate that the discrepancy principle can terminate the iteration after finitely many steps almost surely. For a class of ill-posed problems with special tensor product form, we obtain strong convergence results for the RBCD method. Furthermore, we consider incorporating convex regularization terms into the RBCD method to enhance the detection of solution features. To illustrate the theory and the performance of the method, numerical simulations from the imaging modalities of computed tomography and compressive temporal imaging are reported.
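A minimal randomized block coordinate descent iteration for the least-squares formulation of ∑_i A_i x_i = y might look like the sketch below. It is a finite-dimensional toy without the regularization terms or the stopping rules analyzed in the paper: at each step one block is drawn at random and only its gradient is used.

```python
import numpy as np

def rbcd(blocks, y, n_iter=500, seed=0):
    # Randomized block coordinate descent for min over (x_1,...,x_b) of
    # 0.5 * || sum_i A_i x_i - y ||^2, updating one random block per step.
    rng = np.random.default_rng(seed)
    xs = [np.zeros(A.shape[1]) for A in blocks]
    r = -y                                   # residual: sum_i A_i x_i - y
    for _ in range(n_iter):
        i = rng.integers(len(blocks))
        A = blocks[i]
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # block-wise Lipschitz step
        g = A.T @ r                              # partial gradient for block i
        xs[i] -= step * g
        r -= step * (A @ g)                      # keep the residual in sync
    return xs
```

Only the selected block's operator is touched per iteration, which is the memory advantage the abstract refers to when b is huge.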

  • Research Article
  • 10.1088/1361-6420/adb3c4
Solving the acousto-electric tomography by the adaptive Nesterov method of Kaczmarz type
  • Feb 18, 2025
  • Inverse Problems
  • Kai Zhu + 1 more

Abstract Acousto-electric tomography (AET) is a new hybrid imaging technique. It recovers the conductivity of biological tissue using the internal power density distribution and overcomes the shortcomings of classical electrical impedance tomography, where only the boundary measurement data are applied. In this manuscript, we consider the numerical realization of the AET model. Inspired by the work (Jin 2025 Inverse Problems 41 025005), we develop an adaptive Nesterov method of Kaczmarz type. The combination parameters are no longer chosen by line search, but through adaptive procedures with explicit formulas, which greatly improve the computational efficiency. Furthermore, uniformly convex regularization functionals are incorporated to reconstruct the conductivity with sparsity or discontinuity. The convergence property of the method is rigorously proven under reasonable conditions. Numerical simulations are presented to illustrate the efficiency and feasibility of our proposed method. We emphasize that the proposed method is also applicable for other nonlinear inverse problems with multiple operators.

  • Research Article
  • Cited: 3
  • 10.1088/1361-6420/ada8d3
Adaptive Nesterov momentum method for solving ill-posed inverse problems
  • Jan 23, 2025
  • Inverse Problems
  • Qinian Jin

Abstract Nesterov's acceleration strategy is renowned for speeding up the convergence of gradient-based optimization algorithms and has been crucial in developing fast first-order methods for well-posed convex optimization problems. Although Nesterov's accelerated gradient method has been adapted as an iterative regularization method for solving ill-posed inverse problems, no general convergence theory is available except for some special instances. In this paper, we develop an adaptive Nesterov momentum method for solving ill-posed inverse problems in Banach spaces, where the step-sizes and momentum coefficients are chosen through adaptive procedures with explicit formulas. Additionally, uniformly convex regularization functions are incorporated to detect the features of sought solutions. Under standard conditions, we establish the regularization property of our method when terminated by the discrepancy principle. Various numerical experiments demonstrate that our method outperforms the Landweber-type method in terms of the required number of iterations and the computational time.
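The two ingredients the abstract combines, Nesterov momentum and discrepancy-principle stopping, can be sketched for a linear problem in a Hilbert space with a fixed momentum schedule k/(k+3). This is the textbook accelerated Landweber baseline, not the paper's adaptive parameter choices or Banach-space setting.

```python
import numpy as np

def nesterov_landweber(A, y_delta, delta, tau=1.1, max_iter=5000):
    # Nesterov-accelerated Landweber iteration for A x = y with noisy data
    # y_delta (noise level delta), stopped by the discrepancy principle
    # ||A x - y_delta|| <= tau * delta.
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1 / ||A||^2
    x = x_prev = np.zeros(A.shape[1])
    for k in range(max_iter):
        z = x + k / (k + 3.0) * (x - x_prev)      # momentum extrapolation
        residual = A @ z - y_delta
        if np.linalg.norm(residual) <= tau * delta:
            return z                              # discrepancy principle hit
        x_prev, x = x, z - mu * (A.T @ residual)  # Landweber step at z
    return x
```

Stopping when the residual drops to the noise level is what turns the iteration into a regularization method: iterating further would start fitting the noise.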

  • Research Article
  • 10.3390/electronics14020238
Low-Rank Tensor Recovery Based on Nonconvex Geman Norm and Total Variation
  • Jan 8, 2025
  • Electronics
  • Xinhua Su + 3 more

Tensor restoration finds applications in various fields, including data science, image processing, and machine learning, where the global low-rank property is a crucial prior. As the convex relaxation of the tensor rank function, the traditional tensor nuclear norm directly sums all the singular values of a tensor. Considering the variations among singular values, nonconvex regularizations have been proposed to approximate the tensor rank function more effectively, leading to improved recovery performance. In addition, the local characteristics of the tensor could further improve detail recovery. Currently, the gradient tensor is explored to effectively capture the smoothness property across tensor dimensions. However, previous studies considered the gradient tensor only within the context of the nuclear norm. In order to simultaneously represent the global low-rank property and local smoothness of tensors more effectively, we propose a novel regularization, the Tensor-Correlated Total Variation (TCTV), based on the nonconvex Geman norm and total variation. Specifically, the proposed method minimizes the nonconvex Geman norm on singular values of the gradient tensor. It enhances the recovery performance of a low-rank tensor by simultaneously reducing estimation bias, improving approximation accuracy, preserving fine-grained structural details, and maintaining good computational efficiency compared to traditional convex regularizations. Based on the proposed TCTV regularization, we develop TC-TCTV and TRPCA-TCTV models to solve completion and denoising problems, respectively. Subsequently, the proposed models are solved by the Alternating Direction Method of Multipliers (ADMM), and the complexity and convergence of the algorithm are analyzed. Extensive numerical results on multiple datasets validate the superior recovery performance of our method, even in extreme conditions with high missing rates.
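The bias-reduction argument can be illustrated with a single reweighted-thresholding (majorization) step for a Geman-type penalty g(σ) = γσ/(σ+γ) applied to singular values: each σ is shrunk by λ·g'(σ), so large singular values are shrunk far less than small ones, unlike uniform nuclear-norm shrinkage. This is an illustrative surrogate step on a matrix, not the paper's TCTV/ADMM scheme on the gradient tensor.

```python
import numpy as np

def geman_weighted_svt(Y, lam, gamma):
    # One reweighted-thresholding step for the Geman penalty
    # g(sigma) = gamma * sigma / (sigma + gamma) on singular values.
    # Each sigma is shrunk by lam * g'(sigma), with
    # g'(s) = gamma**2 / (s + gamma)**2, so the shrinkage (and hence the
    # estimation bias) vanishes for large singular values.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    weights = gamma**2 / (s + gamma) ** 2
    return U @ np.diag(np.maximum(s - lam * weights, 0.0)) @ Vt
```

On diag(10, 1) with λ = γ = 1, the large singular value loses only 1/121 while the small one loses 1/4, which is the debiasing behavior nonconvex spectral penalties are chosen for.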

  • Research Article
  • 10.1007/s10915-025-03103-9
Fast Reflected Forward-Backward algorithm: achieving fast convergence rates for convex optimization with linear cone constraints
  • Jan 1, 2025
  • Journal of Scientific Computing
  • Radu Ioan Boţ + 2 more

In this paper, we derive a Fast Reflected Forward-Backward (Fast RFB) algorithm to solve the problem of finding a zero of the sum of a maximally monotone operator and a monotone and Lipschitz continuous operator in a real Hilbert space. Our approach extends the class of reflected forward-backward methods by introducing a Nesterov momentum term and a correction term, resulting in enhanced convergence performance. The iterative sequence of the proposed algorithm is proven to converge weakly, and the Fast RFB algorithm demonstrates impressive convergence rates, achieving o(1/k) as k → +∞ for both the discrete velocity and the tangent residual at the last iterate. When applied to minimax problems with a smooth coupling term and nonsmooth convex regularizers, the resulting algorithm demonstrates significantly improved convergence properties compared to the current state of the art in the literature. For convex optimization problems with linear cone constraints, our approach yields a fully splitting primal-dual algorithm that ensures not only the convergence of iterates to a primal-dual solution, but also a last-iterate convergence rate of o(1/k) as k → +∞ for the objective function value, feasibility measure, and complementarity condition. This represents the most competitive theoretical result currently known for algorithms addressing this class of optimization problems. Numerical experiments are performed to illustrate the convergence behavior of Fast RFB.

  • Open Access
  • Research Article
  • 10.2478/ijanmc-2024-0040
A Novel Variance Reduction Proximal Stochastic Newton Algorithm for Large-Scale Machine Learning Optimization
  • Dec 1, 2024
  • International Journal of Advanced Network, Monitoring and Controls
  • Dr Mohammed Moyed Ahmed

Abstract This paper introduces the Variance Reduction Proximal Stochastic Newton Algorithm (SNVR) for solving composite optimization problems in machine learning, specifically minimizing F(w) + Ω(w), where F is a smooth convex function and Ω is a non-smooth convex regularizer. SNVR combines variance reduction techniques with the proximal Newton method to achieve faster convergence while handling non-smooth regularizers. Theoretical analysis establishes that SNVR achieves linear convergence under standard assumptions, outperforming existing methods in terms of iteration complexity. Experimental results on the "heart" dataset (N=600, d=13) demonstrate SNVR's superior performance. Convergence speed: SNVR reaches the optimal solution in 5 iterations, compared to 14 for ProxSVRG and more than 20 for proxSGD and ProxGD. Solution quality: SNVR achieves an optimal objective function value of 0.1919, matching ProxSVRG, and outperforming proxSGD (0.1940) and ProxGD (0.2148). Efficiency: SNVR shows a 10.5% reduction in objective function value within the first two iterations. These results indicate that SNVR offers significant improvements in both convergence speed (180-300% faster) and solution quality (up to 11.9% better) compared to existing methods, making it a valuable tool for large-scale machine learning optimization tasks.
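The variance-reduction idea SNVR builds on can be sketched in its first-order ProxSVRG form (without the Newton scaling) for an ℓ1-regularized least-squares objective: a full gradient at an anchor point corrects each cheap stochastic gradient. Problem, step size, and penalty below are illustrative choices, not the paper's setup.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_svrg(A, y, lam, eta=0.02, epochs=10, seed=0):
    # ProxSVRG for min_w (1/2n)*||Aw - y||^2 + lam*||w||_1.
    # Each epoch: one full gradient at w_ref, then n variance-reduced
    # stochastic proximal steps.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_ref = w.copy()
        full_grad = A.T @ (A @ w_ref - y) / n
        for _ in range(n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ w - y[i])          # stochastic gradient at w
            gi_ref = A[i] * (A[i] @ w_ref - y[i])  # same sample at the anchor
            v = gi - gi_ref + full_grad            # variance-reduced estimate
            w = prox_l1(w - eta * v, eta * lam)
    return w
```

Because the correction term vanishes as w approaches w_ref, the gradient estimate's variance shrinks near convergence, which is what permits the constant step size and the linear rates cited above.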

  • Open Access
  • Research Article
  • 10.1016/j.geb.2024.11.005
Regularized Bayesian best response learning in finite games
  • Nov 17, 2024
  • Games and Economic Behavior
  • Sayan Mukherjee + 1 more

  • Research Article
  • 10.1142/s0219876224500440
Improved Low-Rank Matrix Approximation in Multivariate Case
  • Sep 30, 2024
  • International Journal of Computational Methods
  • Pichid Kittisuwan + 1 more

The low-rank matrix approximation (LRMA) algorithm is an important method in signal processing. It is commonly used in tasks such as matrix estimation and machine learning. In the past, many convex regularizers, such as the absolute-value norm, were presented for LRMA. Recently, many works have shown that LRMA with nonconvex regularizers, such as the quadratic and logarithmic regularizers, can be more efficient than with convex regularizers. Furthermore, many traditional works present LRMA in the univariate case and do not consider the multivariate case. Therefore, LRMA in the multivariate case with a nonconvex regularizer is presented in this work. Note that our proposed method is based on the singular value decomposition (SVD) algorithm. The relationship between singular values is considered in this multivariate case. We also present a novel nonconvex regularizer for LRMA, from which a simple solution for our method can be obtained. On many random signals, our proposed method is evaluated against state-of-the-art algorithms. Experimental results show that the best results are obtained with the proposed method.
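For reference, the convex baseline such SVD-based LRMA methods compare against is soft-thresholding of the singular values, i.e. the proximal operator of the nuclear norm. A generic sketch (not the paper's multivariate nonconvex rule):

```python
import numpy as np

def svt(Y, tau):
    # Singular value thresholding: the proximal operator of
    # tau * (nuclear norm) at Y, i.e. uniform shrinkage of singular values.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

The uniform shrinkage of every singular value by tau is precisely the bias that motivates the nonconvex regularizers discussed in the abstract.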

  • Open Access
  • Research Article
  • 10.1016/j.jmaa.2024.128693
Dissipative solutions to the model of a general compressible viscous fluid with the Coulomb friction law boundary condition
  • Jul 18, 2024
  • Journal of Mathematical Analysis and Applications
  • Šárka Nečasová + 2 more

  • Research Article
  • Cited: 1
  • 10.1109/tnnls.2023.3237170
MFILS: Tri-Selection via Convex and Nonconvex Regularizations.
  • Jul 1, 2024
  • IEEE transactions on neural networks and learning systems
  • Dou El Kefel Mansouri + 4 more

In many real-world applications, data are represented by multiple instances and simultaneously associated with multiple labels. These data are always redundant and generally contaminated by different noise levels. As a result, several machine learning models fail to achieve good classification and find an optimal mapping. Feature selection, instance selection, and label selection are three effective dimensionality reduction techniques. Nevertheless, the literature was limited to feature and/or instance selection but has, to some extent, neglected label selection, which also plays an essential role in the preprocessing step, as label noises can adversely affect the performance of the underlying learning algorithms. In this article, we propose a novel framework termed multilabel Feature Instance Label Selection (mFILS) that simultaneously performs feature, instance, and label selections in both convex and nonconvex scenarios. To the best of our knowledge, this article offers, for the first time ever, a study using the triple and simultaneous selection of features, instances, and labels based on convex and nonconvex penalties in a multilabel scenario. Experimental results are built on some known benchmark datasets to validate the effectiveness of the proposed mFILS.


Copyright 2026 Cactus Communications. All rights reserved.
