
Related Topics

  • Riccati Equation
  • Matrix Riccati

Articles published on Algebraic Riccati equation

2757 search results (sorted by recency)
  • Research Article
  • 10.1051/cocv/2025102
Turnpike property of nonzero-sum linear-quadratic differential games
  • Jan 1, 2026
  • ESAIM: Control, Optimisation and Calculus of Variations
  • Jingrui Sun + 2 more

This paper investigates the turnpike properties of deterministic nonzero-sum linear-quadratic (LQ) differential games. Under certain assumptions on the Hamiltonian matrix of the nonzero-sum LQ differential game, we establish the solvability of both the coupled non-symmetric differential Riccati equation (DRE) and the algebraic Riccati equation (ARE). Moreover, we identify the convergence relationship between the DRE and ARE, which is essential for understanding the turnpike properties. Over a finite but sufficiently long time horizon, the open-loop Nash equilibrium is shown to remain exponentially close to the solution of a two-objective optimization problem for the majority of the time horizon.
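Abstracts throughout this list reduce controller or game design to solving an algebraic Riccati equation (ARE). As a point of reference, a single-player continuous-time ARE (not the coupled, non-symmetric system studied in this paper) can be solved numerically in a few lines; the matrices below are illustrative placeholders, not taken from the article:

```python
# Solve the continuous-time ARE  A'P + PA - P B R^{-1} B' P + Q = 0
# for an illustrative stabilizable system.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)

# The Riccati residual should be numerically zero.
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
assert np.allclose(residual, 0.0, atol=1e-8)
```

The stabilizing solution returned here is the infinite-horizon limit to which the finite-horizon differential Riccati equation converges, which is the kind of DRE-to-ARE convergence relationship the turnpike analysis exploits.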

  • Research Article
  • 10.1115/1.4070506
H-Infinity Control Design and Entropy Minimization for Second-Order Systems
  • Dec 24, 2025
  • Journal of Dynamic Systems, Measurement, and Control
  • Neeraj Srinivas + 1 more

A modified Newton–Kleinman method is introduced to solve the algebraic Riccati equations present in the H-infinity controller synthesis problem. The new method utilizes the inherent properties of the mass, stiffness, and damping matrices associated with second-order systems to efficiently compute the solutions to the algebraic Riccati equations for large systems, with an analytical proof of convergence. This allows for more efficient H-infinity controller synthesis than traditional methods, such as the Hamiltonian approach and the standard Newton–Kleinman method, when applied to the same systems. The entropy formulation of the H-infinity controller synthesis problem is then utilized in conjunction with the new algorithms to develop a data-driven controller that balances the total entropy and the quadratic cost functional associated with the problem.
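For readers unfamiliar with the baseline the authors modify, a minimal sketch of the classical Newton–Kleinman iteration follows: each step solves one Lyapunov equation and refines the feedback gain, converging quadratically to the stabilizing ARE solution. The system, weights, and initial gain are assumptions for illustration and do not exploit the second-order (mass/stiffness/damping) structure used in the paper:

```python
# Classical Newton-Kleinman iteration for A'P + PA - P B R^{-1} B' P + Q = 0.
import numpy as np
from scipy.linalg import solve_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # unstable open loop
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[3.0, 1.0]])                # assumed stabilizing initial gain
for _ in range(20):
    Ac = A - B @ K
    # Lyapunov step: solve Ac' P + P Ac = -(Q + K' R K)
    P = solve_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)       # gain update

assert np.allclose(P, solve_continuous_are(A, B, Q, R), atol=1e-8)
```

Each iteration requires only a Lyapunov solve, which is precisely what makes structure-exploiting variants attractive for large systems.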

  • Research Article
  • 10.3390/app16010055
A Discrete-Time FOLQR Framework for Centralized AGC in Multi-Area Interconnected Power Grids
  • Dec 20, 2025
  • Applied Sciences
  • Khidir Ak Mohamed + 2 more

This paper presents a discrete-time, centralized fractional-order linear quadratic regulator (FOLQR) for automatic generation control (AGC) of three-area interconnected nonreheat thermal systems. The AGC state explicitly includes the area control error (ACE) and tie-line power; a quadratic performance index penalizes the ACE, its integral (IACE), and the control effort. The continuous-time plant (governor–turbine dynamics and tie-line flows) is discretized at a fixed sampling interval, and a single centralized gain is obtained from the discrete algebraic Riccati equation; the fractional-order extension shapes memory in the feedback to temper rapid transients. Benchmark studies under 0.01 and 0.05 p.u. step-load disturbances show that the FOLQR stabilizes the interconnection and consistently lowers peak excursions relative to a conventional discrete LQR (COQAGC) baseline, reducing frequency peaks by about 9–12% and tie-line peaks by 24–60% in the small-step case, while producing smoother actuator commands. Although the FOLQR exhibits longer settling times, this trade-off is acceptable for multi-area AGC, where limiting overshoot and tie-line excursions is operationally more critical than strict settling-time targets. The proposed controller retains a simple centralized, discrete-time structure with a modest computational burden, making it suitable for real-time AGC deployment in large interconnected grids and demonstrating, to our knowledge, the first application of a fractional-order LQR to a three-area thermal benchmark.
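The baseline step of the pipeline described above, discretizing the plant and reading a centralized gain off the discrete algebraic Riccati equation, can be sketched for a toy plant. A double integrator stands in for the three-area AGC model here; all matrices, weights, and the sampling interval are illustrative assumptions:

```python
# Standard (integer-order) discrete-time LQR: discretize, solve the DARE,
# form the state-feedback gain u_k = -K x_k.
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.signal import cont2discrete

Ac = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy continuous-time plant
Bc = np.array([[0.0], [1.0]])
Ad, Bd, *_ = cont2discrete((Ac, Bc, np.eye(2), np.zeros((2, 1))), dt=0.1)

Q = np.diag([10.0, 1.0])                  # assumed weights: first state penalized heavily
R = np.array([[1.0]])

P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)

# The closed loop must be Schur stable (all eigenvalues inside the unit circle).
assert np.max(np.abs(np.linalg.eigvals(Ad - Bd @ K))) < 1.0
```

The fractional-order extension the paper proposes modifies how past states enter the feedback; the Riccati-based gain computation above is the conventional core it builds on.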

  • Research Article
  • 10.3390/e27121264
The Capacity Gains of Gaussian Channels with Unstable Versus Stable Autoregressive Noise.
  • Dec 18, 2025
  • Entropy (Basel, Switzerland)
  • Charalambos D Charalambous + 3 more

In this paper, we consider Cover's and Pombra's formulation of the feedback capacity of additive Gaussian noise (AGN) channels with jointly Gaussian nonstationary and nonergodic noise. We derive closed-form feedback capacity formulas, using Karush-Kuhn-Tucker (KKT) conditions and convergence properties of difference Riccati equations to limiting algebraic Riccati equations of filtering theory, for unstable and stable autoregressive (AR) noise. Surprisingly, the capacity formulas depend on the parameters of the AR noise, its pole c ∈ (−∞, ∞) and noise variance K_W ∈ (0, ∞), and the total transmit power κ ∈ [0, ∞), indicating substantial gains for the unstable noise region, ∀c² ∈ (1, ∞), ∀κ > κ_min ≜ K_W(1 + 4c² − 3)/(2c² − 1)², compared to its complement region. In particular, feedback capacity is distinguished by three regimes, as follows. Regime 1, ∀c² ∈ (1, ∞), ∀κ > κ_min: the optimal channel input includes an innovations part, and the capacity increases as |c| > 1 increases, while κ_min and the allocated transmit power decrease. Regime 2, ∀c² ∈ (1, ∞), ∀κ ≤ κ_min, and Regime 3, ∀c ∈ [−1, 1], ∀κ ∈ [0, ∞) (together the complement of Regime 1): the innovations part of the optimal channel input is asymptotically zero and the capacity is fundamentally different from that of Regime 1. The differences between the capacity formulas for Regimes 1, 2, and 3 are directly related to their operational meaning: (i) Regime 1 is an ergodic capacity, while Regimes 2 and 3 are nonergodic capacities; (ii) Regime 1 is achieved by an asymptotically stationary channel input with a non-zero innovations part, while Regimes 2 and 3 are achieved by an asymptotically zero innovations part. The capacity gains in Regime 1 are attributed to the high correlation of noise samples compared to stable noise and to the use of an informative innovations part by the optimal channel input, which makes possible the prediction of future noise samples from past samples, unlike memoryless noise. Our results provide answers to certain open questions regarding the validity of capacity formulas for stable noise that appeared in the literature.
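The convergence of difference Riccati equations to a limiting algebraic Riccati equation, which the capacity derivation relies on, is easy to observe in the scalar filtering setting. The sketch below iterates the scalar filtering Riccati recursion for an assumed unstable pole; the numbers are illustrative and are not taken from the paper:

```python
# Scalar filtering difference Riccati equation for x_{k+1} = c x_k + w_k,
# observed as y_k = x_k + v_k:
#   P_{k+1} = c^2 P_k - c^2 P_k^2 / (P_k + V) + W
# It converges to the stabilizing root of the corresponding algebraic
# Riccati equation even for an unstable pole |c| > 1.
c, W, V = 1.5, 1.0, 1.0   # assumed: unstable pole, process & measurement variances

P = 0.0
for _ in range(200):
    P = c * c * P - (c * c * P * P) / (P + V) + W

# Fixed-point (algebraic Riccati) residual vanishes in the limit.
residual = c * c * P - (c * c * P * P) / (P + V) + W - P
assert abs(residual) < 1e-9
```

For these values the fixed point solves P² − 2.25P − 1 = 0, so the recursion settles at P = (2.25 + √9.0625)/2 ≈ 2.63, illustrating the difference-to-algebraic Riccati convergence in the simplest possible case.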

  • Research Article
  • 10.1016/j.rico.2025.100631
A fast-converging Newton-based iterative scheme for the algebraic Riccati equation with step-size optimization
  • Dec 1, 2025
  • Results in Control and Optimization
  • Bulugu Ndulu Batume + 1 more

  • Research Article
  • 10.1142/s0219843625500148
Nonlinear optimal control for three-link biped robots
  • Nov 28, 2025
  • International Journal of Humanoid Robotics
  • G Rigatos + 4 more

The article proposes a nonlinear optimal control approach for the three-link biped robot. In this humanoid robot, only the two legs are actuated, while the third link, the robot's torso, is unactuated. Because of the nonlinearities and the unactuated torso in the robot's dynamics, the treatment of the stabilization and trajectory tracking problem is a non-trivial task. To solve the associated nonlinear optimal control problem, the state-space model of the three-link biped robot undergoes approximate linearization based on Taylor series expansion and the associated Jacobian matrices. For the linearized state-space model of the biped robot, a stabilizing optimal (H-infinity) feedback controller is designed. To compute the controller's feedback gains, an algebraic Riccati equation is solved repeatedly at each iteration of the control algorithm. The stability properties of the control method are proven through Lyapunov analysis.

  • Research Article
  • 10.1080/00207721.2025.2588658
Convergence rate comparison of two data-driven algorithms to stochastic LQR problems
  • Nov 14, 2025
  • International Journal of Systems Science
  • Zonghan Li + 4 more

This paper investigates the linear quadratic regulation (LQR) problem for discrete-time stochastic systems (DTSSs) with state-dependent multiplicative noise. Policy iteration (PI) and value iteration (VI) algorithms are proposed to solve the generalised algebraic Riccati equation (GARE) corresponding to the stochastic LQR (SLQR) problem. Building on the proposed PI and VI algorithms, a comparative analysis of their convergence rates is conducted. Additionally, for the case of completely unknown system dynamics, this paper introduces online model-free versions of the PI and VI algorithms based on reinforcement learning (RL) to find the optimal control strategy for the SLQR problem. Finally, numerical simulations validate the feasibility of the proposed algorithms and the correctness of the theoretical results.
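For orientation, the deterministic (noise-free) analogues of the two algorithms compared above can be sketched as follows; PI's quadratic convergence versus VI's linear convergence shows up directly in the iteration counts. The system matrices and the initial stabilizing gain are assumptions for illustration, not the paper's stochastic GARE setting:

```python
# Model-based policy iteration (PI) vs. value iteration (VI) for the
# deterministic discrete-time LQR, both converging to the DARE solution.
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
P_star = solve_discrete_are(A, B, Q, R)

def vi_steps(tol=1e-8, max_iter=10000):
    """Value iteration: Riccati recursion from P = 0; linear convergence."""
    P, k = np.zeros_like(Q), 0
    while np.linalg.norm(P - P_star) > tol and k < max_iter:
        S = R + B.T @ P @ B
        P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A) + Q
        k += 1
    return k

def pi_steps(K, tol=1e-8, max_iter=100):
    """Policy iteration from a stabilizing gain K; quadratic convergence."""
    P, k = np.zeros_like(Q), 0
    while np.linalg.norm(P - P_star) > tol and k < max_iter:
        Ac = A - B @ K
        # policy evaluation: solve Ac' P Ac - P + Q + K' R K = 0
        P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # improvement
        k += 1
    return k

K0 = np.array([[1.0, 2.0]])  # assumed stabilizing initial gain for PI
assert pi_steps(K0) < vi_steps()
```

PI pays for its speed with the need for an initially stabilizing gain, while VI starts from zero; this is the same trade-off the model-free versions inherit.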

  • Research Article
  • 10.1142/s0219843625500100
Nonlinear Optimal Control of the Underactuated Wheeled Bipedal Robot
  • Nov 13, 2025
  • International Journal of Humanoid Robotics
  • G Rigatos + 5 more

Wheeled bipedal robots are used in several civilian and defense tasks. In this paper, a new nonlinear optimal control method is proposed for solving the problem of control and stabilization of the 3-DOF wheeled bipedal robot. The control problem is nontrivial due to the nonlinearities and underactuation that affect the dynamic model of this robot. To apply the proposed nonlinear optimal control method, the dynamic model of the wheeled bipedal robot first undergoes approximate linearization around a temporary operating point that is updated at each iteration of the control algorithm. The linearization takes place through first-order Taylor series expansion and through the computation of the Jacobian matrices of the system's state-space description. For the approximately linearized model of the robot, an H-infinity feedback controller is designed. The H-infinity controller constitutes the solution of the optimal control problem for the wheeled bipedal robot under uncertainty and external perturbations. For the computation of the feedback gains of the H-infinity controller, an algebraic Riccati equation is solved at each time-step of the control method. The stability properties of the control algorithm are proven through Lyapunov analysis. First, it is shown that the control scheme achieves H-infinity tracking performance, which signifies elevated robustness of the control loop of this robotic system under uncertainties and external perturbations. Next, it is also shown that the control loop of the wheeled bipedal robot is globally asymptotically stable. The proposed control method achieves fast and accurate tracking of setpoints under moderate variations of the control inputs.

  • Research Article
  • 10.1080/23307706.2025.2574721
Nonlinear optimal control for wireless power transfer and EV charging
  • Nov 13, 2025
  • Journal of Control and Decision
  • G Rigatos + 4 more

The article treats the problem of nonlinear optimal control of wireless power transfer systems, consisting of a three-phase DC/AC inverter and a three-phase AC/DC converter (rectifier), with the charging of electric vehicles as the application domain. It is proven that the dynamic model of the wireless power transfer system is differentially flat. To apply the proposed nonlinear optimal control method, the state-space model of the wireless power transfer system undergoes approximate linearisation with the use of first-order Taylor series expansion and through the computation of the associated Jacobian matrices. The linearisation takes place at each sampling instance around a temporary operating point defined by the present value of the system's state vector and the last sampled value of the control inputs vector. For the approximately linearised model of the system, an H-infinity (optimal) feedback controller is designed. To compute the feedback gains of this controller, an algebraic Riccati equation is solved repeatedly at each time-step of the control algorithm. The global stability properties of the control scheme are proven through Lyapunov analysis. The nonlinear optimal control scheme achieves fast and precise tracking of setpoints by the state variables of the wireless power transfer system under moderate variations of the control inputs. To apply state estimation-based control of the wireless power transfer system, the H-infinity Kalman Filter is used as a robust state observer.

  • Research Article
  • 10.1080/00207179.2025.2580259
Optimal stabilisation control for discrete-time mean-field stochastic Markov jump system
  • Nov 12, 2025
  • International Journal of Control
  • Yongliang Ju + 2 more

This paper investigates the optimal stabilisation control of discrete-time mean-field Markov jump systems with multiplicative noises over finite and infinite horizons. The forward and backward stochastic difference equations, as well as the newly defined coupled Riccati equation, are used to ascertain the necessary and sufficient conditions for the solvability of the finite-horizon problem, from which the optimal controller and performance index are derived. For the infinite-horizon case, a new Lyapunov function based on the optimal performance index is defined and a coupled algebraic Riccati equation (CARE) is developed, by which the necessary and sufficient stabilisation condition and the infinite-horizon optimal solution are derived. The mean-field stochastic Markov jump system is mean-square stabilisable if and only if the CARE has a unique positive semi-definite (positive definite) solution. The main techniques employed in this paper are the establishment of the maximum principle and the construction of the Lyapunov function.

  • Research Article
  • 10.1016/j.cnsns.2025.109455
New upper bound on the solution of the continuous coupled algebraic Riccati equation
  • Nov 1, 2025
  • Communications in Nonlinear Science and Numerical Simulation
  • Rourou Zhuang + 1 more

  • Research Article
  • 10.1016/j.ins.2025.122265
Attaining the stabilizing solution of model unavailable modified algebraic Riccati equation using Q-learning algorithm
  • Nov 1, 2025
  • Information Sciences
  • Jie Gao + 2 more

  • Research Article
  • 10.1002/asjc.70002
Reduced inversion methods for solving discrete periodic Riccati matrix equations
  • Oct 27, 2025
  • Asian Journal of Control
  • Yurui Wang + 1 more

This study is concerned with solving the discrete periodic Riccati matrix equations (DPREs) arising in discrete-time periodic linear systems. Many existing methods for solving the DPRE involve matrix inversion operations. To reduce the number of matrix inversion operations, a novel reduced inversion zeroing neural network (RIZNN) model is established by constructing a set of matrix-valued error equations. In addition, a nonlinear activation function that combines a hyperbolic sine function with an exponential function is designed to accelerate the convergence rate of the RIZNN model. Furthermore, with the help of a time-varying function, a prescribed-time RIZNN (PT-RIZNN) model is constructed based on the RIZNN model. The distinctive feature of the PT-RIZNN model is that the settling time can be prescribed a priori. The convergence properties of the proposed models and the superiority of the nonlinear activation function are theoretically proven. Simulation results are supplied to demonstrate the effectiveness of the developed models and the superiority of the nonlinear activation function.

  • Research Article
  • 10.1080/23307706.2025.2556989
Data-driven finite-horizon reinforcement learning optimal control for T–S fuzzy discrete-time systems
  • Oct 17, 2025
  • Journal of Control and Decision
  • Yifan Deng + 2 more

In this paper, we study the finite-horizon reinforcement learning (RL) optimal control problem for Takagi–Sugeno (T–S) fuzzy discrete-time systems. By applying Bellman optimality theory, a fuzzy finite-horizon RL optimal controller is presented, and its analytical solutions are reduced to discrete-time algebraic Riccati equations (AREs). Since AREs are difficult to solve directly, a model-based policy iteration (PI) algorithm and a data-driven value iteration (VI) algorithm are proposed to obtain approximate solutions of the AREs for the cases of known and unknown system dynamics, respectively. It is proved that the two proposed RL algorithms converge to optimal solutions and that the proposed fuzzy finite-horizon RL optimal control method makes the controlled systems asymptotically stable in a predefined finite time interval. Finally, we apply the developed method to a truck–trailer system; the simulation results demonstrate the effectiveness of the fuzzy finite-horizon RL optimal control method and of the two proposed RL algorithms.

  • Research Article
  • 10.1002/rnc.70237
Stochastic Linear Quadratic Optimal Control for Continuous‐Time Systems via Reinforcement Learning
  • Oct 16, 2025
  • International Journal of Robust and Nonlinear Control
  • Jianglin Yu + 2 more

This paper aims at solving the infinite-horizon stochastic linear quadratic (SLQ) optimal control problem online for continuous-time systems with both additive and multiplicative noises. To eliminate the requirement for prior knowledge of the system dynamics, a novel policy iteration approach is proposed, which leverages integral reinforcement learning (RL) techniques to iteratively solve the stochastic algebraic Riccati equation (SARE) using real-time state and input data. The proposed approach is an off-policy RL algorithm, in which the learning process can be executed using identical state and input data collected online over fixed time intervals, thereby enabling the optimal control law to be computed. The convergence of the proposed algorithm to the solution of the SARE is verified, and its effectiveness is validated through a numerical example.

  • Research Article
  • 10.1002/acs.4076
Adaptive Dynamic Programming Infinite‐Horizon Optimal Tracking Control for Stochastic Linear Discrete‐Time Systems
  • Oct 14, 2025
  • International Journal of Adaptive Control and Signal Processing
  • Kun Zhang + 2 more

This paper investigates stochastic discrete-time systems with multiplicative state-dependent and input-dependent noise via a novel adaptive dynamic programming (ADP) based control method combined with optimal stationary control techniques. The tracking control problem, which is notably more difficult without knowledge of the system dynamics or the reference system, is generalized so that the system dynamics need not be Hurwitz, which is more practically relevant. An augmented system is constructed, and a discount factor is introduced into the cost function. With the discount factor in place to address the stochastic algebraic Riccati equation (SARE), the linear quadratic tracking (LQT) problem is proved to be well-posed. A second-order moment formulation is then developed to solve the SARE. Based on stochastic adaptive control, a novel on-policy ADP algorithm is proposed to solve the LQT problem using only state and input data. The convergence and stability of the novel ADP algorithm are rigorously investigated and discussed. Finally, numerical simulations and practical experiments on two distinct systems are performed to validate the effectiveness and practicability of the proposed ADP methodology.

  • Research Article
  • 10.1109/tcyb.2025.3582377
Data-Driven Adaptive Control for Discrete-Time Linear Systems With Delayed Inputs.
  • Oct 1, 2025
  • IEEE transactions on cybernetics
  • Ai-Guo Wu + 1 more

In this article, the stabilization problem is investigated for input-delayed systems with unknown system dynamics. To solve this problem, a value iteration (VI)-based adaptive dynamic programming (ADP) algorithm is established to learn the state feedback controller from the data along the trajectory of the system. In order to design this control algorithm, the input-delayed system is transformed into a delay-free system at first. Thus, the algebraic Riccati matrix equation (ARE) of the delay-free system is iteratively solved in the absence of system model, and then the controller is designed by using the approximation to the solution of the ARE. In particular, the rank condition of the data-constructed matrices is satisfied by utilizing basis functions, and an initial stabilizing controller is not required in the proposed algorithm. Finally, the effectiveness of the proposed algorithm is illustrated by two practical examples.

  • Research Article
  • 10.1115/1.4069531
Improving Reinforcement Learning Value Iterations in Discrete-Time Linear-Quadratic Optimal Control
  • Sep 30, 2025
  • Journal of Dynamic Systems, Measurement, and Control
  • Lingyi Xu + 1 more

The paper demonstrates via simulation that the well-known value iteration algorithm of reinforcement learning for the discrete-time linear-quadratic optimal control problem converges very slowly, at most linearly. Despite its slow convergence, the value iteration algorithm still converges even when the initial feedback gain is several orders of magnitude away from the optimal one, and even when the initial feedback gain is not stabilizing, as demonstrated by an example. It is known that the convergence rate of the corresponding policy iteration algorithm is quadratic, assuming the initial feedback gain is stabilizing. We show that the convergence speed of the value iteration algorithm can also be made quadratic by applying ideas from the doubling algorithm used for solving the algebraic Riccati equation. We precisely state a condition required for convergence of the value iteration algorithm, which turns out to be milder than the corresponding condition for the policy iteration algorithm. In addition, we show that the newly proposed value iteration algorithm requires less computational effort than the policy iteration algorithm. With these improvements and observations, we revitalize the value iteration algorithm and demonstrate its superiority over the policy iteration algorithm.
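One concrete form of the "doubling" idea invoked above is the structured doubling algorithm (SDA) for the discrete-time ARE, which squares the approximation order at every step instead of advancing one step at a time. The sketch below is a generic SDA, not the authors' modified value iteration, and uses illustrative matrices:

```python
# Structured doubling algorithm (SDA) for the DARE
#   P = A'P A - A'P B (R + B'P B)^{-1} B'P A + Q.
# With G = B R^{-1} B', the iterate H_k converges quadratically to P.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

Ak = A.copy()
G = B @ np.linalg.solve(R, B.T)   # G_0 = B R^{-1} B'
H = Q.copy()                      # H_0 = Q;  H_k -> stabilizing DARE solution
for _ in range(30):
    M = np.linalg.inv(np.eye(2) + G @ H)
    Ak, G, H = (Ak @ M @ Ak,                # A_{k+1} = A_k (I + G H)^{-1} A_k
                G + Ak @ M @ G @ Ak.T,      # G_{k+1} = G_k + A_k (I+GH)^{-1} G A_k'
                H + Ak.T @ H @ M @ Ak)      # H_{k+1} = H_k + A_k' H (I+GH)^{-1} A_k

assert np.allclose(H, solve_discrete_are(A, B, Q, R), atol=1e-8)
```

Because each SDA step effectively doubles the horizon, a handful of iterations reaches the accuracy that plain value iteration needs hundreds of steps for, which is the acceleration the paper transplants into the learning setting.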

  • Research Article
  • Cited by 1
  • 10.1109/tcyb.2025.3572104
Reinforcement-Learning-Based Fuzzy Bipartite Consensus for Multiagent Systems: A Novel Scaling Off-Policy Learning Scheme.
  • Sep 1, 2025
  • IEEE transactions on cybernetics
  • Jing Wang + 4 more

The bipartite consensus (BC) issue for nonlinear multiagent systems (NMASs) with unknown system dynamics information is investigated in this article. Initially, the dynamics of NMASs are represented using the Takagi-Sugeno (T-S) fuzzy model. Subsequently, to achieve distributed control, a minmax game policy is introduced, where each agent aims to minimize its performance index while its neighbors attempt to maximize it. Consequently, the BC problem for NMASs is reformulated as a zero-sum game, transforming the controller design into solving a set of game algebraic Riccati equations (GAREs). To solve such equations, a novel scaling off-policy iteration (PI) algorithm is proposed. The key features of the proposed learning algorithm can be outlined in three main aspects: 1) during the learning process, the reliance on system dynamics is relaxed; 2) compared with the PI method, the requirement for initial admissible control policies is eliminated; and 3) a more rapid convergence speed is achieved than traditional value iteration. Finally, the effectiveness and advantages of the proposed method are validated through a simulation example and a series of comparative experiments.

  • Research Article
  • 10.1080/15326349.2025.2540780
Finding an NARE whose minimal nonnegative solution represents first-passage increments in two-dimensional Markov modulated Brownian motion
  • Aug 20, 2025
  • Stochastic Models
  • Jeeho Ryu + 2 more

This article aims to derive the Laplace-Stieltjes transform matrix for the total increment of a one-level process during the first passage of another level process to level zero in the so-called two-dimensional Markov modulated Brownian motion. The process comprises an irreducible continuous-time Markov process with a finite state space, alongside two level processes modulated by the Markov process. These paired level processes can be viewed as a two-dimensional Brownian motion, with Brownian parameters varying based on the Markov process. Because explicit computation is infeasible, we formulate a nonsymmetric algebraic Riccati equation whose minimal nonnegative solution represents the transform matrix through a matrix exponential function. To our knowledge, this achievement is innovative within the context of the two-dimensional Markov modulated Brownian motion.


Copyright 2026 Cactus Communications. All rights reserved.
