Articles published on Quasi-Newton method
3499 Search results
- New
- Research Article
- 10.1017/aer.2025.10125
- Jan 21, 2026
- The Aeronautical Journal
- A.I. Ali + 3 more
Abstract: This study presents a surrogate-model-assisted Quasi-Newton optimisation framework for simultaneously improving the aerodynamic performance and radar stealth characteristics of an unmanned aerial vehicle (UAV). High-fidelity computational fluid dynamics (CFD) and computational electromagnetics (CEM) simulations are integrated through surrogate models generated via a face-centred central composite design within a design of experiments framework. Quadratic polynomial response surface equations are constructed for key aerodynamic and radar cross-section (RCS) metrics, enabling analytical gradient evaluation. A gradient-based quasi-Newton method with Broyden–Fletcher–Goldfarb–Shanno Hessian updates is employed to minimise a scalarised objective function combining normalised maximum lift coefficient, overall RCS and frontal RCS. Constraints are imposed on the lift-to-drag ratio ($L/D \geq 10$) and static longitudinal stability ($C_{m0} \geq 0$). Analytical derivatives from the response surface equations (RSEs) eliminate the need for direct numerical differentiation of CFD/CEM outputs, reducing computational cost and eliminating simulation noise. An interior-point sequential quadratic programming strategy is used to ensure satisfaction of nonlinear constraints during the optimisation process. The optimised UAV design demonstrates a 12% increase in maximum lift coefficient and a 30% reduction in both overall and frontal RCS compared to the baseline configuration. The results are confirmed through high-fidelity CFD and RCS simulations and are further validated experimentally in an anechoic chamber, with close agreement across all measured frequencies. The proposed methodology provides an efficient and experimentally verified approach for integrated aerodynamic and stealth optimisation in UAV design.
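As a concrete illustration of the gradient-based step described above, here is a minimal sketch assuming hypothetical quadratic response-surface coefficients; the paper's actual RSEs, design variables, and scalarisation weights are not given in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic response-surface coefficients for a scalarised
# objective J(x) = c + b.x + x^T A x over two design variables.
A = np.array([[2.0, 0.3],
              [0.3, 1.5]])
b = np.array([-1.0, 0.5])
c = 0.2

def rse_objective(x):
    return c + b @ x + x @ A @ x

def rse_gradient(x):
    # Analytic gradient of the quadratic RSE: no CFD/CEM finite differences.
    return b + (A + A.T) @ x

result = minimize(rse_objective, x0=np.zeros(2), jac=rse_gradient,
                  method="BFGS")  # Broyden-Fletcher-Goldfarb-Shanno updates
print(result.x, result.fun)
```

Supplying the analytic gradient via `jac` is what lets BFGS avoid the noisy finite differencing of simulation outputs that the abstract highlights.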
- New
- Research Article
- 10.1186/s41043-025-01216-3
- Jan 17, 2026
- Journal of health, population, and nutrition
- Dingwei Liu + 4 more
There is growing interest in the relationship between the global incidence of inflammatory bowel disease (IBD) and climate change; however, the precise relationship between long-term temperature change and IBD incidence on a global scale remains unclear. The Global Burden of Disease (GBD) data set was employed, which comprises the age-standardized incidence rate (ASIR) of IBD from 1990 to 2021. The temperature change index was defined as the 32-year mean temperature combined with a metric of maximum temperature variability. A general linear mixed-effects regression model was used to investigate the relationship between temperature variability and the ASIR of IBD. Furthermore, we projected future changes in IBD incidence under four shared socioeconomic pathway scenarios (SSP 126, 245, 370 and 585) for the period 2020-2100. Maximum temperature variability increased approximately threefold over the 32-year period from 1990 to 2021. We found that for each 1°C increase in maximum temperature variability, the risk of IBD increases by 12.1%. We predict that by 2100, global IBD incidence rates will change four times faster under the high-emissions scenario (SSP 585) than under the low-emissions scenario (SSP 126). Our study suggests an association between global temperature variability and IBD incidence. Acknowledging the constraints of ecological data, this observed relationship indicates a need for further research into the underlying mechanisms and underscores the importance of considering climate change in the context of chronic disease prevention.
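A minimal sketch of the kind of mixed-effects regression named above, using synthetic stand-in data; all variable names and values here are hypothetical, not the GBD data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the GBD panel: ASIR of IBD per location-year,
# with a max-temperature-variability covariate (all values hypothetical).
rng = np.random.default_rng(0)
n_loc, n_yr = 20, 32
df = pd.DataFrame({
    "location": np.repeat(np.arange(n_loc), n_yr),
    "max_temp_var": rng.normal(1.0, 0.3, n_loc * n_yr),
})
df["asir"] = 5 + 0.6 * df["max_temp_var"] + rng.normal(0, 0.5, len(df))

# Linear mixed-effects model: random intercept per location.
model = smf.mixedlm("asir ~ max_temp_var", df, groups=df["location"])
fit = model.fit()
print(fit.summary())  # coefficient on max_temp_var ~ per-degree effect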
- New
- Research Article
- 10.12688/f1000research.172826.1
- Jan 16, 2026
- F1000Research
- Farah Ghazi + 2 more
The neural-network design proposed in this article is based on a new, accurate approach to training by unconstrained optimization; among such approaches, quasi-Newton updates are perhaps the most popular general-purpose algorithms. A limited-memory BFGS algorithm is presented for solving large-scale symmetric nonlinear equations, using a line search technique that requires no derivative information. At each iteration, the updated Hessian approximation satisfies the quasi-Newton equation, which has traditionally served as the basis for quasi-Newton methods. Building on the quadratic model used in this article, we introduce a new quasi-Newton update. An innovative feature of this update is its ability to estimate the energy (performance) function to high-order precision, incorporating second-order curvature while employing only function values and gradients. The global convergence of the proposed algorithm is established under suitable conditions. Numerical trials illustrate that the updated approach can be numerically more efficient than comparable traditional methods, and the results show that it is competitive with standard BFGS methods. We show that solving a partial differential equation can be formulated as a multi-objective optimization problem, and we use this formulation to propose several modifications to existing methods. The proposed algorithm is also used to approximate the optimal scaling parameter, eliminating the need to optimize this parameter separately. Our proposed update is tested on a variety of partial differential equations, including a fourth-order nonlinear equation solved in up to four dimensions and the convection-diffusion equation, and compared with existing methods; in all cases the proposed update leads to enhanced accuracy.
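The limited-memory BFGS machinery mentioned above is conventionally implemented with the two-loop recursion; a generic sketch (the standard recursion, not the paper's specific update) follows.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: apply the implicit L-BFGS inverse Hessian to grad.

    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i; the m stored curvature
    pairs stand in for a dense Hessian approximation, keeping memory O(m*n).
    """
    q = grad.copy()
    stack = []
    for s, y in zip(reversed(s_list), reversed(y_list)):  # newest first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        stack.append((rho, alpha, s, y))
    if y_list:  # scale by gamma = s'y / y'y (standard initial Hessian)
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for rho, alpha, s, y in reversed(stack):              # oldest first
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q  # the quasi-Newton search direction is -q
```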
- New
- Research Article
- 10.3390/s26020525
- Jan 13, 2026
- Sensors
- Attila Biró + 2 more
Wearable inertial measurement units (IMUs) provide an accessible means of monitoring fatigue-related changes in running biomechanics, yet most existing methods rely on limited feature sets, lack personalization, or fail to generalize across individuals. This study introduces a bioinformatics-inspired stride sequence modeling framework that integrates spectral–entropy features, sample entropy, frequency-domain descriptors, and mixed-effects statistical modeling to detect fatigue using a single lumbar-mounted IMU. Nineteen recreational runners completed non-fatigued and fatigued 400 m runs, from which we extracted stride-level features and evaluated (1) population-level fatigue classification via global leave-one-participant-out (LOPO) models and (2) individualized fatigue detection through supervised participant-specific models and non-fatigued-only (NF-only) anomaly detection. Mixed-effects models revealed robust and multidimensional fatigue effects across key biomechanical features, with large standardized effect sizes (Cohen’s d up to 1.35) and substantial variance uniquely explained by fatigue (partial R² up to 0.31). Global LOPO machine learning models achieved modest accuracy (55%), highlighting strong inter-individual variability. In contrast, personalized supervised Random Forest classifiers achieved near-perfect performance (mean accuracy 97.7%; mean AUC 0.997), and NF-only One-Class SVMs detected fatigue as a deviation from individual baseline patterns (mean AUC 0.967). Entropy and stride-to-stride variability metrics further demonstrated consistent fatigue-linked increases in movement irregularity and reduced neuromuscular control. These findings show that IMU stride sequences contain highly informative, fatigue-sensitive biomechanical signatures, and that combining bioinformatics-inspired sequence analysis with hybrid statistical and personalized AI models enables both robust population-level insights and highly reliable individualized fatigue monitoring. The proposed framework supports future integration into sports analytics platforms, digital coaching systems, and real-time wearable fatigue detection technologies, underscoring the necessity of personalized fatigue-monitoring strategies in wearable systems.
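A minimal sketch of the leave-one-participant-out protocol described above, with a synthetic feature matrix standing in for the stride-level features (sizes and names are illustrative).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical stride-level feature matrix: rows are strides, columns are
# spectral-entropy / frequency-domain descriptors; groups are participants.
rng = np.random.default_rng(1)
X = rng.normal(size=(1900, 12))
y = rng.integers(0, 2, size=1900)          # 0 = non-fatigued, 1 = fatigued
groups = rng.integers(0, 19, size=1900)    # 19 runners

logo = LeaveOneGroupOut()  # global leave-one-participant-out protocol
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=logo, groups=groups)
print(scores.mean())  # population-level accuracy across held-out runners
```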
- New
- Research Article
- 10.1007/s10067-025-07915-w
- Jan 8, 2026
- Clinical rheumatology
- Pengyu Zhang + 5 more
A high incidence of cardiovascular disease (CVD) is observed among patients with rheumatoid arthritis (RA); although the systemic inflammatory response index (SIRI) has demonstrated predictive utility across inflammatory and cardiovascular disorders, its relationship with CVD risk in RA remains underinvestigated. The present study was partitioned into a training set, a temporal validation set, and an external validation set derived from the NHANES database and a Chinese clinical cohort. Between-group differences were characterized, and multivariable logistic regression models were fitted across SIRI tertiles within each dataset. Restricted cubic spline functions were plotted to visualize dose-response associations, and receiver operating characteristic analysis was employed to quantify discrimination and predictive probabilities. Additionally, the incremental value of augmenting the Framingham risk score with SIRI was evaluated across all three datasets. Variable importance derived from an XGBoost algorithm and SHAP summary plots were used to quantify contributor-specific risk, whereas net reclassification improvement and integrated discrimination improvement were computed to corroborate model superiority. In the training set, CVD prevalence among RA patients was 23.22% and the mean age was 65.5 years; the cohort comprised 167 males (48.55%) and 177 females (51.45%). Median SIRI values were significantly higher in participants with CVD than in those without (1.19 vs. 0.99, P < 0.001). A positive association between SIRI and CVD risk was detected (P for overall = 0.003, P for nonlinear = 0.022); participants in the highest SIRI tertile exhibited a 2-fold increase in CVD risk relative to the lowest tertile (OR = 2.028, 95% CI: 1.343-3.075, P < 0.001). Incorporation of SIRI into the Framingham risk score improved the area under the receiver operating characteristic curve in the training set (0.705 vs. 0.688), the temporal validation set (0.693 vs. 0.678), and the external validation set (0.793 vs. 0.761). Variable importance metrics and SHAP value distributions further indicated that SIRI constitutes a principal determinant of CVD risk in RA. SIRI is significantly and positively associated with CVD risk in RA and effectively enhances the predictive performance of the Framingham risk score when applied to this population. Accordingly, SIRI may serve as an inflammatory biomarker for the early identification and management of CVD risk in RA, thereby informing the development of individualized therapeutic strategies. Key Points • SIRI is significantly and positively associated with CVD risk in RA. • SIRI enhances the predictive performance of the Framingham risk score when applied to RA patients.
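For reference, the SIRI is conventionally computed as neutrophils × monocytes / lymphocytes; below is a sketch of that computation and a tertile-based logistic model on synthetic counts, with the study's covariates omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 344
df = pd.DataFrame({
    "neut": rng.gamma(4, 1.0, n),   # neutrophils, 10^9/L (synthetic)
    "mono": rng.gamma(2, 0.25, n),  # monocytes
    "lymph": rng.gamma(4, 0.5, n),  # lymphocytes
})
# Systemic inflammatory response index.
df["siri"] = df["neut"] * df["mono"] / df["lymph"]
df["siri_tertile"] = pd.qcut(df["siri"], 3, labels=["T1", "T2", "T3"])
df["cvd"] = rng.binomial(1, 0.2 + 0.1 * (df["siri_tertile"] == "T3"), n)

# Multivariable logistic regression across SIRI tertiles (T1 reference);
# a real analysis would add age, sex, and other covariates to the formula.
fit = smf.logit("cvd ~ C(siri_tertile)", df).fit()
print(np.exp(fit.params))  # odds ratios per tertile
```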
- New
- Research Article
- 10.1007/s10439-025-03954-1
- Jan 7, 2026
- Annals of biomedical engineering
- Alex Peebles + 5 more
The aims of this study were to (1) characterize the magnitude and consistency of dog-leash tension during routine walks in an at-home setting and (2) determine the effects of walking a dog on-leash on gait intensity and variability. A wireless system was designed to simultaneously measure dog-leash tension and gait kinematics. Twenty human subjects took their dog for one routine walk in their home environment and repeated the same walk without their dog for comparison. Peak force and peak loading rate were computed during periods of non-gait and during each stride, and characterized using standard statistical metrics. Gait intensity and variability metrics were compared between trials (walking with a dog versus walking alone) using paired-samples t-tests. The largest pulling event recorded occurred during a period of non-gait, with a force of 412.5 N and a loading rate of 2868.7 N/s. Half of our participants experienced leash tension of 125 N or greater when walking their dog. Participants had increased stride time variability (p = 0.035) and decreased power spectral density amplitude in the vertical (p = 0.016) and anteroposterior directions (p = 0.003) when walking with a dog compared to walking alone. Gait intensity metrics did not differ between walking conditions. Some dog owners routinely experience dog-pulling forces that pose a risk of injury to musculoskeletal tissue, directly and indirectly through accidental falls. There is a large amount of between- and within-subject variability in leash-pulling metrics. Walking a dog on-leash challenges dynamic balance, as reflected in increased gait variability.
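A minimal sketch of the leash-tension and gait-variability metrics named above; function and variable names are illustrative, and a real pipeline would filter the signal and segment it by stride first.

```python
import numpy as np

def leash_metrics(force, fs):
    """Peak force (N) and peak loading rate (N/s) from a tension trace.

    force: 1-D leash-tension signal in newtons; fs: sample rate in Hz.
    """
    force = np.asarray(force, float)
    loading_rate = np.gradient(force, 1.0 / fs)   # dF/dt
    return force.max(), loading_rate.max()

def stride_time_variability(stride_times):
    # Within-subject gait variability as the SD of stride durations.
    return np.std(stride_times, ddof=1)
```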
- New
- Research Article
- 10.1007/s11075-025-02284-6
- Jan 2, 2026
- Numerical Algorithms
- Gonglin Yuan + 2 more
A projection DFP quasi-Newton algorithm and its applications in Muskingum model and machine learning
- New
- Research Article
- 10.24086/cuesj.v10n1y2026.pp6-11
- Jan 1, 2026
- Cihan University-Erbil Scientific Journal
- Diyar M Khalil
Research gaps exist in the area of forecasting, since there has been no comparison between different algorithms for training the Feed Forward Neural Network (FFNN) model. One of the main reasons for this gap is that it is difficult to decide which training algorithm to choose. This study proposes to fill that gap by identifying the best method to train the FFNN (both shallow and deep) for univariate monthly time series forecasting. Seven widely known techniques were studied to compare their efficiency: the Quick Propagation Algorithm, Conjugate Gradient Descent Algorithm, Quasi-Newton Algorithm, Limited Memory Quasi-Newton Algorithm, Levenberg-Marquardt Algorithm, Online Back Propagation Algorithm (Stochastic Gradient Descent), and the Batch Back Propagation Algorithm. The study was carried out using Alyuda NeuroIntelligence 2.2, and four statistical measurements (MAPE, RMSE, MAE, and R²) were used for comparison. The results indicate that Conjugate Gradient Descent outperforms the other algorithms, yielding the highest R² and the lowest values for RMSE, MAPE, and MAE, demonstrating its superior effectiveness in training the FFNN for forecasting monthly time series. In contrast, Batch Back Propagation was shown to be the least effective algorithm, with the lowest R² value and the highest values for RMSE, MAPE, and MAE.
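The four comparison measurements are standard; a compact sketch of how they are computed is shown below.

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """MAPE, RMSE, MAE and R^2 used to rank the FFNN training algorithms."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mape = np.mean(np.abs(err / y_true)) * 100      # assumes y_true != 0
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return mape, rmse, mae, r2
```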
- New
- Research Article
- 10.70749/ijbr.v3i12.2769
- Dec 30, 2025
- Indus Journal of Bioscience Research
- Muhammad Numair Kashif + 6 more
This study quantitatively assessed glycemic variability and its clinical implications in patients with poorly controlled diabetes using continuous glucose monitoring (CGM). A quantitative observational design was employed, and data were collected from 150 adult patients with poor glycemic control through purposive sampling. Glycemic variability was evaluated using CGM metrics, including mean glucose, standard deviation (SD), coefficient of variation (CV), time in range (TIR), time above range (TAR), and time below range (TBR). Descriptive results showed elevated mean glucose levels (212.4 ± 38.7 mg/dL), high glucose variability (SD = 62.3 ± 15.9 mg/dL, CV = 29.6 ± 6.1%), reduced TIR (41.8 ± 12.5%), and increased TAR (55.6 ± 13.1%). Correlation analysis revealed significant associations between glycemic variability metrics and adverse clinical outcomes, with strong correlations observed between CV and HbA1c (r = 0.55, p < 0.01) and between TIR and HbA1c (r = −0.66, p < 0.01). Regression analysis further demonstrated that CV significantly predicted HbA1c (β = 0.53, p < 0.001), TBR predicted hypoglycemia episodes (β = 0.58, p < 0.001), and glucose SD predicted hospital admissions (β = 0.45, p < 0.001), with the model explaining 41% of the variance (R² = 0.41) in clinical outcomes. Overall, the findings highlight glycemic variability as an independent and significant determinant of adverse clinical outcomes, emphasizing the importance of integrating variability metrics into routine diabetes management.
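A minimal sketch of the CGM metrics listed above, assuming the consensus 70-180 mg/dL time-in-range band; the study's exact cut-offs are not stated in the abstract.

```python
import numpy as np

def cgm_metrics(glucose_mgdl, low=70, high=180):
    """Standard CGM summary metrics from a vector of sensor readings."""
    g = np.asarray(glucose_mgdl, float)
    mean, sd = g.mean(), g.std(ddof=1)
    return {
        "mean": mean,
        "sd": sd,
        "cv_pct": 100 * sd / mean,                 # coefficient of variation
        "tir_pct": 100 * np.mean((g >= low) & (g <= high)),
        "tar_pct": 100 * np.mean(g > high),
        "tbr_pct": 100 * np.mean(g < low),
    }
```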
- New
- Research Article
- 10.1007/s11684-025-1168-9
- Dec 27, 2025
- Frontiers of medicine
- Jinbo Yu + 8 more
This single-center prospective cohort study establishes ultrafiltration rate variability (quantified by coefficient of variation, UFRCV) as an independent predictor of all-cause and cardiovascular mortality in maintenance hemodialysis patients. While absolute ultrafiltration rate thresholds represent established risk factors, dynamic fluid removal fluctuations remain prognostically uncharacterized. We longitudinally monitored ultrafiltration rate patterns during a 90-day observation period in 202 hemodialysis patients (median follow-up: 38.7 months). Stratification by median UFRCV (0.187) revealed significantly reduced survival among patients with elevated variability. This association demonstrated particular clinical significance in elderly individuals (> 60 years), those with recurrent intradialytic hypotension, and subjects exhibiting elevated predialysis systolic blood pressure variability. Notably, UFRCV exhibited stronger mortality prediction in patients with lower mean ultrafiltration volumes (< 2469 mL), indicating that current static ultrafiltration rate targets inadequately reflect dynamic hemodynamic vulnerability. These findings underscore the imperative to integrate ultrafiltration rate variability metrics into personalized volume management frameworks, a parameter currently absent from dialysis adequacy guidelines. Collectively, UFRCV assessment emerges as a critical indicator of subclinical hemodynamic compromise, providing pivotal insights for refining hemodynamic risk stratification in maintenance hemodialysis populations.
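A sketch of the UFRCV computation and median-split stratification described above; the data structures and units are hypothetical.

```python
import numpy as np

def ufr_cv(ufr_per_session):
    """Coefficient of variation of the ultrafiltration rate (UFRCV).

    ufr_per_session: UFR values (e.g. mL/kg/h) for one patient's dialysis
    sessions over the 90-day observation window.
    """
    ufr = np.asarray(ufr_per_session, float)
    return ufr.std(ddof=1) / ufr.mean()

def high_variability_flags(cohort, cutoff=0.187):
    # Stratify a cohort at the study's median UFRCV (0.187); cohort is a
    # list of per-patient session arrays (hypothetical structure).
    cvs = np.array([ufr_cv(sessions) for sessions in cohort])
    return cvs > cutoff
```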
- Research Article
- 10.3390/electronics15010081
- Dec 24, 2025
- Electronics
- Yifan Wang + 1 more
To address the imbalance between user evaluation noise and algorithmic autonomy in Interactive Evolutionary Design (IED), this study proposes an optimization method integrating subjective and objective weights to alleviate user fatigue and enhance the evolutionary efficiency of Genetic Algorithms (GAs). Building upon the Interactive Genetic Algorithm (IGA), an AHP-CRITIC combined proxy model is introduced, leveraging the bidirectional complementarity of AHP and CRITIC to suppress evaluation noise. An interactive evolutionary design model integrating AHP-CRITIC-IGA is constructed, with armchair design as a case study. Compared to traditional IGA, the combined-weighting-based IGA shows faster convergence and more stable fitness trajectories under the same parameter settings. To avoid over-interpretation, evaluation-noise claims are tied to explicit variability metrics (e.g., fitness standard deviation and inter-rater disagreement) reported in the revised experimental section. The proposed method effectively balances the contradiction between human interaction and algorithmic autonomy in interactive evolutionary design. Furthermore, the framework is inherently generalizable and demonstrates significant potential for adaptation to electronic system design, where optimizing complex, multi-criteria problems under user feedback is paramount. This establishes its relevance to the broader field of intelligent and interactive engineering systems.
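The CRITIC weighting referenced above has a standard closed form; here is a sketch, with a simple convex combination standing in for the paper's unspecified AHP-CRITIC fusion rule.

```python
import numpy as np

def critic_weights(X):
    """Objective criteria weights via CRITIC (contrast * conflict).

    X: decision matrix, rows = design alternatives, cols = criteria,
    assumed benefit-type and min-max normalised here for brevity.
    """
    Xn = (X - X.min(0)) / (X.max(0) - X.min(0))
    sigma = Xn.std(axis=0, ddof=1)          # contrast intensity
    r = np.corrcoef(Xn, rowvar=False)       # inter-criteria correlation
    c = sigma * (1 - r).sum(axis=0)         # information content C_j
    return c / c.sum()

def combined_weights(w_ahp, w_critic, alpha=0.5):
    # One common AHP-CRITIC fusion: a convex combination of the two vectors
    # (the paper's exact fusion rule is not given in the abstract).
    w = alpha * np.asarray(w_ahp) + (1 - alpha) * np.asarray(w_critic)
    return w / w.sum()
```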
- Abstract
- 10.1002/alz70857_097191
- Dec 24, 2025
- Alzheimer's & Dementia
- Adolfo M Garcia
Background: Automated speech and language analysis (ASLA) is gaining momentum in the digital biomarker arena. Yet, limited attempts have been made to validate these metrics against pathological and neurocognitive measures. Here I describe four studies correlating and comparing ASLA metrics with such measures in persons with Alzheimer's dementia (AD), mild cognitive impairment (MCI), and nonfluent/agrammatic variant primary progressive aphasia (nfvPPA). Method: Across studies, we assessed 60 persons with AD, 52 with MCI, 24 with nfvPPA, and 100 healthy controls (HCs). We extracted speech timing (e.g., pause duration, pause duration variability, articulation rate) and word property (e.g., frequency, semantic granularity, semantic variability) metrics from verbal fluency and paragraph reading tasks. We combined inferential statistics and machine learning to establish these metrics' (a) sensitivity to each disorder, (b) correlations with underlying brain changes, (c) diagnostic comparability with standard measures, and (d) robustness in predicting autopsy-confirmed pathology. Results: First, our fluency results from AD show that word property metrics robustly differentiate persons with AD from HCs (AUC = .89), while capturing temporo-frontal atrophy and default-mode-network hypoconnectivity. Second, in a replication study incorporating speech timing features, we obtained similar classification results (AUC = .83), further showing that these outcomes do not differ significantly from those obtained via standard cognitive (AUC = .81) or neuroimaging (AUC = .89) features (p-values > .05). A third study showed that similar results emerge when classifying persons with MCI and HCs (AUC = .80), surpassing cognitive measures (AUC = .71; p < .05) and predicting the volume of temporal brain regions typically affected by this disorder. Finally, a study on nfvPPA showed that speech timing metrics can consistently differentiate patients from HCs (AUC = .95), predicting both frontal atrophy patterns and autopsy-confirmed pathology years before the patients' death. Conclusion: Overall, the evidence suggests that ASLA metrics reveal early-stage markers of AD and related conditions, predicting syndrome-specific brain anomalies and underlying pathology. More importantly, the results match the sensitivity of standard neuropsychological and neuroimaging measures, meeting the non-inferiority criterion for emerging digital tests. Given the affordability and scalability of ASLA, these findings solidify its role as a tool to foster equity in dementia screening and assessment.
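A minimal sketch of how speech-timing metrics such as pause duration and its variability can be derived from voice-activity intervals; this is illustrative only, not the studies' actual pipeline, and articulation rate would additionally require syllable counts.

```python
import numpy as np

def pause_metrics(speech_intervals, total_duration):
    """Speech-timing metrics from voiced (start, end) intervals in seconds.

    Intervals would come from a voice-activity detector; all names here
    are hypothetical.
    """
    iv = np.asarray(speech_intervals, float)
    pauses = iv[1:, 0] - iv[:-1, 1]             # gaps between voiced spans
    speech_time = (iv[:, 1] - iv[:, 0]).sum()
    return {
        "mean_pause": pauses.mean(),
        "pause_sd": pauses.std(ddof=1),         # pause-duration variability
        "speech_fraction": speech_time / total_duration,
    }
```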
- Research Article
- 10.1038/s41746-025-02274-x
- Dec 23, 2025
- NPJ digital medicine
- Giuseppe Lamberti + 12 more
Neuroendocrine neoplasms (NENs) are rare and heterogeneous malignancies requiring multidisciplinary management. Large language models (LLMs) are emerging as decision-support tools, but their role in therapeutic decision-making is largely unexplored. ARTEMIS was a pilot cross-sectional study comparing three configurations (a baseline GPT, a customised GPT with static domain knowledge (GPTs), and a retrieval-augmented GPT (RAG)) against a panel of nine Italian NEN experts using twenty simulated, non-surgical cases. The primary endpoint was non-inferiority for systemic therapy recommendations; secondary endpoints included completeness, explicit uncertainty, parsimony of additional tests, costs, and variability metrics. RAG and GPTs achieved 70.0% agreement versus the expert benchmark (63.8%), meeting the exploratory -10% non-inferiority margin but not the stricter -5% threshold. Baseline GPT reached 60.0% and was not non-inferior. All AI systems consistently produced complete recommendations and expressed uncertainty more often than experts; RAG tended to propose fewer additional tests and lower associated costs. Experts showed greater variability than AI systems, and Ki-67 correlated with disagreement, indicating biological aggressiveness as a source of uncertainty. This exploratory study suggests that LLMs can approximate expert therapeutic reasoning under controlled conditions, but concordance remains limited and external validation in real-world settings is needed before clinical use.
- Research Article
- 10.3390/mca31010001
- Dec 19, 2025
- Mathematical and Computational Applications
- Muhammad Asif + 3 more
Advection–diffusion–reaction-type interface models have wide-ranging applications in environmental science, chemical engineering, and biological systems, particularly in modeling pollutant transport in groundwater, reactive flows, and drug diffusion across biological membranes. This paper presents a novel numerical method for the solution of these models. The proposed method integrates the meshless collocation technique with the finite difference method. The temporal derivative is approximated using a finite difference scheme, while spatial derivatives are approximated using radial basis functions. The interface across the fixed boundary is treated with discontinuous diffusion, advection, and reaction coefficients. The proposed numerical scheme is applied to both linear and non-linear models. The Gauss elimination method is used for the linear models, while the quasi-Newton linearization method is employed to address the non-linearity in non-linear cases. The L∞ error is computed for varying numbers of collocation points to assess the method’s accuracy. Furthermore, the performance of the method is compared with the Haar wavelet collocation method and the immersed interface method. Numerical results demonstrate that the proposed approach is more efficient, accurate, and easier to implement than existing methods. The technique is implemented in MATLAB R2024b software.
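A 1-D sketch of the RBF-collocation-plus-finite-difference idea described above, using a multiquadric kernel with hypothetical parameters; boundary and interface conditions are omitted for brevity.

```python
import numpy as np

# Collocation nodes and a multiquadric RBF with shape parameter c.
x = np.linspace(0.0, 1.0, 41)
c = 0.1
r = x[:, None] - x[None, :]
phi = np.sqrt(r**2 + c**2)              # phi(r) = sqrt(r^2 + c^2)
phi_x = r / phi                         # d/dx of the multiquadric
phi_xx = c**2 / phi**3                  # d2/dx2 of the multiquadric

# Differentiation matrices: derivatives of the RBF interpolant at the nodes.
Dx = phi_x @ np.linalg.inv(phi)
Dxx = phi_xx @ np.linalg.inv(phi)

# One implicit-Euler step of u_t = d*u_xx - a*u_x + k*u; piecewise-constant
# d, a, k across the fixed boundary would encode the interface jump.
d, a, k, dt = 0.1, 1.0, -0.5, 1e-3
L = d * Dxx - a * Dx + k * np.eye(len(x))
u = np.exp(-100 * (x - 0.5) ** 2)       # initial condition
u_new = np.linalg.solve(np.eye(len(x)) - dt * L, u)
```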
- Research Article
- 10.3390/math13244036
- Dec 18, 2025
- Mathematics
- Vladimir Krutikov + 4 more
This paper considers the problem of unconstrained minimization of smooth functions. Despite the high efficiency of quasi-Newton methods such as BFGS, their performance degrades in ill-conditioned problems with unstable or rapidly varying Hessians (for example, in functions with curved ravine structures). This necessitates alternative approaches that rely not on second-derivative approximations but on the topological properties of level surfaces. As a new methodological framework, we propose using a procedure of incomplete orthogonalization in the directions of gradient differences, implemented through the iterative least-squares method (ILSM). Two new methods are constructed based on this approach: a gradient method with the ILSM metric (HY_g) and a modification of the Hestenes–Stiefel conjugate gradient method with the same metric (HY_XS). Both methods are shown to have linear convergence on strongly convex functions and finite convergence on quadratic functions. A numerical experiment was conducted on a set of test functions. The results show that the proposed methods significantly outperform BFGS (by a factor of 2 for HY_g and 3.5 for HY_XS in iteration count) when solving ill-posed problems with varying Hessians or complex level topologies, while providing comparable or better performance even in high-dimensional problems. This confirms the potential of using topology-based metrics alongside classical quasi-Newton strategies.
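For orientation, the classical Hestenes–Stiefel update that HY_XS builds on is sketched below; the paper's ILSM-derived metric is not specified in the abstract and is not reproduced here.

```python
import numpy as np

def hestenes_stiefel_step(g_new, g_old, d_old):
    """Classical Hestenes-Stiefel conjugate-gradient direction update.

    The paper's HY_XS variant applies this inside an ILSM-derived metric;
    only the standard baseline is shown.
    """
    y = g_new - g_old                      # gradient difference
    beta = (g_new @ y) / (d_old @ y)       # HS coefficient
    return -g_new + beta * d_old           # next search direction
```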
- Research Article
- 10.1080/13854046.2025.2601742
- Dec 17, 2025
- The Clinical Neuropsychologist
- Juliana Wall + 3 more
Objective: The present study aimed to better understand key conceptualizations and operationalizations of intraindividual variability (IIV). We expected that differing types and metrics of IIV would relate to one another and predict outcomes (academic achievement) similarly. Method: The sample comprised 238 young adults. IIV was computed within and across six measures – three related to math and three more generally cognitive; in each case, score was separated from response time. We computed three types of IIV (inconsistency, dispersion, and dispersion of inconsistency), across several metrics (standard deviation, coefficient of variability, residualized standard deviation), and assessed their interrelations, and their prediction of academic achievement. Results: Differing metrics of variability were related to one another, but variably so. For prediction, whether or not inconsistency IIV metrics were significant was highly dependent on the measure they were derived from, with or without the primary score for a given measure also included. For dispersion of inconsistency and dispersion, variability metrics were often significant, though this was eliminated in most cases when score was also included in models. Conclusions: By concurrently examining multiple metrics and types of IIV within the same set of measures, this study highlights the need to (a) clarify the type of IIV utilized and why; (b) clarify the rationale for the kinds of measures used to compute IIV, particularly dispersion; and (c) include score alongside timing. Doing so will likely improve the generalizability of IIV findings, and prompt future research avenues, both psychometric- (e.g. simulations) and clinical-related (e.g. across ages and populations).
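A minimal sketch of the IIV types named above (inconsistency, coefficient of variability, and dispersion), with illustrative function names.

```python
import numpy as np

def inconsistency(trial_scores):
    # Intraindividual SD across repeated trials of one measure.
    return np.std(trial_scores, ddof=1)

def coefficient_of_variability(trial_scores):
    # SD scaled by the person's own mean, removing overall-level differences.
    t = np.asarray(trial_scores, float)
    return t.std(ddof=1) / t.mean()

def dispersion(measure_scores, norms_mean, norms_sd):
    # SD across *different* measures after z-scoring each against norms.
    z = (np.asarray(measure_scores, float) - norms_mean) / norms_sd
    return np.std(z, ddof=1)
```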
- Research Article
- 10.3390/s25247652
- Dec 17, 2025
- Sensors (Basel, Switzerland)
- Vinicius Cavassano Zampier + 3 more
Highlights: Optical motion capture (OMC) methods outperformed inertial measurement units (IMUs) in the accurate detection of gait events, and IMU-based methods exhibited higher intra-subject variability in detecting heel-strike and toe-off events compared with OMC and ground reaction force data. Main findings: OMC-based methods (OMC1 and OMC2) showed lower estimation errors (RMSE and range) for heel-strike and toe-off detection than IMU-based methods (IMU1 and IMU2), and IMU methods presented a higher intra-subject coefficient of variation (CoV) for stance phase durations, indicating greater variability. Implications: OMC remains the preferred choice for applications requiring high temporal accuracy, such as synchronization with EMG or EEG, despite the portability advantages of IMUs; the higher intra-subject variability of IMU-based event detection may limit reliability in clinical and longitudinal studies, particularly for segmenting biological signals. Gait events (instants of heel strike and toe-off) are essential for extracting spatiotemporal parameters and segmenting biological signals (electromyography (EMG) and electroencephalography (EEG)) by gait cycle. While force platforms and optical motion capture are ideal for identifying gait events, IMUs are more widely applicable. This study compared the accuracy and variability of IMU- and OMC-based gait event detection methods against gold-standard ground reaction force (GRF) detection. Seventeen healthy adults (31 ± 8 years) walked along a 10 m walkway instrumented with force plates. Foot kinematics were recorded using two retro-reflective markers on each foot and an IMU on the sacrum. Gait events were identified using two OMC-based (OMC1, OMC2) and two IMU-based (IMU1, IMU2) algorithms. Accuracy was evaluated using root-mean-square error (RMSE) relative to GRF, and within-subject variability was assessed using the coefficient of variation (CoV). For heel strikes, OMC1 yielded a lower RMSE (14 ms) than IMU1 (50 ms) and IMU2 (61 ms) (p < 0.001). For toe-offs, OMC1 demonstrated a lower RMSE (17 ms), differing from IMU1 (54 ms) and IMU2 (74 ms) (p < 0.001). IMU2 exhibited the greatest variability (CoV = 24 ms) compared with OMC1 (7 ms) and IMU1 (9 ms) (p < 0.001). Our results highlight lower accuracy and higher variability in gait event detection using sacrum-mounted IMUs. Despite their convenience, researchers should consider the limitations of IMUs for EMG/EEG data segmentation. Future studies validating gait event detection methods should report some type of variability metric.
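A sketch of the accuracy and variability measures used above; matched event times and stance durations are assumed as inputs, and names are illustrative.

```python
import numpy as np

def event_rmse(detected, reference):
    """RMSE (s) between detected gait-event times and GRF reference times.

    Assumes events are already matched one-to-one between methods.
    """
    err = np.asarray(detected, float) - np.asarray(reference, float)
    return np.sqrt(np.mean(err ** 2))

def stance_cov(stance_durations):
    # Within-subject coefficient of variation of stance-phase durations.
    d = np.asarray(stance_durations, float)
    return d.std(ddof=1) / d.mean()
```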
- Research Article
- 10.1111/tesg.70055
- Dec 15, 2025
- Tijdschrift voor Economische en Sociale Geografie
- Cristian Incaltarau
Abstract In a highly interconnected global landscape, where shocks can have far-reaching spill-over effects, the emphasis on enhancing resilience has naturally grown, capturing the interest of policymakers. This study examines regional (NUTS2) economic resilience over the Great Recession and the COVID-19 pandemic. Our resilience framework incorporates diverse economic dimensions and distinct resilience stages, while also accounting for regional business cycle specificities. Relying on a spatially augmented cross-sectional panel specification, our findings confirm human capital as a crucial determinant of regional crisis response, particularly during the Great Recession. Additionally, regional economies that performed well during the Great Recession also tended to be more resilient to the COVID-19 shock, suggesting long-term benefits from building resilience capacity. The heterogeneous drivers of resilience across the two shocks, their unpredictable nature, and the variable resilience metrics suggest that preparing for specific shocks may be ineffective. Instead, policy interventions should strengthen transversal regional capacities, such as human capital and institutional quality, for better responses across a wider range of stressors, in line with regional development theory.
- Research Article
- 10.1080/10717544.2025.2597906
- Dec 12, 2025
- Drug Delivery
- Jing Wang + 9 more
ABSTRACT To evaluate the comparative effectiveness, safety, and patient acceptance of needle-free injectors and insulin pens through a meta-analysis. A thorough literature search was performed across multiple databases, including CNKI, the Weipu Database, the Wanfang Database, PubMed, Embase, and The Cochrane Library, for randomized controlled trials (RCTs) on the use of needle-free injectors and insulin pens in diabetes treatment, covering the period from database inception until June 2024. Five researchers carefully screened the literature in accordance with the inclusion and exclusion criteria, assessed study quality with validated tools, and conducted the meta-analysis using RevMan 5.4. This analysis included 22 publications encompassing 22 studies, involving a total of 2,580 patients, with 1,290 in the needle-free group and 1,290 in the insulin pen group. The findings suggest that needle-free injectors were superior to insulin pens in improving treatment efficacy, reducing local adverse reactions, and increasing patient acceptability, with statistically significant differences. However, no significant variations were observed in continuous glucose monitoring outcomes. Needle-free injectors can improve certain aspects of glycemic control, such as reducing HbA1c, FPG, and 2hPPG; reduce local adverse reactions; and increase patient acceptability. However, improvements in glycemic variability metrics were not statistically significant, indicating that further research is needed to fully assess their long-term safety and therapeutic effectiveness. In terms of systemic safety, the risk of hypoglycemia in the needle-free group was not significantly different from that in the insulin pen group (P = 0.14). Additionally, our publication bias analysis revealed potential biases in the outcomes related to hypoglycemia and insulin dosage.
- Research Article
- 10.3390/electronics14244832
- Dec 8, 2025
- Electronics
- Adeb Salh + 6 more
The advent of Sixth Generation (6G) wireless communication systems demands unprecedented data rates, ultra-low latency, and massive connectivity to support emerging applications such as extended reality, digital twins, and ubiquitous intelligent services. These stringent requirements call for massive Multiple-Input Multiple-Output (m-MIMO) systems with hundreds or even thousands of antennas, which introduce substantial challenges for signal detection algorithms. Conventional linear detectors, especially linear Minimum Mean Square Error (MMSE) detectors, face prohibitive computational complexity due to high-dimensional matrix inversions, and their performance remains inherently restricted by the limitations of linear processing. This study proposes an Iterative Signal Detection (ISD) algorithm that addresses these limitations by combining Deep Q-Network (DQN) and quasi-Newton techniques. The method incorporates Broyden-Net, a quasi-Newton scheme coupled with the DQN that can be trained faster and with less memory than the baseline model on spatially correlated channels, to improve m-MIMO detection. The proposed technique supports the computational efficiency required by realistic 6G systems and outperforms linear detectors. Simulation findings show that the DQN-improved quasi-Newton algorithm outperforms traditional algorithms: by combining reward design, limited-memory updates, and adaptive interference mitigation, it shortens convergence time by 60% and increases robustness to correlated fading.
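For context, the linear MMSE baseline whose high-dimensional matrix inverse the iterative detector is designed to avoid has a standard closed form; a sketch follows.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE detection baseline for the m-MIMO uplink.

    x_hat = (H^H H + sigma^2 I)^{-1} H^H y; the K x K inverse is the
    complexity bottleneck that iterative DQN/quasi-Newton detectors target.
    """
    K = H.shape[1]
    G = H.conj().T @ H + noise_var * np.eye(K)   # regularised Gram matrix
    return np.linalg.solve(G, H.conj().T @ y)    # solve instead of invert
```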