Articles published on Stochastic gradient descent
3698 Search results
- Research Article
- 10.1007/s12031-025-02463-z
- Jan 19, 2026
- Journal of molecular neuroscience : MN
- Himanshi Gupta + 4 more
Huntington's disease (HD) is a rare, inherited neurodegenerative disorder caused by expanded CAG repeats in the huntingtin gene. The HD field still lacks detailed knowledge of validated drug targets, limiting the effectiveness of classical methods. To address this gap, we applied an integrated computational approach, combining machine learning (ML) with transcriptomic analysis, to identify novel therapeutic targets. Differential expression analysis was performed on eight publicly available datasets, comprising 209 healthy control and 193 Huntington's disease patient samples, followed by ML-based screening of differentially expressed genes (DEGs). Feature selection using mRMR and RFE, in combination with four classifiers (Linear SVC, Stochastic Gradient Descent, Logistic Regression, and Ridge Regression), yielded 138 DEG candidates. Subsequent literature curation, drug-target analysis, and gene regulatory network (GRN) construction highlighted several key genes, including TXNIP, TNIP3, HTR1D, ADRB1, and FOXP1, which may play pivotal roles in disease progression. Furthermore, our findings highlight the contribution of non-neuronal mechanisms, such as endothelial dysfunction, vascular neurodegeneration, thermoregulation, metabolic imbalance, and impaired phagocytosis, providing a broader perspective on HD pathophysiology. This comprehensive strategy advances knowledge of therapeutic targets, molecular pathways, transcription factors (TFs), and complex gene interactions beyond classical HD processes. In summary, the study identifies a promising set of novel drug targets with potential implications for HD therapy.
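The screening stage pairs feature selection with four linear classifiers. As a rough illustration of that kind of pipeline, here is a hedged scikit-learn sketch using RFE with the same four model families; the simulated expression matrix, gene counts, and consensus rule are invented stand-ins, not the authors' code:

```python
# Hypothetical sketch of RFE-based screening of differentially expressed genes,
# in the spirit of the pipeline described above; all data are simulated.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression, RidgeClassifier, SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(402, 2000))   # 402 samples (209 controls + 193 HD), 2000 DEGs
y = rng.integers(0, 2, size=402)   # 0 = control, 1 = HD (random placeholder labels)

classifiers = {
    "linear_svc": LinearSVC(),
    "sgd": SGDClassifier(random_state=0),
    "logistic": LogisticRegression(max_iter=1000),
    "ridge": RidgeClassifier(),
}

selected = {}
for name, clf in classifiers.items():
    # Recursive feature elimination keeps the genes each linear model ranks highest.
    rfe = RFE(estimator=clf, n_features_to_select=138, step=0.1).fit(X, y)
    selected[name] = set(np.flatnonzero(rfe.support_))
    score = cross_val_score(clf, X[:, rfe.support_], y, cv=5).mean()
    print(f"{name}: CV accuracy {score:.2f}")

# One plausible consensus rule: genes retained by all four classifiers.
consensus = set.intersection(*selected.values())
print(len(consensus), "consensus candidate genes")
```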
- Research Article
- 10.65136/jati.v7i1.133
- Jan 15, 2026
- Journal of Applied Technology and Innovation
- Ahmad Awad + 5 more
Convolutional Neural Networks (CNNs) are widely used today in research on image classification and image identification. This research explores one type of CNN via transfer learning, implementing the DenseNet-161 model to classify 133 different dog breeds across a total of 8,351 images split among training, validation, and testing; 7,515 of these images are used for training and validation at a ratio of 89:11. The aim is to compare the accuracy and performance of the Rectified Linear Unit (ReLU), Leaky ReLU, and Exponential Linear Unit (ELU) activation functions together with the Adaptive Moment Estimation (Adam), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD) optimizers at learning rates (lr) of 0.001, 0.01, and 0.1.
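A transfer-learning grid like this is straightforward to set up in PyTorch. The sketch below is a hedged approximation under stated assumptions: it freezes a pretrained DenseNet-161 backbone, attaches a new head whose activation can be ReLU, Leaky ReLU, or ELU, and takes one optimizer step; the head architecture and batch contents are illustrative, not the paper's.

```python
# Hypothetical PyTorch sketch of DenseNet-161 transfer learning with a
# configurable activation function, optimizer, and learning rate.
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_breeds=133, activation=nn.ReLU):
    model = models.densenet161(weights="DEFAULT")  # downloads pretrained weights
    for p in model.parameters():
        p.requires_grad = False                    # freeze transferred features
    model.classifier = nn.Sequential(              # new head for 133 breeds
        nn.Linear(model.classifier.in_features, 512),
        activation(),                              # ReLU / LeakyReLU / ELU
        nn.Linear(512, num_breeds),
    )
    return model

model = build_model(activation=nn.ELU)
# Swap in torch.optim.Adam, Adagrad, or SGD with lr in {0.001, 0.01, 0.1}.
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                    # dummy batch of images
labels = torch.randint(0, 133, (8,))               # dummy breed labels
loss = criterion(model(x), labels)
loss.backward()
optimizer.step()
```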
- Research Article
- 10.1002/gepi.70030
- Jan 15, 2026
- Genetic Epidemiology
- Jingchen Ren + 1 more
ABSTRACT Genome-wide association studies (GWAS) have been instrumental in identifying genetic variants associated with complex traits and diseases, including Alzheimer's disease (AD). However, traditional GWAS approaches often focus on European populations, which may lead to loss of power and limit the generalizability of findings across diverse ancestries. On the other hand, LS-Imputation, a nonparametric trait imputation method, leverages GWAS summary statistics and genotype data to impute missing traits, which can then be used for GWAS and other downstream analyses. Although LS-Imputation has been applied successfully to European populations, its performance in non-European populations would be hindered by smaller sample sizes, leading to reduced imputation accuracy. To address these limitations, we propose two novel variants of LS-Imputation, LS-Imputation-Combined and LS-Imputation-Transfer, designed to integrate multi-ancestry GWAS data and enhance imputation performance. LS-Imputation-Combined optimally combines GWAS summary statistics from multiple ancestries, while LS-Imputation-Transfer sequentially refines imputed trait values across ancestries using stochastic gradient descent. We evaluate these methods using data from the UK Biobank and the Alzheimer's Disease Sequencing Project (ADSP), first applying them to high-density lipoprotein (HDL) cholesterol levels as a proof-of-concept before focusing on imputing AD status in Black individuals for genetic association analysis. Our results demonstrate that integrating multi-ancestry GWAS data improves trait imputation accuracy, with LS-Imputation-Transfer achieving the highest performance.
- Research Article
- 10.2196/73041
- Jan 14, 2026
- JMIRx Bio
- James A Casaletto + 12 more
Abstract Background Spaceflight presents unique environmental stressors, such as microgravity and radiation, that significantly affect biological systems at the molecular, cellular, and organismal levels. Astronauts face an increased risk of developing cancer due to exposure to ionizing radiation and other spaceflight-related factors. Age plays a crucial role in the body's response to the cellular stresses that lead to cancer, with younger organisms generally exhibiting more efficient response mechanisms than older ones. The vast majority of research investigating breast cancer risk from spaceflight uses cell lines exposed to simulated radiation and microgravity, but cell lines cannot capture the combinatorial response expressed across tissues, organs, and systems to real radiation and microgravity in space. Objective The primary objective of this in silico observational study is to characterize the molecular response to spaceflight of in vivo murine mammary tissue. We use an ensemble of linear binary classifiers to identify the molecular biomarkers enriched in this response using mice flown on the International Space Station. The secondary objective is to determine if age plays a role in this response. Methods The National Aeronautics and Space Administration (NASA) Open Science Data Repository has curated transcriptomic data obtained from 10 BALB/cAnNTac female mice flown on the International Space Station and 33 control mice kept on Earth (OSD-511). In this observational study focused on two age groups (old/young), we used an ensemble of 4 machine learning binary classifiers with linear decision boundaries (logistic regression, support vector machine, stochastic gradient descent, and single-layer perceptron) to analyze gene expression profiles and predict age (old vs young) and condition (spaceflight vs ground control). Using the genes our ensemble identified as most predictive, we performed pathway enrichment analysis to investigate the molecular pathways involved in spaceflight-related health risks, particularly in the context of breast cancer. Results The pathway enrichment analyses revealed age-differentiated responses to spaceflight (false discovery rate-adjusted q values < .05). Among the 10 mice flown in space, younger mice exhibited significantly enriched pathways related to lipid metabolism and inflammatory stress signaling. All space-flown mice demonstrated evidence of adaptation in retinoid metabolism and peroxisome proliferator-activated receptor signaling in response to microgravity and radiation relative to their 33 ground control counterparts. Conclusions Spaceflight-induced breast cancer risk manifests through distinct age-specific mechanisms: younger individuals face risk through maladaptive metabolic hyperactivity and oxidative cycling, while older individuals are vulnerable due to impaired stress responses and accumulated metabolic dysfunction. Both age groups ultimately face elevated carcinogenic potential through different but converging pathways. These findings highlight the critical role of age in modulating the response to spaceflight-induced stress and suggest that these molecular pathways may contribute to differential outcomes in tissue homeostasis, metabolic disorders, and breast cancer susceptibility.
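The ensemble here is four linear binary classifiers over gene expression profiles. As a hedged illustration of that design, the following scikit-learn sketch wires the same four model families into a majority-vote ensemble and ranks genes by averaged coefficient magnitude; the simulated expression matrix, the sample split, and the ranking rule are assumptions for demonstration only:

```python
# Hypothetical sketch of a majority-vote ensemble of four linear classifiers
# for spaceflight vs. ground-control labels; the expression data are simulated.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression, Perceptron, SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(43, 1000))       # 43 mice (10 flight + 33 ground), 1000 genes
y = np.r_[np.ones(10), np.zeros(33)]  # 1 = spaceflight, 0 = ground control

voter = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("svm", LinearSVC()),
        ("sgd", SGDClassifier(random_state=1)),
        ("perceptron", Perceptron(random_state=1)),
    ],
    voting="hard",                    # majority vote across linear boundaries
)
pipe = make_pipeline(StandardScaler(), voter).fit(X, y)

# Rank genes by their normalized |coefficient| averaged across the members;
# the top genes would feed a downstream pathway enrichment analysis.
coefs = [est.coef_.ravel() for est in pipe[-1].named_estimators_.values()]
ranking = np.mean([np.abs(c) / np.abs(c).max() for c in coefs], axis=0)
top_genes = np.argsort(ranking)[::-1][:50]
print(top_genes[:10])
```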
- Research Article
- 10.1364/oe.571743
- Jan 13, 2026
- Optics Express
- Lianchuang Ding + 6 more
We propose what we believe to be a new approach to achieve on-demand rapid switching of fiber transverse modes by utilizing a mode-control laser system based on a photonic lantern. The phase conjugate hologram of the transverse mode is loaded onto the liquid crystal spatial light modulator (LC-SLM) to provide an evaluation function for the stochastic parallel gradient descent (SPGD) algorithm of the mode-switching laser system. This system can achieve the fundamental mode (FM) and high-order modes (HOMs) with high purity, mode superposition, and mode switching according to different practical application requirements. The LP01, LP11, and orbital angular momentum (OAM) modes are obtained with mode purities greater than 90%, and the superimposed output of the LP01 and LP11 modes is realized. The output can be switched between modes in less than 5 ms. The approach holds great potential for applications in mode division multiplexing (MDM), industrial materials processing, and high-power fiber lasers.
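SPGD, the control algorithm named above, estimates a gradient from paired random perturbations of the control parameters and the resulting change in a measured metric. A minimal NumPy sketch follows, with a toy quadratic standing in for the hologram-based evaluation function; the gain, perturbation amplitude, and parameter count are illustrative:

```python
# Minimal sketch of stochastic parallel gradient descent (SPGD); the quadratic
# metric is a stand-in for the measured mode-purity evaluation function.
import numpy as np

def measured_metric(u):
    # Placeholder for the LC-SLM hologram-overlap measurement.
    return -np.sum((u - 0.3) ** 2)

rng = np.random.default_rng(0)
u = np.zeros(64)            # control voltages / phase coefficients
gain, amp = 0.5, 0.05       # update gain and perturbation amplitude

for _ in range(2000):
    delta = amp * rng.choice([-1.0, 1.0], size=u.shape)  # parallel random perturbation
    dJ = measured_metric(u + delta) - measured_metric(u - delta)
    u += gain * dJ * delta  # gradient estimate from the two-sided metric change
print(measured_metric(u))   # approaches 0 as u converges to 0.3
```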
- Research Article
- 10.1080/02331934.2026.2614724
- Jan 13, 2026
- Optimization
- William Piat + 3 more
In this paper, we aim to solve distributionally robust optimization problems motivated by applications in robust machine learning. For this, we propose a novel SGD-type algorithm that is computationally tractable and provably convergent without any convexity/concavity assumptions, unlike most works in the literature. To achieve this, the distributionally robust problem is first approximated by a point-wise counterpart at controlled accuracy. Second, to avoid solving the generally intractable inner maximization problem, we use entropic regularization and Monte Carlo integration. The approximation errors induced by these steps are quantified and can therefore be controlled by making the regularization parameter decay and the number of integration samples increase at an appropriate rate. This paves the way to minimizing our objective with stochastic (sub)gradient descent, for which convergence guarantees to critical points are established. To support these theoretical findings, compelling numerical experiments on simulated and benchmark datasets are carried out and confirm the practical benefits of our approach.
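Reading the recipe literally (smooth the inner maximum with an entropic log-sum-exp regularizer over Monte Carlo samples, then run SGD while the regularization decays and the sample count grows) suggests a one-dimensional toy like the hedged sketch below; the loss, sampling distribution, and schedules are assumptions, not the paper's:

```python
# Hedged toy sketch: entropic smoothing of a worst-case loss via log-sum-exp
# over Monte Carlo samples, minimized by SGD with decaying regularization.
import numpy as np

rng = np.random.default_rng(0)

theta, lr = 5.0, 0.05
for t in range(1, 501):
    eps = 1.0 / np.sqrt(t)             # entropic regularization decays with t
    n = 20 * t                         # Monte Carlo sample count grows with t
    xi = rng.normal(1.0, 0.5, size=n)  # samples of the uncertain parameter
    l = (theta - xi) ** 2              # toy per-sample loss
    w = np.exp((l - l.max()) / eps)    # entropic weights: a soft-max over samples
    w /= w.sum()                       #   that emphasizes the worst-case losses
    grad = np.sum(w * 2.0 * (theta - xi))  # gradient of the smoothed objective
    theta -= lr * grad                 # SGD step on the smoothed robust loss
print(theta)                           # settles near the worst-case-balanced point
```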
- Research Article
- 10.1038/s41598-025-34413-5
- Jan 7, 2026
- Scientific reports
- Sharan Mourya + 2 more
In this paper, we apply quantum machine learning (QML) to predict the distribution of stock prices of multiple assets using a contextual quantum neural network. Our approach captures recent trends to predict future stock price distributions, moving beyond traditional models that focus on entire historical data. Utilizing the principles of quantum superposition, we introduce a new training technique called the quantum batch gradient update (QBGU), which accelerates the standard stochastic gradient descent (SGD) in quantum applications and improves convergence. We then propose a quantum multi-task learning (QMTL) architecture, specifically the share-and-specify ansatz, that integrates task-specific operators controlled by quantum labels, enabling the simultaneous and efficient training of multiple assets on the same quantum circuit as well as efficient portfolio representation with logarithmic overhead in the number of qubits. Through extensive experimentation on S&P 500 data for Apple, Google, Microsoft, and Amazon stocks, we demonstrate that our approach outperforms quantum single-task learning (QSTL) models by effectively capturing inter-asset correlations. Our findings highlight the transformative potential of QML in financial applications, paving the way for more advanced, resource-efficient quantum algorithms in stock price prediction and other complex financial modeling tasks.
- Research Article
- 10.1007/s10463-025-00967-4
- Jan 3, 2026
- Annals of the Institute of Statistical Mathematics
- Michael Kohler + 1 more
Rate of convergence of over-parametrized deep neural network regression estimates learned by stochastic gradient descent
- Research Article
- 10.24086/cuesj.v10n1y2026.pp6-11
- Jan 1, 2026
- Cihan University-Erbil Scientific Journal
- Diyar M Khalil
Research gaps exist in the area of forecasting because different algorithms for training the Feed Forward Neural Network (FFNN) model have not been compared. One of the main reasons for this gap is that it is hard to decide which training algorithm to choose. This study proposes to fill that gap by identifying the best method to train the FFNN (both shallow and deep) for univariate monthly time series forecasting. A total of seven widely known techniques were studied and compared for efficiency: the Quick Propagation Algorithm, Conjugate Gradient Descent Algorithm, Quasi-Newton Algorithm, Limited Memory Quasi-Newton Algorithm, Levenberg-Marquardt Algorithm, Online Back Propagation Algorithm (Stochastic Gradient Descent), and Batch Back Propagation Algorithm. The study was carried out using Alyuda NeuroIntelligence 2.2, and four statistical measurements (MAPE, RMSE, MAE, and R²) were used for comparison. The results indicate that Conjugate Gradient Descent outperforms the other algorithms, yielding the highest R² and the lowest values for RMSE, MAPE, and MAE, proving its superior effectiveness in training the FFNN for forecasting monthly time series. Batch Back Propagation was shown to be the least effective algorithm, with the lowest R² value and the highest values for RMSE, MAPE, and MAE.
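The four comparison metrics are standard and easy to reproduce. A hedged sketch follows using scikit-learn's implementations (an assumption; the study itself computed them in Alyuda NeuroIntelligence 2.2), with invented monthly values:

```python
# Sketch of the four comparison metrics (MAPE, RMSE, MAE, R²) on toy data.
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

y_true = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])  # monthly series
y_pred = np.array([110.5, 120.2, 130.8, 127.1, 123.0, 133.4])  # FFNN forecasts

mae = mean_absolute_error(y_true, y_pred)
mape = mean_absolute_percentage_error(y_true, y_pred) * 100    # as a percentage
rmse = np.sqrt(mean_squared_error(y_true, y_pred))             # root of the MSE
r2 = r2_score(y_true, y_pred)
print(f"MAE={mae:.2f}  MAPE={mape:.2f}%  RMSE={rmse:.2f}  R2={r2:.3f}")
```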
- Research Article
- 10.1007/978-3-032-03398-7_38
- Jan 1, 2026
- Advances in experimental medicine and biology
- Stavros-Theofanis Miloulis + 8 more
The growing interest in improved rehabilitation systems and assistive technologies for individuals with motor impairments necessitates new applications of deep learning approaches for Brain-Computer Interface (BCI) implementation. This study investigates the application of deep learning techniques, specifically the Hierarchical 3D Convolutional Neural Network (H3DCNN) model, for enhancing classification systems utilizing electroencephalography (EEG) data. Topographic maps were extracted from EEG signals in a real motion task experiment involving four different motions. The H3DCNN model was then employed in a step-wise fashion to classify and decode the EEG signals, demonstrating its effectiveness in distinguishing between different movement intentions. Moreover, three different optimizers were implemented, including RMSprop, Adam, and Stochastic Gradient Descent (SGD), to further assess and enhance model performance. The findings indicate that the integration of advanced deep learning techniques can significantly enhance the accuracy and reliability of BCI systems, with RMSprop and SGD showing superior results in terms of accuracy. Finally, our results illustrate the possibility of decoding neural mechanisms via deep learning paradigms, paving the way for future developments in BCI applications and aiming to improve the quality of life for individuals with motor impairments.
- Research Article
- 10.1007/978-1-0716-4949-7_5
- Jan 1, 2026
- Methods in molecular biology (Clifton, N.J.)
- Amauri Duarte Da Silva + 1 more
This chapter describes the Gradient Descent method to predict the inhibition of protein targets. Protein systems are well suited to study with artificial intelligence techniques, including machine learning methods. Here, we employ two variants of the Gradient Descent method: Batch Gradient Descent and Stochastic Gradient Descent. The latter is available in the Scikit-Learn library (SGDRegressor class). We can integrate Scikit-Learn methods into pipelines to build regression models addressing protein targets employed for drug discovery. In this work, we adopt a hands-on approach and show how to make a regression model to predict the inhibition of cyclin-dependent kinase 2, a protein target for anticancer drugs. We combine pair interaction data determined using the docking program AutoDock Vina with the SGDRegressor class implemented in the program SAnDReS 2.0 to create models to determine enzyme inhibition. All Jupyter Notebooks and datasets examined in this work are on GitHub: https://github.com/azevedolab/docking#readme. The program SAnDReS 2.0 is available at https://github.com/azevedolab/sandres.
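The SGDRegressor workflow the chapter teaches looks roughly like the following hedged sketch, in which simulated features stand in for the AutoDock Vina pair-interaction terms and the scaling/split choices are assumptions rather than the chapter's exact protocol:

```python
# Minimal sketch of a Scikit-Learn SGDRegressor regression pipeline;
# random features stand in for docking pair-interaction energy terms.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 12))           # pair-interaction terms (simulated)
w = rng.normal(size=12)
y = X @ w + 0.1 * rng.normal(size=300)   # inhibition values (simulated)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),  # SGD needs scaled features to converge
                      SGDRegressor(max_iter=1000, tol=1e-4, random_state=0))
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```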
- Research Article
- 10.1109/tnsre.2026.3652858
- Jan 1, 2026
- IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society
- Imad Eddine Tibermacine + 2 more
Electroencephalographic (EEG) decoding relies heavily on second-order (covariance) structure that lives on the manifold of symmetric positive-definite (SPD) matrices. Conventional deep networks in Euclidean space ignore this geometry, distorting geodesic relations between covariances; classical Riemannian pipelines respect SPD metrics but typically use fixed projections and a single global tangent embedding, which limits task adaptivity and incurs cubic costs in the channel dimension. We propose a fully geometry-consistent architecture that preserves manifold structure end-to-end while remaining trainable at scale. A compact depthwise-separable convolutional neural network (CNN) produces features whose regularized covariances lie on the SPD manifold. A learnable orthonormal projection, optimized on the Stiefel manifold via Riemannian stochastic gradient descent (SGD) with QR-factorization (QR) retraction, reduces dimensionality without breaking positive-definiteness and preserves an eigenvalue floor. We then perform tangent-space graph-SPD aggregation on a scalp k-nearest-neighbor graph: neighbor covariances are transported to the reference tangent space, attention-averaged, and mapped back via the exponential map, followed by a log-Euclidean mapping and linear softmax classification. This Stiefel → Graph-SPD → log chain explains why full geometric consistency matters: it avoids Euclidean shortcuts, keeps all intermediates SPD, and makes log/exp costs cubic in the reduced rank d. In cross-subject evaluation on three public datasets, the model attains 83.2%/81.5%/79.7% accuracy with improved macro-F1, strong separability (macro-AUROC ≈ 0.90), and well-calibrated probabilities (ECE ≤ 0.04), outperforming strong Euclidean CNNs and Riemannian baselines while remaining computationally pragmatic.
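The Stiefel-manifold step described above (Riemannian SGD with QR retraction) can be sketched in a few lines of NumPy: project the Euclidean gradient onto the tangent space at W, take a descent step, and retract with a QR factorization so the projection stays orthonormal. The toy objective, dimensions, and learning rate below are assumptions for illustration:

```python
# Hedged NumPy sketch of Riemannian SGD on the Stiefel manifold with QR retraction.
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 8                                   # channels -> reduced rank d
W = np.linalg.qr(rng.normal(size=(n, d)))[0]   # initial orthonormal projection
A = rng.normal(size=(n, n))
A = A @ A.T                                    # toy SPD target (e.g., a covariance)

lr = 0.01
for _ in range(200):
    G = -2 * A @ W                             # Euclidean grad of -trace(W^T A W)
    # Tangent-space projection for the Stiefel manifold.
    G_riem = G - W @ (W.T @ G + G.T @ W) / 2
    Q, R = np.linalg.qr(W - lr * G_riem)       # QR retraction back to the manifold
    W = Q * np.sign(np.diag(R))                # fix the sign ambiguity of QR
print(np.allclose(W.T @ W, np.eye(d), atol=1e-8))  # projection stays orthonormal
```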
- Research Article
- 10.1016/j.neunet.2026.108570
- Jan 1, 2026
- Neural networks : the official journal of the International Neural Network Society
- Engin Cemal Mengüç + 2 more
A novel backpropagation algorithm based on negated kurtosis loss for training shallow, convolutional, and deep neural networks.
- Research Article
- 10.1016/j.aml.2025.109735
- Jan 1, 2026
- Applied Mathematics Letters
- Naiyu Jiang + 3 more
A stochastic column-block gradient descent method for solving nonlinear systems of equations
- Research Article
- 10.1049/pel2.70156
- Jan 1, 2026
- IET Power Electronics
- Farha Khan + 2 more
ABSTRACT The increasing adoption of electric vehicles (EVs) necessitates efficient and eco-friendly charging solutions. Solar-powered EV charging offers a sustainable alternative to grid-dependent systems by reducing carbon emissions. However, the intermittent nature of solar irradiance demands robust maximum power point tracking (MPPT) algorithms to ensure optimal power extraction. Conventional MPPT methods often face challenges like slow convergence and limited tracking accuracy. To address this, the proposed study introduces a deep learning-based MPPT framework using long short-term memory (LSTM) networks for intelligent, data-driven control of a boost converter in a solar-powered EV charging system. The LSTM model is optimized using stochastic gradient descent with momentum and trained on hourly irradiance and temperature data obtained from NASA/POWER for Jaipur, India. The controller's performance is benchmarked against traditional algorithms: INC, PSO, and ANN. Results show that the LSTM-based MPPT achieved superior tracking efficiency (97.63%), low current ripple (0.21%), and minimal prediction error (RMSE: 0.59%). This LSTM-tuned solar system is then employed to charge a 5 kW EV through a boost converter and a dual active bridge converter. The entire system is validated in MATLAB/Simulink and implemented in real time on an OPAL-RT OP4512 platform, confirming its effectiveness for intelligent and reliable solar-powered EV charging.
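Training an LSTM controller with stochastic gradient descent with momentum is a standard setup. The hedged PyTorch sketch below shows the general shape (a sequence of hourly irradiance/temperature readings in, a duty-cycle-like output, SGD with momentum); the network size, data, and target are invented, and the real study ran in MATLAB/Simulink:

```python
# Hypothetical sketch of an LSTM regressor trained with SGD + momentum,
# loosely mirroring the MPPT setup described above; all data are simulated.
import torch
import torch.nn as nn

class MPPTNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicts a duty-cycle-like control

    def forward(self, x):                  # x: (batch, hours, [irradiance, temp])
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # regression from the last time step

model = MPPTNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.MSELoss()

x = torch.randn(16, 24, 2)                 # 16 sequences of 24 hourly readings
y = torch.rand(16, 1)                      # target control signal in [0, 1]
for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```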
- Research Article
- 10.3390/mi17010058
- Dec 31, 2025
- Micromachines
- Huizhen Yang + 4 more
To address the limitations of conventional wavefront sensorless adaptive optics (AO) systems regarding iteration efficiency and convergence speed, this study conducts an experimental validation of a model-based wavefront sensorless AO approach. A physical experimental platform was established, consisting of a light source, a Shack–Hartmann wavefront sensor, a deformable mirror (DM), and an imaging detector. Wavefront aberrations under different turbulence levels were employed as correction objects to evaluate the performance of the model-based wavefront sensorless AO system. For comparative analysis, experimental results obtained using the classical stochastic parallel gradient descent (SPGD) control algorithm are also presented. Under identical software and hardware conditions, the experimental results show that as the turbulence level increases, the SPGD-based wavefront sensorless AO system requires more iterations and converges more slowly. In contrast, the model-based wavefront sensorless AO system demonstrates improved applicability and robustness in correcting large aberrations under strong turbulence, maintaining an almost constant convergence speed and achieving better correction performance. These findings offer theoretical insights and technical support for the real-time correction of large wavefront aberrations.
- Research Article
- 10.11648/j.ajai.20250902.31
- Dec 29, 2025
- American Journal of Artificial Intelligence
- Huseyin Cekirge
The Cekirge Global σ-Regularized Deterministic Method introduces a non-iterative learning framework in which model parameters are obtained through a single closed-form computation rather than through gradient-based optimization. For more than half a century, supervised learning has relied on gradient descent, stochastic gradient descent, and conjugate gradient descent—methods requiring learning rates, batching rules, random initialization, and stopping heuristics, whose outcomes vary with floating-point resolution, operating-system effects, and hardware drift. As dimensions increase or matrices become ill-conditioned, these iterative processes frequently diverge or yield inconsistent results. The σ-Regularized Deterministic Method replaces this instability with a σ-regularized quadratic formulation whose stationary point is analytically unique; even very small σ values eliminate ill-conditioning and ensure machine-independent reproducibility. Learning is reframed not as a search, but as the direct computation of an equilibrium determined by the structural geometry of the data matrix. To address the common reviewer concern that stability must be demonstrated across progressive system sizes, the method is validated sequentially—from small 5×5 and 8×8 matrices, whose full algebra is explicitly inspectable, through 20×20, 100×100, and ultimately 1000×1000. Across all scales, the deterministic σ-solution remains stable and identical across platforms, whereas gradient-based algorithms begin to degrade even at moderate sizes. In practice, the σ-Regularized Deterministic Method requires only a single algebraic evaluation, eliminating the repeated matrix passes and energy expenditure inherent to iterative algorithms. Its runtime scales linearly with the number of partitions rather than the number of iterations, yielding substantial time and energy savings even in very large systems.
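The abstract does not give the exact formulation, but the described idea (a σ-regularized quadratic whose unique stationary point is computed in one closed-form pass) matches the familiar ridge-style normal equations. The sketch below is therefore an assumption-laden illustration of the general principle, not the Cekirge method itself:

```python
# Hedged sketch of a closed-form sigma-regularized solve: one algebraic
# evaluation, no learning rate, no initialization, no stopping heuristic.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, n))           # possibly ill-conditioned data matrix
w_true = rng.normal(size=n)
y = X @ w_true

sigma = 1e-8                          # even a tiny sigma tames ill-conditioning
# Unique stationary point of the regularized quadratic:
#   w = (X^T X + sigma * I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + sigma * np.eye(n), X.T @ y)
print(np.max(np.abs(w - w_true)))     # small residual from a single pass
```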
- Research Article
- 10.46519/ij3dptdi.1814384
- Dec 28, 2025
- International Journal of 3D Printing Technologies and Digital Industry
- Özgür Dündar + 1 more
In this study, a portable electrocardiogram (ECG) device was developed using the Arduino Portenta embedded system board and the AD8232 sensor to enable continuous and real-time cardiac monitoring. The designed system acquires ECG signals through surface electrodes and transfers them wirelessly to a computer, where the data are recorded and analyzed in real time using MATLAB. The main objective of this research is to automatically detect cardiac arrhythmias by integrating a compact ECG acquisition system with machine learning (ML) algorithms. The training dataset was obtained from the MIT-BIH Arrhythmia Database on PhysioNet, while test data were collected in the laboratory using the proposed device from 20 individuals (10 healthy and 10 with arrhythmia). ECG signals were segmented into 60-second intervals, preprocessed, normalized, and analyzed to extract time-domain and statistical features. Several feature selection methods (GINI, ReliefF, Information Gain, Chi-square, and FCBF) were applied, and various ML classifiers were trained, including Logistic Regression, Support Vector Machine (SVM), Naïve Bayes, k-Nearest Neighbours (kNN), Decision Tree, Stochastic Gradient Descent (SGD), Random Forest, Gradient Boosting, and Artificial Neural Network (ANN). The results showed that the Neural Network achieved the highest performance with an accuracy of 94.5% and an AUC of 99.2%, followed by Logistic Regression and SVM. The integration of a self-designed portable ECG device with intelligent ML algorithms provides a low-cost and efficient solution for real-time arrhythmia detection, supporting early diagnosis and continuous monitoring within the Internet of Medical Things (IoMT) framework.
- Research Article
- 10.53560/ppasa(62-4)845
- Dec 24, 2025
- Proceedings of the Pakistan Academy of Sciences: A. Physical and Computational Sciences
- Muhammad Aqeel + 4 more
All over social media and internet platforms, Roman Urdu content is extremely casual, inconsistent, and linguistically diversified, which makes it hard to interpret with conventional Natural Language Processing (NLP) techniques. This paper proposes a robust topic-classification framework for Roman Urdu, integrating Stochastic Gradient Descent (SGD)-optimized machine learning, dictionary-assisted stemming, and custom lexical normalization to overcome these challenges. The method consists of structured preprocessing, reduction of repeated letters, rule-based normalization, extraction of TF-IDF features, and the evaluation of several classifiers, including Logistic Regression (LR), Support Vector Machine (SVM), Naïve Bayes (NB), Decision Tree (DT), and K-Nearest Neighbors (KNN), alongside the proposed SGD model. The proposed classifier outperformed all baseline models with an accuracy of 95% in experiments on a four-class dataset comprising Politics, Sports, Education, and Religion. The results demonstrate the importance of stemming and normalization for improving feature quality and reducing orthographic variability in low-resource languages. Overall, this study provides a reproducible and efficient pipeline for Roman Urdu topic classification and thus lays a concrete foundation for further Roman Urdu NLP research.
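The classification stage (TF-IDF features feeding an SGD-trained linear model) maps directly onto a scikit-learn pipeline. In the hedged sketch below, the Roman Urdu examples, n-gram settings, and hyperparameters are invented for illustration, and the paper's normalization and stemming steps would run before vectorization:

```python
# Hypothetical sketch of TF-IDF features feeding an SGD-optimized classifier;
# the training texts are invented Roman Urdu examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "wazir e azam ne parliament se khitab kia",      # Politics
    "pakistan ne match jeet lia zabardast bowling",  # Sports
    "university ka dakhla test agle hafte hoga",     # Education
    "ramzan ke roze aur ibadat ka mahina",           # Religion
]
labels = ["politics", "sports", "education", "religion"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    SGDClassifier(loss="hinge", alpha=1e-4, random_state=0),
)
clf.fit(texts, labels)
print(clf.predict(["wazir e azam ka bayan"]))  # should lean toward 'politics'
```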
- Research Article
- 10.31449/inf.v49i37.10767
- Dec 24, 2025
- Informatica
- Hongmei Liu + 1 more
In computer graphics, photorealistic lighting simulation and efficient rendering technology have always faced the dual challenges of computational complexity and visual fidelity. Traditional global illumination algorithms rely on extensive ray sampling and iterative calculation; for example, path tracing needs to emit thousands of rays per pixel to converge. Moreover, joint optimization problems with hundreds of dimensions, such as light source parameters and material reflectivity in dynamic scenes, often cause traditional gradient descent methods to fall into local optima. The L-BFGS algorithm stores historical gradient information through a limited-memory strategy and constructs an iterative approximation to the inverse of the Hessian matrix. It maintains the fast convergence characteristics of second-order optimization while reducing memory consumption to O(mn), where m is the number of memory steps and n the number of parameters, which provides a new approach for large-scale lighting parameter optimization. Experimental results demonstrate that the L-BFGS optimization achieves convergence of the energy function to 10⁻⁶ within 500 iterations in scenes with dynamic light sources and complex materials, reducing computation time by 38% compared to traditional BFGS. When integrated into NeRF training, the hybrid L-BFGS strategy reduces geometric reconstruction error to 0.12 mm, improving accuracy by 52% over pure stochastic gradient descent. In real-time rendering, GPU-accelerated L-BFGS optimizes shadow mapping parameters for 256 virtual point lights per frame, maintaining 60 FPS at 4K resolution with 1.2 GB VRAM usage. For mobile AR, a quantized L-BFGS variant achieves material reflection calibration in 8.3 ms with ±0.5% azimuth accuracy, while the Monte Carlo-L-BFGS framework reduces indirect illumination precomputation from 14.6 hours to 2.3 hours with 98.7% visual fidelity. These technological advances provide a new paradigm for integrating movie-level offline and real-time rasterized rendering pipelines and promote the development of efficient visualization in emerging fields such as digital twins and the metaverse.
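The limited-memory update with m stored steps is exactly what SciPy's L-BFGS-B implementation exposes through its maxcor option. The hedged sketch below minimizes a toy quadratic "lighting energy" over a few hundred parameters; the energy function and dimensions are stand-ins, not the paper's renderer:

```python
# Hedged sketch of limited-memory BFGS via SciPy on a toy lighting energy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 300                               # e.g., light-source + material parameters
A = rng.normal(size=(n, n)) / np.sqrt(n)
target = rng.normal(size=n)

def energy(p):
    r = A @ p - target                # residual of the toy illumination model
    return 0.5 * r @ r

def grad(p):
    return A.T @ (A @ p - target)     # analytic gradient keeps iterations cheap

res = minimize(energy, np.zeros(n), jac=grad, method="L-BFGS-B",
               options={"maxiter": 500, "maxcor": 10})  # m = 10 memory steps
print(res.fun, res.nit)               # final energy and iteration count
```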