Articles published on Square matrix
257 Search results
Sort by Recency
- Research Article
- 10.1093/biomet/asaf081
- Nov 9, 2025
- Biometrika
- A Dharamshi + 4 more
Abstract Recent work has explored data thinning, a generalization of sample splitting that involves decomposing a (possibly matrix-valued) random variable into independent components. In the special case of an n × p random matrix with independent and identically distributed Np(μ, Σ) rows, Dharamshi et al. (2025a) provide a comprehensive analysis of the settings in which thinning is or is not possible: briefly, if Σ is unknown, then one can thin provided that n > 1. However, in some situations a data analyst may have access only to summary statistics of the data, e.g., due to privacy considerations. While the sample mean follows a Gaussian distribution, the sample covariance follows (up to scaling) a Wishart distribution, for which no thinning strategies have yet been proposed. In this note, we fill this gap: we show that it is possible to generate two or more independent data matrices with independent Np(μ, Σ) rows, based only on the sample mean and sample covariance matrix. These independent data matrices can either be used directly within a train-test paradigm, or can be used to derive independent summary statistics. Furthermore, they can be recombined to yield the original sample mean and sample covariance. The key insight that enables this development is an algorithm that decomposes a Wishart random matrix into a matrix square root with independent and identically distributed Gaussian rows.
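The recombination guarantee can be illustrated in miniature with numpy. The sketch below (our illustration, not the paper's thinning algorithm) rebuilds a data matrix whose sample mean and sample covariance exactly equal given summary statistics, by recoloring whitened iid Gaussian rows with a Cholesky square root:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 0.5]])

# Summary statistics of some dataset (here simulated).
X = rng.multivariate_normal(mu, Sigma, size=n)
xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)            # sample covariance, (n-1) denominator

# Rebuild a data matrix with *exactly* these summaries: whiten fresh iid
# Gaussian rows, then recolor them with a Cholesky square root of S.
Z = rng.standard_normal((n, p))
Zc = Z - Z.mean(axis=0)                # centered
C = np.linalg.cholesky(np.cov(Zc, rowvar=False))
Zw = Zc @ np.linalg.inv(C).T           # empirical covariance is now exactly I
L = np.linalg.cholesky(S)
Y = xbar + Zw @ L.T                    # sample mean xbar, sample covariance S
```

The reconstructed `Y` matches the observed summaries exactly, which is the "recombined" direction; producing *independent* components is the paper's contribution.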
- Research Article
- 10.3390/e27100999
- Sep 25, 2025
- Entropy
- Jasper A Vrugt + 1 more
A fundamental limitation of maximum likelihood and Bayesian methods under model misspecification is that the asymptotic covariance matrix of the pseudo-true parameter vector is not the inverse of the Fisher information, but rather the sandwich covariance matrix A⁻¹BA⁻¹, where A and B are the sensitivity and variability matrices, respectively, evaluated at the pseudo-true parameter vector for the training data record. This paper makes three contributions. First, we review existing approaches to robust posterior sampling, including the open-faced sandwich adjustment and magnitude- and curvature-adjusted Markov chain Monte Carlo (MCMC) simulation. Second, we introduce a new sandwich-adjusted MCMC method. Unlike existing approaches that rely on arbitrary matrix square roots, eigendecompositions or a single scaling factor applied uniformly across the parameter space, our method employs a parameter-dependent learning rate that enables direction-specific tempering of the likelihood. This allows the sampler to capture directional asymmetries in the sandwich distribution, particularly under model misspecification or in small-sample regimes, and yields credible regions that remain valid when standard Bayesian inference underestimates uncertainty. Third, we propose information-theoretic diagnostics for quantifying model misspecification, including a strictly proper divergence score and scalar summaries based on the Frobenius norm, Earth mover’s distance, and the Herfindahl index. These principled diagnostics complement residual-based metrics for model evaluation by directly assessing the degree of misalignment between the sensitivity and variability matrices A and B. Applications to two parametric distributions and a rainfall-runoff case study with the Xinanjiang watershed model show that conventional Bayesian methods systematically underestimate uncertainty, while the proposed method yields asymptotically valid and robust uncertainty estimates.
Together, these findings advocate for sandwich-based adjustments in Bayesian practice and workflows.
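The sensitivity/variability mismatch is easy to see in a toy setting. Below is a generic numpy sketch of the A⁻¹BA⁻¹ sandwich construction for a misspecified linear model with heteroskedastic noise (an illustration of the estimator only, not the paper's MCMC method; all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(-1, 1, n)
# Heteroskedastic noise: a textbook form of model misspecification
# for a homoskedastic working model.
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5 + np.abs(x))

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Sensitivity matrix A (average Hessian of the squared-error loss) and
# variability matrix B (second moment of the per-observation score).
A = X.T @ X / n
B = (X * resid[:, None]).T @ (X * resid[:, None]) / n
Ainv = np.linalg.inv(A)
sandwich = Ainv @ B @ Ainv / n                    # robust covariance of beta-hat
classical = np.linalg.inv(X.T @ X) * resid.var()  # valid only if homoskedastic
```

Under correct specification A and B coincide (up to scaling) and the two covariance estimates agree; here they do not, which is the misalignment the paper's diagnostics quantify.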
- Research Article
- 10.1007/s10915-025-03040-7
- Sep 3, 2025
- Journal of Scientific Computing
- Pei-Chang Guo + 2 more
Computing the matrix square root with the doubling algorithm
- Research Article
- 10.1002/rrq.70032
- Jul 1, 2025
- Reading Research Quarterly
- Tianai Zhang
ABSTRACTMuch research has explored the intersection of music and language but has mostly focused on first language (L1) acquisition in early childhood. The few studies on second language (L2) learning and music have mainly concentrated on listening and speaking skills. Therefore, this study aims to address this gap by investigating how music perception relates to L2 reading comprehension regarding the acoustic dimension. This study involves Chinese adult undergraduates who learn English as a second language (ESL). In a supervised classroom, 139 participants completed the background information questionnaire, music perception test, phonological awareness test, and auditory working memory test on electronic devices. They took the English reading comprehension test with pen and paper. They were divided into three groups based on their past and present involvement in musical activities: no training, basic training, and advanced training. Quantitative data were analyzed using a correlation matrix, MANOVA, and partial least squares structural equation modeling (PLS‐SEM). Participants with basic and advanced musical training outperformed their untrained counterparts in phonological awareness. Specifically, musical training enhanced their ability to segment sounds in words and blend word parts. The auditory‐based PLS model revealed that phonological awareness directly predicted L2 reading comprehension, and that music perception affected reading comprehension directly and indirectly through auditory working memory. This study offers valuable evidence to support further investigation into the effectiveness of music‐related interventions for L2 reading.
- Supplementary Content
- 10.1108/jal-08-2024-0196
- Apr 29, 2025
- Journal of Accounting Literature
- Riccardo Cimini + 1 more
Purpose This paper aims to present the instruments and methodologies available to help detect any form of misrepresentation of financial information in annual reports. The paper presents a theoretical framework and makes a call for future research in this field of study. Design/methodology/approach The literature review identifies impression and earnings management as two possible forms of misrepresentation. For the former, we present the specific reporting choices available to mislead the interpretation of the firm’s performance; for the latter, we discuss different model specifications to detect window-dressing behaviors. Findings Insiders can opportunistically represent the firm’s performance by using pictures, violating graph construction principles or exercising reporting discretion. Manipulations of the substance of annual reports are detectable using specifications that, using an evolutionary approach, this paper classifies into “first-generation” models, ratio analysis and “second-generation” models. Originality/value The paper provides novel classifications of the instruments and methodologies that help detect impression and earnings management in annual reports. Using a matrix, our theoretical framework shows that existing protocols are not suitable for investigating possible relationships between these two forms of misrepresentation, i.e., whether impression management and earnings management are interchangeable or complementary in annual reports. Thus, future studies investigating both forms of misrepresentation might position their offerings in one of the matrix squares or introduce protocols that can detect both forms of misrepresentation. For practice, the paper has particular implications for standard setters. Specifically, in addition to improving the existing accounting standards to obstruct earnings management, they should also issue a standard that prevents impression management, which is currently lacking.
- Research Article
- 10.3390/app15084155
- Apr 10, 2025
- Applied Sciences
- Sixiang Xu + 3 more
Bilinear pooling, as an aggregation approach that outputs second-order statistics of deep learning features, has demonstrated effectiveness in a wide range of visual recognition tasks. Among major improvements to bilinear pooling, matrix square root normalization—applied to the bilinear representation matrix—is regarded as a crucial step for further boosting performance. However, most existing works leverage Newton’s iteration to perform normalization, which becomes computationally inefficient when dealing with high-dimensional features. To address this limitation, through a comprehensive analysis, we reveal that both the distribution and magnitude of eigenvalues in the bilinear representation matrix play an important role in the network performance. Building upon this insight, we propose a novel approach, namely RegCov, which regularizes the eigenvalues when the normalization is absent. Specifically, RegCov incorporates two regularization terms that encourage the network to align the current eigenvalues with the target ones in terms of their distribution and magnitude. We implement RegCov across different network architectures and run extensive experiments on the ImageNet1K and fine-grained image classification benchmarks. The results demonstrate that RegCov maintains robust recognition across diverse datasets and network architectures while achieving superior inference speed compared to previous works.
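For reference, the Newton-type normalization that RegCov avoids at inference time can be sketched with a coupled Newton–Schulz iteration, a standard variant of Newton's method in this literature (a generic sketch with illustrative dimensions, not the paper's exact pipeline):

```python
import numpy as np

def newton_schulz_sqrt(A, iters=15):
    """Approximate the principal square root of an SPD matrix via coupled
    Newton-Schulz iterations; pre-normalizing by the trace puts the
    eigenvalues in (0, 1], which guarantees convergence."""
    p = A.shape[0]
    norm = np.trace(A)
    Y = A / norm
    Z = np.eye(p)
    I = np.eye(p)
    for _ in range(iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z          # Y -> sqrt(A/norm), Z -> its inverse
    return Y * np.sqrt(norm)

rng = np.random.default_rng(2)
F = rng.standard_normal((64, 8))     # 64 local features of dimension 8
B = F.T @ F / F.shape[0]             # bilinear (second-order) representation
B += 1e-3 * np.eye(8)                # keep eigenvalues away from zero
S = newton_schulz_sqrt(B)
```

The per-image cost of these repeated matrix products is exactly the inference overhead that motivates regularizing the eigenvalues during training instead.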
- Research Article
- 10.3390/a18030169
- Mar 14, 2025
- Algorithms
- Rasha A Farghali + 4 more
Poisson regression is used to model count response variables. The method has a strict assumption that the mean and variance of the response variable are equal, while, in practice, the case of overdispersion is common. Also, under multicollinearity, the model parameter estimates obtained with the maximum likelihood estimator are adversely affected. This paper introduces a new biased estimator that extends the modified Kibria–Lukman estimator to the Poisson–Inverse-Gaussian regression model to deal with overdispersion and multicollinearity in the data. The superiority of the proposed estimator over the existing biased estimators is presented in terms of matrix and scalar mean square error. Moreover, the performance of the proposed estimator is examined through a simulation study. Finally, on a real dataset, the superiority of the proposed estimator over other estimators is demonstrated.
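The variance inflation that motivates such biased estimators can be demonstrated with a plain ridge-type shrinkage in an ordinary linear model (a deliberately simplified stand-in: the paper's estimator targets the Poisson–Inverse-Gaussian likelihood, and the penalty constant here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta = 100, np.array([1.0, 1.0])
mse_ols = mse_ridge = 0.0
for _ in range(500):
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.01 * rng.standard_normal(n)       # nearly collinear regressors
    X = np.column_stack([x1, x2])
    y = X @ beta + rng.standard_normal(n)
    G = X.T @ X                                   # nearly singular Gram matrix
    b_ols = np.linalg.solve(G, X.T @ y)
    b_ridge = np.linalg.solve(G + 1.0 * np.eye(2), X.T @ y)  # biased estimator
    mse_ols += np.sum((b_ols - beta) ** 2)
    mse_ridge += np.sum((b_ridge - beta) ** 2)
print(f"MSE  OLS: {mse_ols / 500:.2f}   ridge: {mse_ridge / 500:.2f}")
```

The unbiased estimator's mean square error explodes along the near-null direction of the Gram matrix, while a small amount of bias removes almost all of that variance — the same trade-off the paper's matrix and scalar MSE comparisons formalize.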
- Research Article
- 10.1038/s41598-025-85930-2
- Jan 17, 2025
- Scientific Reports
- Anjana Hevaganinge + 16 more
The development of optical sensors for label-free quantification of cell parameters has numerous uses in the biomedical arena. However, using current optical probes requires the laborious collection of sufficiently large datasets that can be used to calibrate optical probe signals to true metabolite concentrations. Further, most practitioners find it difficult to confidently adapt black box chemometric models that are difficult to troubleshoot in high-stakes applications such as biopharmaceutical manufacturing. Replacing optical probes with contactless short-wave infrared (SWIR) hyperspectral cameras allows efficient collection of thousands of absorption signals in a handful of images. This high repetition allows for effective denoising of each spectrum, so interpretable linear models can quantify metabolites. To illustrate, an interpretable linear model called L-SLR is trained using small datasets obtained with a SWIR HSI camera to quantify fructose, viable cell density (VCD), glucose, and lactate. The performance of this model is also compared to other existing linear models, namely Partial Least Squares (PLS) and Non-negative Matrix Factorization (NMF). Using only 50% of the dataset for training, reasonable test performance in terms of mean absolute error (MAE) and correlation (r²) is achieved for glucose (r² = 0.88, MAE = 37 mg/dL), lactate (r² = 0.93, MAE = 15.08 mg/dL), and VCD (r² = 0.81, MAE = 8.6 × 10⁵ cells/mL). Further, these models can also quantify a metabolite such as fructose in the presence of a high background concentration of a similar metabolite, such as glucose, with almost identical chemical interactions in water. The model achieves reasonable quantification performance for both large (100–1000 mg/dL; r² = 0.92, MAE = 25.1 mg/dL) and small (< 60 mg/dL; r² = 0.85, MAE = 4.97 mg/dL) fructose concentrations in complex media like Fetal Bovine Serum (FBS).
Finally, the model provides sparse interpretable weight matrices that hint at the underlying solution changes that correlate to each cell parameter prediction.
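Of the linear baselines mentioned, NMF is the most compact to sketch: the classical Lee–Seung multiplicative updates below factor a nonnegative "spectra × bands" matrix (random data and illustrative dimensions; this is the baseline technique, not the paper's L-SLR model):

```python
import numpy as np

rng = np.random.default_rng(8)
V = rng.random((40, 25))          # 40 "spectra" over 25 "bands", nonnegative
r = 4                             # number of latent components

# Multiplicative updates keep W and H nonnegative by construction.
W, H = rng.random((40, r)), rng.random((r, 25))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Nonnegativity is what makes the factors readable as additive spectral components, which is the interpretability angle the paper pushes further with sparse weight matrices.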
- Research Article
- 10.54021/seesv5n2-518
- Nov 12, 2024
- STUDIES IN ENGINEERING AND EXACT SCIENCES
- Nafissa Rezki + 1 more
In this work, we have conducted a comparative study among several machine learning techniques with the aim of selecting the best one for classifying faults affecting the compressor system to enable smart monitoring. This study encompasses various machine learning techniques, including Support Vector Machine, k-nearest neighbor, Decision Tree, Naive Bayes, AdaBoost, and Bag ensembles. To determine the optimal classification technique, we applied three distinct criteria: the confusion matrix, error histogram, and mean square error through cross-validation. Based on these criteria, the results indicate a tie for the top position between two classification models: Decision Tree and Bag ensemble. To solidify our choice of a single model, we employed the new AutoML technique to automatically identify the most suitable machine learning classification model for our case study. We evaluated this approach using process data obtained from an operational industrial centrifugal compressor. Consequently, the results presented in this work affirm that Decision Tree is the superior technique for classifying faults in the 3MCL compressor.
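The three selection criteria are simple to state in code. A self-contained numpy sketch with made-up labels (the study's classes are real compressor faults):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, k):
    """k x k matrix: rows = true class, columns = predicted class."""
    M = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        M[t, p] += 1
    return M

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 0])

M = confusion_matrix(y_true, y_pred, 3)                    # criterion 1
hist = np.bincount(np.abs(y_true - y_pred), minlength=3)   # criterion 2: error histogram
mse = np.mean((y_true - y_pred) ** 2)                      # criterion 3: mean square error
accuracy = np.trace(M) / M.sum()
```

In practice the mean square error would be computed under cross-validation, as in the study; the single split here is only to show the arithmetic.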
- Research Article
- 10.1016/j.patrec.2024.10.015
- Oct 1, 2024
- Pattern Recognition Letters
- Nadeem Iqbal Kajla + 5 more
A histogram-based approach to calculate graph similarity using graph neural networks
- Research Article
- 10.62210/clinscinutr.2024.88
- Aug 26, 2024
- Clinical Science of Nutrition
- Hülya Ulusoy + 5 more
Objective: Evaluating physicians’ attitudes towards malnutrition and clinical nutrition in hospitalized patients is crucial for implementing an optimal nutritional care process and preventing hospital malnutrition. The aim of this study is to develop a scale that evaluates physicians’ attitudes towards malnutrition in hospitalized patients. Methods: Based on the existing literature on clinical nutrition and the clinical experience of experts in this field, a 5-point Likert-type attitude scale consisting of 12 items was developed. The number of factors in the exploratory factor analysis was determined by parallel analysis based on the polychoric correlation matrix, with unweighted least squares as the factor extraction method. Results: There are 8 items in the 1st factor (Physician duties) and 4 items in the 2nd factor (Non-physician duties). The Cronbach’s alpha and McDonald’s omega coefficients of the scale were 0.72 and 0.81, respectively; for the sub-dimensions they were 0.78 and 0.85 for the 1st factor, and 0.66 and 0.75 for the 2nd factor. Conclusion: The attitude scale for the clinical nutrition care process of hospitalized patients is an instrument with good psychometric properties for measuring physicians’ attitudes related to the clinical nutrition care process.
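Of the reliability coefficients reported, Cronbach's alpha is the easiest to reproduce. The numpy sketch below computes it for a simulated 12-item scale (illustrative data only; McDonald's omega, which needs a factor model, is omitted):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an n_respondents x k_items score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(3, 1, size=(200, 1))              # shared attitude trait
items = latent + rng.normal(0, 1, size=(200, 12))     # 12 correlated items
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Alpha rises with the share of total-score variance explained by inter-item covariance, which is why the 8-item sub-dimension in the study scores higher than the 4-item one.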
- Research Article
- 10.1139/cjp-2024-0070
- Jul 17, 2024
- Canadian Journal of Physics
- Martin Houde + 2 more
In this tutorial, we summarize four important matrix decompositions commonly used in quantum optics, namely the Takagi/Autonne, Bloch–Messiah/Euler, Iwasawa, and Williamson decompositions. The first two of these decompositions are specialized versions of the singular-value decomposition when applied to symmetric or symplectic matrices. The third factors any symplectic matrix in a unique way in terms of matrices that belong to different subgroups of the symplectic group. The last one instead gives the symplectic diagonalization of real, positive definite matrices of even size. While proofs of the existence of these decompositions exist in the literature, we review explicit constructions to implement these decompositions using standard linear algebra packages and functionalities such as singular-value, polar, Schur, and QR decompositions, and matrix square roots and inverses.
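As a taste of these constructions, the easy special case of the Takagi/Autonne decomposition for a *real* symmetric matrix can be built directly from an eigendecomposition (the general complex-symmetric case needs the SVD-based recipe the tutorial reviews; this sketch is ours, not theirs):

```python
import numpy as np

def takagi_real_symmetric(A):
    """Autonne-Takagi factorization A = U @ diag(d) @ U.T with d >= 0 and
    U unitary, for a real symmetric A.  A negative eigenvalue -|lam|
    becomes +|lam| by absorbing a factor i into the corresponding column
    of U, since i**2 = -1."""
    lam, Q = np.linalg.eigh(A)
    phases = np.where(lam >= 0, 1.0 + 0j, 1j)
    U = Q * phases                  # scale column j by phases[j]
    return U, np.abs(lam)

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M + M.T                         # real symmetric, indefinite in general
U, d = takagi_real_symmetric(A)
```

Note the transpose (not conjugate transpose) in `U @ diag(d) @ U.T`: that is what distinguishes Takagi from an ordinary unitary diagonalization.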
- Research Article
- 10.1016/j.ipl.2024.106517
- Jun 12, 2024
- Information Processing Letters
- Bhisham Dev Verma + 3 more
Improving compressed matrix multiplication using control variate method
- Research Article
- 10.5269/bspm.65373
- May 31, 2024
- Boletim da Sociedade Paranaense de Matemática
- Mustapha Rachidi + 1 more
This paper concerns an alternative approach for solving second-order matrix differential equations $X''(t)=AX'(t)+BX(t)$, where $A$, $B$ are square matrices of order ${d\times d}$. This approach is based on properties of the matrix square root and linear difference equations. We establish various new results and explicit formulas for the solutions of this type of matrix differential equation. Moreover, because of the non-commutativity between $A$ and $B$, the matrix fundamental system plays an important role. Finally, our results and their robustness are validated by providing some examples and applications.
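The square-root connection can be made concrete in the commuting special case: if $A$ and $B$ commute, one solvent of the quadratic matrix equation $R^2 = AR + B$ is $R = A/2 + \sqrt{(A/2)^2 + B}$, and $X(t) = e^{Rt}$ then solves the differential equation. A numerical sketch under that assumption (the paper's fundamental-system machinery handles the non-commuting case):

```python
import numpy as np
from scipy.linalg import sqrtm, expm

C = np.array([[2.0, 1.0], [0.0, 3.0]])
A = 0.1 * C
B = C @ C + np.eye(2)          # A and B commute: both are polynomials in C

half = A / 2
R = half + sqrtm(half @ half + B).real   # solvent of R^2 = A R + B

# Verify the ODE X'' = A X' + B X at t = 0.7 by central finite differences.
h, t = 1e-5, 0.7
X = lambda s: expm(R * s)
Xdd = (X(t + h) - 2 * X(t) + X(t - h)) / h**2
Xd = (X(t + h) - X(t - h)) / (2 * h)
```

The square root commutes with `half` here because `half @ half + B` is itself a polynomial in `C` with distinct positive eigenvalues; dropping that assumption is exactly where the matrix fundamental system becomes necessary.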
- Research Article
- 10.1016/j.cam.2024.115927
- Apr 18, 2024
- Journal of Computational and Applied Mathematics
- Pablo Soto-Quiros + 1 more
Fast random vector transforms in terms of pseudo-inverse within the Wiener filtering paradigm
- Research Article
- 10.3390/pr12030518
- Mar 4, 2024
- Processes
- Chao Ke + 4 more
The design for the remanufacturing process (DFRP) is a key part of remanufacturing, which directly affects the cost, performance, and carbon emission of used product remanufacturing. However, used parts have various failure forms and defects, which make it hard to rapidly generate the remanufacturing process scheme for simultaneously satisfying remanufacturing requirements regarding cost, performance, and carbon emissions. This causes remanufactured products to lose their energy-saving and emission-reduction benefits. To this end, this paper proposes an integrated design method for the used product remanufacturing process based on the multi-objective optimization model. Firstly, an integrated DFRP framework is constructed, including design information acquisition, the virtual model construction of DFRP solutions, and the multi-objective optimization of the remanufacturing process scheme. Then, the design matrix, sensitivity analysis, and least squares are applied to construct the mapping models between performance, carbon emissions, cost, and remanufacturing process parameters. Meanwhile, a DFRP multi-objective optimization model with performance, carbon emission, and cost as the design objectives is established, and a teaching–learning-based adaptive optimization algorithm is employed to solve the optimization model to acquire a DFRP solution satisfying the target information. Finally, the feasibility of the method is verified by the DFRP of the turbine blade as an example. The results show that the optimized remanufacturing process parameters reduce carbon emissions by 11.7% and remanufacturing cost by USD 0.052 compared with the original process parameters, and also improve the tensile strength of the turbine blades, which also indicates that the DFRP method can effectively achieve energy saving and emission reduction and ensure the performance of the remanufactured products.
This can greatly reduce the carbon emission credits of the large-scale remanufacturing industry and promote the global industry’s sustainable development; meanwhile, this study is useful for remanufacturing companies and provides remanufacturing process design methodology support.
- Research Article
- 10.13001/ela.2024.7905
- Jan 12, 2024
- The Electronic Journal of Linear Algebra
- Bahar Arslan + 2 more
Matrix functions play an increasingly important role in many areas of scientific computing and engineering disciplines. In such real-world applications, algorithms working in floating-point arithmetic are used for computing matrix functions and additionally input data might be unreliable, e.g., due to measurement errors. Therefore, it is crucial to understand the sensitivity of matrix functions to perturbations, which is measured by condition numbers. However, the condition number itself might not be computed exactly as well due to round-off and errors in the input. The sensitivity of the condition number is measured by the so-called level-2 condition number. For the usual (level-1) condition number, it is well known that structured condition numbers (i.e., where only perturbations are taken into account that preserve the structure of the input matrix) might be much smaller than unstructured ones, which, e.g., suggests that structure-preserving algorithms for matrix functions might yield much more accurate results than general-purpose algorithms. In this work, we present a novel upper bound on the structured level-2 condition number, focusing on perturbation matrices within an automorphism group, a Lie or Jordan algebra, or the space of quasi-triangular matrices. In numerical experiments, we then compare the unstructured level-2 condition number with the structured one for some specific matrix functions such as the matrix logarithm, matrix square root, and matrix exponential.
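A finite-difference probe makes the level-1 condition number tangible: for the square root of an SPD matrix, perturbing along the eigenvector of the smallest eigenvalue yields amplification close to 1/(2·sqrt(λ_min)) (a generic sketch of sensitivity only; level-2 and structured condition numbers are the paper's subject):

```python
import numpy as np
from scipy.linalg import sqrtm, qr

rng = np.random.default_rng(4)
Q, _ = qr(rng.standard_normal((3, 3)))       # random orthogonal matrix
lams = np.array([1.0, 1e-2, 1e-6])
A = Q @ np.diag(lams) @ Q.T                  # SPD with lambda_min = 1e-6

q = Q[:, 2]                                  # eigenvector of lambda_min
E = np.outer(q, q)                           # unit-norm worst-case direction
eps = 1e-9
amp = np.linalg.norm(sqrtm(A + eps * E).real - sqrtm(A).real) / eps
print(f"amplification {amp:.1f} vs 1/(2*sqrt(lam_min)) = "
      f"{1 / (2 * np.sqrt(lams[-1])):.1f}")
```

Restricting the perturbation E to a structured set (e.g. an automorphism group or Jordan algebra) can only shrink this worst-case amplification, which is why structured condition numbers can be much smaller than unstructured ones.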
- Research Article
- 10.1109/access.2024.3437783
- Jan 1, 2024
- IEEE Access
- Feixiang Yang + 1 more
The dynamic matrix square root (DMSR) problem is a recurring nonlinear challenge in many engineering disciplines. Although the original zeroing neural network (OZNN) shows potential for handling such problems, it faces challenges in robustness and convergence. To address these limitations, we developed a simplified activation function, the simplified signal power activation function (SSPAF), which accelerates convergence. Building on this, we introduced a dual integral enhancement term, leading to the design of the rapid dual integral neural dynamic (R-DIND) model to further enhance convergence accuracy, speed, and robustness. Mathematically, the R-DIND model aligns more closely with calculus principles, showcasing unique advantages over existing ZNN models. Theoretical analysis and extensive experiments demonstrated that the R-DIND model has inherent structural advantages over the RNDAC model. Results showed that the R-DIND model improves residual precision from 10⁻² to 10⁻⁶ under the same noise conditions. Furthermore, its robustness and applicability were confirmed when applied to higher-order and more complex matrix problems with harmonic noise.
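The baseline such models improve on can be sketched directly: a ZNN-style integration for X(t)² = A(t) with a linear activation, solving one Sylvester equation per step (our toy discretization; the gain, step size, and test matrix are arbitrary, and none of the R-DIND enhancements are included):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Drive the error E = X^2 - A(t) to zero via the ZNN design equation
# d/dt(X^2 - A) = -gamma * E, i.e.  X Xdot + Xdot X = Adot - gamma * E.
gamma, dt = 10.0, 1e-3
A = lambda t: np.array([[5 + np.sin(t), 1.0], [1.0, 4 + np.cos(t)]])
Adot = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

t = 0.0
X = 2.0 * np.eye(2)                      # rough initial guess
for _ in range(5000):
    E = X @ X - A(t)
    rhs = Adot(t) - gamma * E
    Xdot = solve_sylvester(X, X, rhs)    # solves X Xdot + Xdot X = rhs
    X += dt * Xdot                       # explicit Euler step
    t += dt
```

With a linear activation the residual decays like e^(-γt); the activation-function and integral-enhancement designs in the paper are aimed at beating that rate and surviving additive noise.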
- Research Article
- 10.1553/etna_vol60s381
- Jan 1, 2024
- ETNA - Electronic Transactions on Numerical Analysis
- Andreas Frommer + 3 more
While preconditioning is a long-standing concept to accelerate iterative methods for linear systems, generalizations to matrix functions are still in their infancy. We go a further step in this direction, introducing polynomial preconditioning for Krylov subspace methods which approximate the action of the matrix square root and inverse square root on a vector. Preconditioning reduces the subspace size and therefore avoids the storage problem together with, for non-Hermitian matrices, the increased computational cost per iteration that arises in the unpreconditioned case. Polynomial preconditioning is an attractive alternative to current restarting or sketching approaches since it is simpler and computationally more efficient. We demonstrate this for several numerical examples.
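For context, the unpreconditioned baseline is the standard Lanczos approximation f(A)b ≈ ||b|| V_m f(T_m) e_1. The sketch below (with full reorthogonalization, on a well-conditioned SPD matrix) shows the subspace whose size the paper's polynomial preconditioning aims to shrink:

```python
import numpy as np
from scipy.linalg import sqrtm

def lanczos_sqrt_apply(A, b, m=30):
    """Approximate sqrt(A) @ b from the m-dimensional Krylov space of (A, b)."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(b) * (V @ (sqrtm(T).real @ e1))

rng = np.random.default_rng(5)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)              # SPD, spectrum roughly in [1, 5]
b = rng.standard_normal(n)

approx = lanczos_sqrt_apply(A, b, m=30)
exact = sqrtm(A).real @ b
```

The storage of all m basis vectors is the bottleneck the abstract refers to: a polynomial preconditioner that clusters the spectrum lets a much smaller m reach the same accuracy.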
- Research Article
- 10.3934/era.2024245
- Jan 1, 2024
- Electronic Research Archive
- Zehua Wang + 2 more
The matrix square root is widely encountered in many fields of mathematics. In this paper, based on the properties of M-matrices and quadratic matrix equations, we study the square root of an M-matrix and prove that a regular M-matrix always has a regular M-matrix as a square root. In addition, a structure-preserving doubling algorithm is proposed to compute the square root. Theoretical analysis and numerical experiments show that our method is feasible and effective under certain conditions.
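The existence result is easy to probe numerically with scipy's generic `sqrtm` (Schur-based, not the paper's structure-preserving doubling algorithm): the principal square root of a diagonally dominant M-matrix again has nonpositive off-diagonal entries and eigenvalues in the right half-plane.

```python
import numpy as np
from scipy.linalg import sqrtm

# A symmetric, strictly diagonally dominant M-matrix:
# positive diagonal, nonpositive off-diagonal entries.
A = np.array([[ 4.0, -1.0, -0.5],
              [-1.0,  4.0, -1.0],
              [-0.5, -1.0,  4.0]])

S = sqrtm(A).real
off = S - np.diag(np.diag(S))    # off-diagonal part, expected nonpositive
```

Writing A = 4I - N with N ≥ 0 and expanding 2·sqrt(I - N/4) as a power series shows why: every term beyond the identity carries a negative coefficient times a nonnegative matrix, so the M-matrix sign pattern survives.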