Dimension reduction for optimal design problems with Kronecker product structure

Abstract

This paper is motivated by the problem of optimal allocation of trials in multi‐environment crop variety testing with a large number of varieties. Optimizing the allocation of trials results in the minimization of a design criterion with a Kronecker product structure in the information matrix. We consider the Kronecker–Bayesian linear criterion, which generalizes this design problem and has the form of the trace of the inverse of a sum of two Kronecker products. We derive a new general formula for the inverse of the sum of two Kronecker products, and we use this result to rewrite the Kronecker–Bayesian criterion in the form of the compound Bayes risk criterion, which can be recognized as a sum of Bayesian linear criteria with the same moment matrix. Based on the convexity and differentiability of the Kronecker–Bayesian linear criterion, we establish optimality conditions for approximate designs. We also propose a dimension reduction approach that provides highly efficient approximations for optimal designs. The proposed method allows for the preselection of an upper bound on the efficiency loss, which is independent of the true optimal design. Optimal or highly efficient designs can be computed under any kind of additional linear constraints, such as cost constraints. We apply our results to the problem of optimizing the allocation of trials in multi‐environment crop variety testing, and we illustrate the behavior of the optimal designs with real data examples. Finally, we consider further applications of the general formula for computing the inverse of the sum of two Kronecker products in control theory and multivariate time series analysis.

Similar Papers
  • Research Article
  • Cited by 1
  • 10.1093/biomet/asaf072
On testing Kronecker product structure in tensor factor models
  • Oct 16, 2025
  • Biometrika
  • Z Cen + 1 more

We propose a test for the Kronecker product structure of a factor loading matrix implied by a tensor factor model with Tucker decomposition in the common component. By defining a Kronecker product structure set, we characterize when a tensor time series has a Kronecker product structure, which is equivalent to the series being decomposable according to a tensor factor model. Our test is built on analysing and comparing the residuals from fitting a full tensor factor model with the residuals from fitting a factor model on a reshaped version of the data. In the most extreme case, the reshaping is the vectorization of the tensor data, and the factor loading matrix in such a case can be general if no Kronecker product structure is present. Our test also generalizes to testing the Khatri–Rao product structure in a tensor factor model with the CP (canonical polyadic) decomposition. Theoretical results are developed through asymptotic normality results for the estimated residuals. Numerical experiments suggest that the size of the test approaches the preset nominal value as the sample size or the order of the tensor grows, while the power increases with the mode dimensions and the number of combined modes. We demonstrate our tests through extensive real data examples.

  • Research Article
  • Cited by 11
  • 10.1002/nla.666
Kronecker product approximation preconditioners for convection–diffusion model problems
  • Jul 18, 2010
  • Numerical Linear Algebra with Applications
  • Hua Xiang + 1 more

We consider the iterative solution of linear systems arising from four convection–diffusion model problems: the scalar convection–diffusion problem, the Stokes problem, the Oseen problem and the Navier–Stokes problem. We design preconditioners for these model problems that are based on Kronecker product approximations (KPAs). For this, we first identify the explicit Kronecker product structure of the coefficient matrices, in particular for the convection term. For the latter three model cases, the coefficient matrices have a 2 × 2 block structure, where each block is a Kronecker product or a sum of several Kronecker products. We then use this structure to design a block diagonal preconditioner, a block triangular preconditioner and a constraint preconditioner. Numerical experiments show the efficiency of the three KPA preconditioners, in particular of the constraint preconditioner, which usually outperforms the other two. This can be explained by the relationship between these three preconditioners: the constraint preconditioner can be regarded as a modification of the block triangular preconditioner, which in turn is a modification of the block diagonal preconditioner based on the cell Reynolds number.
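The explicit Kronecker structure mentioned for the diffusion part appears already in the classic five-point Laplacian on a uniform grid, which is a Kronecker sum of 1-D difference matrices. A minimal sketch (our own illustration, not the paper's preconditioners):

```python
import numpy as np

n = 5
# 1-D second-difference matrix with Dirichlet boundaries
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
I = np.eye(n)

# 2-D five-point Laplacian on an n x n grid as a Kronecker sum:
# kron(I, T) acts along one grid direction, kron(T, I) along the other.
L2 = np.kron(I, T) + np.kron(T, I)
```

Preconditioners of KPA type exploit exactly this kind of factored representation instead of treating the assembled matrix as unstructured.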

  • Research Article
  • Cited by 1
  • 10.1016/j.camwa.2023.04.043
A Kronecker product linear-cost solver for the high-order generalized-α method for multi-dimensional hyperbolic systems
  • Jul 1, 2023
  • Computers & Mathematics with Applications
  • V.M Calo + 3 more


  • Conference Article
  • Cited by 2
  • 10.1109/bigcomp.2016.7425800
Robust and scalable matrix completion
  • Jan 1, 2016
  • Yangyang Kang

In the era of big data, the matrix completion (MC) problem has become increasingly popular in machine learning and data mining. Many algorithms, such as singular value thresholding, soft-impute and fixed point continuation, have been proposed for solving this problem. Typically, these existing algorithms require implementing a singular value decomposition of a data matrix at each iteration. Thus, these algorithms are not scalable when the size of the matrix is very large. Motivated by the principle of robust principal component analysis, in this paper we propose a novel MC algorithm, called robust and scalable MC with Kronecker product (RSKP), which models the original data matrix as a low-rank matrix plus a sparse matrix. Furthermore, we represent the low-rank matrix as the Kronecker product of two small-size matrices. Using the Kronecker product makes the model scalable, and introducing the sparse matrix makes the model more robust. We apply our RSKP algorithm to image recovery problems which can be naturally represented by a data matrix with the Kronecker product structure. Experimental results show that our RSKP is efficient and effective in real applications.
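The scalability of the Kronecker factorization rests on the classic vec identity (U ⊗ V) vec(X) = vec(V X Uᵀ), which lets one multiply by the large product without ever materializing it. A minimal sketch with made-up sizes (our own illustration, not the RSKP code):

```python
import numpy as np

rng = np.random.default_rng(1)
m1, n1, m2, n2 = 4, 5, 3, 6
U = rng.standard_normal((m1, n1))
V = rng.standard_normal((m2, n2))
X = rng.standard_normal((n2, n1))  # vec(X) has length n1 * n2

# Naive: materialize the (m1*m2) x (n1*n2) Kronecker product
y_naive = np.kron(U, V) @ X.flatten(order="F")

# Structured: two small multiplications, no large matrix is ever formed
y_fast = (V @ X @ U.T).flatten(order="F")
```

The column-major flatten (`order="F"`) matches the vec convention under which the identity holds.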

  • Research Article
  • Cited by 1
  • 10.1016/j.jmva.2021.104761
On the asymptotic normality and efficiency of Kronecker envelope principal component analysis
  • Apr 18, 2021
  • Journal of Multivariate Analysis
  • Shih-Hao Huang + 1 more


  • Conference Article
  • 10.23919/eusipco47968.2020.9287415
Efficient Estimation of Kronecker Product of Linear Structured Scatter Matrices under t-distribution
  • Jun 18, 2020
  • Bruno Meriaux + 4 more

This paper addresses structured scatter matrix estimation within the nonconvex set of Kronecker product structures. The latter model usually involves two matrices, which can themselves be linearly constrained, and arises in many applications, such as MIMO communication and MEG/EEG data analysis. Taking this prior knowledge into account generally improves estimation accuracy. In the framework of robust estimation, the t-distribution is particularly suited to modeling heavy-tailed data. In this context, we introduce an estimator of the scatter matrix having a Kronecker product structure and potentially linearly structured factors. In addition, we show that the proposed method yields a consistent and efficient estimate.

  • Research Article
  • Cited by 30
  • 10.1093/imaiai/iaaa028
Faster Johnson–Lindenstrauss transforms via Kronecker products
  • Oct 23, 2020
  • Information and Inference: A Journal of the IMA
  • Ruhui Jin + 2 more

The Kronecker product is an important matrix operation with a wide range of applications in signal processing, graph theory, quantum computing and deep learning. In this work, we introduce a generalization of the fast Johnson–Lindenstrauss projection for embedding vectors with Kronecker product structure, the Kronecker fast Johnson–Lindenstrauss transform (KFJLT). The KFJLT reduces the embedding cost by an exponential factor of the standard fast Johnson–Lindenstrauss transform’s cost when applied to vectors with Kronecker structure, by avoiding explicitly forming the full Kronecker products. We prove that this computational gain comes with only a small price in embedding power: consider a finite set of $p$ points in a tensor product of $d$ constituent Euclidean spaces $\bigotimes _{k=d}^{1}{\mathbb{R}}^{n_k}$, and let $N = \prod _{k=1}^{d}n_k$. With high probability, a random KFJLT matrix of dimension $m \times N$ embeds the set of points up to multiplicative distortion $(1\pm \varepsilon )$ provided $m \gtrsim \varepsilon ^{-2} \, \log ^{2d - 1} (p) \, \log N$. We conclude by describing a direct application of the KFJLT to the efficient solution of large-scale Kronecker-structured least squares problems for fitting the CP tensor decomposition.
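The computational gain for Kronecker-structured vectors rests on the mixing identity (S₁ ⊗ S₂)(x ⊗ y) = (S₁x) ⊗ (S₂y). The KFJLT itself uses structured FFT-based factors, but the identity can be checked with plain Gaussian maps (our illustration, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, m1, m2 = 8, 9, 3, 4
x, y = rng.standard_normal(n1), rng.standard_normal(n2)
S1 = rng.standard_normal((m1, n1))
S2 = rng.standard_normal((m2, n2))

# Naive: form the length n1*n2 vector and one big (m1*m2 x n1*n2) map
z_naive = np.kron(S1, S2) @ np.kron(x, y)

# Structured: project each factor separately, then a small Kronecker product
z_fast = np.kron(S1 @ x, S2 @ y)
```

Because each factor is embedded separately, the cost scales with the constituent dimensions rather than with their product.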

  • Research Article
  • Cited by 11
  • 10.1016/j.laa.2020.10.013
Approximation with a Kronecker product structure with one component as compound symmetry or autoregression via entropy loss function
  • Oct 16, 2020
  • Linear Algebra and its Applications
  • Katarzyna Filipiak + 3 more


  • Research Article
  • Cited by 11
  • 10.1016/j.laa.2018.08.031
Approximation with a Kronecker product structure with one component as compound symmetry or autoregression
  • Aug 30, 2018
  • Linear Algebra and its Applications
  • Katarzyna Filipiak + 1 more


  • Research Article
  • Cited by 86
  • 10.1109/tsp.2013.2240157
On Convergence of Kronecker Graphical Lasso Algorithms
  • Apr 1, 2013
  • IEEE Transactions on Signal Processing
  • Theodoros Tsiligkaridis + 2 more

This paper studies the convergence of Kronecker graphical lasso (KGLasso) algorithm iterations for estimating the covariance of an i.i.d. Gaussian random sample under a sparse Kronecker-product covariance model, together with their MSE convergence rates. The KGLasso model, originally called the transposable regularized covariance model by Allen ["Transposable regularized covariance models with an application to missing data imputation," Ann. Appl. Statist., vol. 4, no. 2, pp. 764-790, 2010], implements a pair of $\ell_1$ penalties on each Kronecker factor to enforce sparsity in the covariance estimator. The KGLasso algorithm generalizes Glasso, introduced by Yuan and Lin ["Model selection and estimation in the Gaussian graphical model," Biometrika, vol. 94, pp. 19-35, 2007] and Banerjee ["Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data," J. Mach. Learn. Res., vol. 9, pp. 485-516, Mar. 2008], to estimate covariances having Kronecker product form. It also generalizes the unpenalized ML flip-flop (FF) algorithm of Dutilleul ["The MLE algorithm for the matrix normal distribution," J. Statist. Comput. Simul., vol. 64, pp. 105-123, 1999] and Werner ["On estimation of covariance matrices with Kronecker product structure," IEEE Trans. Signal Process., vol. 56, no. 2, pp. 478-491, Feb. 2008] to the estimation of sparse Kronecker factors. We establish that the KGLasso iterates converge pointwise to a local maximum of the penalized likelihood function. We derive high-dimensional rates of convergence to the true covariance as both the number of samples and the number of variables go to infinity. Our results establish that KGLasso has significantly faster asymptotic convergence than Glasso and FF. Simulations are presented that validate the results of our analysis.
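The unpenalized flip-flop iteration that KGLasso generalizes alternates closed-form updates of the two Kronecker factors. A sketch under a simulated matrix-normal model (our own illustration of the standard updates, not the KGLasso code; the usual scale ambiguity between the two factors is left unresolved, but their Kronecker product is identifiable):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, N = 4, 3, 200

def spd(n):
    """Random symmetric positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A_true, B_true = spd(p), spd(q)
La, Lb = np.linalg.cholesky(A_true), np.linalg.cholesky(B_true)
# Matrix-normal samples X = La Z Lb^T, so cov(vec(X)) = B_true kron A_true
X = [La @ rng.standard_normal((p, q)) @ Lb.T for _ in range(N)]

# Flip-flop: alternate closed-form updates of the row and column factors
A, B = np.eye(p), np.eye(q)
for _ in range(25):
    A = sum(Xi @ np.linalg.inv(B) @ Xi.T for Xi in X) / (N * q)
    B = sum(Xi.T @ np.linalg.inv(A) @ Xi for Xi in X) / (N * p)
```

Each update is the exact maximizer of the likelihood in one factor given the other, which is why the iteration converges quickly in practice.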

  • Research Article
  • 10.1088/1361-6420/adec12
Efficient decomposition-based algorithms for ℓ1-regularized inverse problems with column-orthogonal and Kronecker product matrices
  • Jul 17, 2025
  • Inverse Problems
  • Brian F Sweeney + 2 more

We consider an ℓ1-regularized inverse problem where both the forward and regularization operators have a Kronecker product structure. By leveraging this structure, a joint decomposition can be obtained using generalized singular value decompositions. We show how this joint decomposition can be effectively integrated into the Split Bregman and Majorization–Minimization methods to solve the ℓ1-regularized inverse problem. Furthermore, for cases involving column-orthogonal regularization matrices, we prove that the joint decomposition can be derived directly from the singular value decomposition of the system matrix. As a result, we show that framelet and wavelet operators are efficient for these decomposition-based algorithms in the context of ℓ1-regularized image deblurring problems.

  • Research Article
  • Cited by 10
  • 10.1109/tsp.2020.3042946
Matched and Mismatched Estimation of Kronecker Product of Linearly Structured Scatter Matrices Under Elliptical Distributions
  • Dec 15, 2020
  • IEEE Transactions on Signal Processing
  • Bruno Meriaux + 4 more


  • Research Article
  • Cited by 20
  • 10.1137/s0895479895295027
Efficient Solution of Constrained Least Squares Problems with Kronecker Product Structure
  • Jan 1, 1998
  • SIAM Journal on Matrix Analysis and Applications
  • Anders Barrlund

A computational method for efficient solution of linear constrained least squares problems with Kronecker product structure is presented. The equality constraints are assumed to be linearly independent. The computational efficiency of the method is analyzed. Conditions for uniqueness of solutions are given.
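The unconstrained core of such problems exploits (A ⊗ B)⁺ = A⁺ ⊗ B⁺ together with the vec identity, so the large least squares problem splits into two small factor solves. A sketch of that unconstrained case only (our illustration; the paper additionally handles linear equality constraints):

```python
import numpy as np

rng = np.random.default_rng(4)
m1, n1, m2, n2 = 6, 3, 5, 2
A = rng.standard_normal((m1, n1))   # tall factors: full column rank a.s.
B = rng.standard_normal((m2, n2))
b = rng.standard_normal(m1 * m2)

# Naive: least squares with the explicit (m1*m2) x (n1*n2) Kronecker product
x_naive, *_ = np.linalg.lstsq(np.kron(A, B), b, rcond=None)

# Structured: x = (A^+ kron B^+) b = vec(B^+ Mat(b) (A^+)^T),
# where Mat(b) reshapes b column-major into an m2 x m1 matrix.
Mb = b.reshape(m2, m1, order="F")
x_fast = (np.linalg.pinv(B) @ Mb @ np.linalg.pinv(A).T).flatten(order="F")
```

With full-column-rank factors the least squares solution is unique, so the two computations must agree.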

  • Research Article
  • Cited by 6
  • 10.1080/03610928608829183
On generalizations of the classical method of confounding to asymmetric factorial experiments
  • Jan 1, 1986
  • Communications in Statistics - Theory and Methods
  • D.T Voss

A Kronecker product structure is identified for a particular class of asymmetric factorial designs in blocks, including the classes of designs generated by several of the generalizations of the classical method in the literature. The Kronecker product structure is utilized to establish orthogonal factorial structure for the class of designs and to identify a Principle of Generalized Interaction.

  • Research Article
  • Cited by 8
  • 10.1016/j.jeconom.2022.01.005
A test for Kronecker Product Structure covariance matrix
  • Mar 4, 2022
  • Journal of Econometrics
  • Patrik Guggenberger + 2 more

We propose a test of whether a covariance matrix has Kronecker Product Structure (KPS). KPS implies a reduced rank restriction on a certain transformation of the covariance matrix, and the new procedure is an adaptation of the Kleibergen and Paap (2006) reduced rank test. Deriving the limiting distribution of the Wald-type test statistic proves challenging, partly because of the singularity of the covariance matrix estimator that appears in the weighting matrix. We show that the test statistic has a χ2 limiting null distribution with degrees of freedom equal to the number of restrictions tested. Local asymptotic power results are derived. Monte Carlo simulations reveal good size and power properties of the test. Re-examining fifteen highly cited papers conducting instrumental variable regressions, we find that KPS is not rejected in 56 out of 118 specifications at the 5% nominal size.
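The reduced rank restriction can be made concrete via the Van Loan–Pitsianis rearrangement: reordering the entries of Σ = A ⊗ B yields a rank-one matrix (equal to vec(A) vec(B)ᵀ under a suitable ordering). A numerical sketch of that transformation (our illustration, not the paper's test statistic):

```python
import numpy as np

rng = np.random.default_rng(5)

def spd(n):
    """Random symmetric positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

p, q = 3, 4
A, B = spd(p), spd(q)
Sigma = np.kron(A, B)  # (p*q) x (p*q) covariance with exact KPS

# Rearrangement: each q x q block of Sigma equals A[i, j] * B, so
# stacking the vectorized blocks as rows gives a rank-one matrix.
R = np.array([
    Sigma[i * q:(i + 1) * q, j * q:(j + 1) * q].flatten(order="F")
    for j in range(p) for i in range(p)
])
rank_R = np.linalg.matrix_rank(R)
```

A test of KPS can therefore be framed as a test that this rearranged matrix has reduced (rank-one) structure.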
