Using easy coefficients conjecture for rotation symmetric Boolean functions


Similar Papers
  • Research Article
  • Cited by 6
  • 10.1016/j.laa.2018.10.010
Block minimal bases ℓ-ifications of matrix polynomials
  • Oct 12, 2018
  • Linear Algebra and its Applications
  • Froilán M Dopico + 2 more


  • Research Article
  • Cited by 19
  • 10.1016/j.laa.2019.09.006
Root polynomials and their role in the theory of matrix polynomials
  • Sep 12, 2019
  • Linear Algebra and its Applications
  • Froilán M Dopico + 1 more


  • Research Article
  • 10.1080/09720529.2008.10698177
Minimal polynomial of Cayley graph adjacency matrix for Boolean functions
  • Apr 1, 2008
  • Journal of Discrete Mathematical Sciences and Cryptography
  • Michel Mitton

The subject of this paper is the algebraic study of the adjacency matrix of the Cayley graph of a Boolean function. From the characteristic polynomial of this adjacency matrix we deduce its minimal polynomial.
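The construction the abstract studies can be sketched computationally. The helper names, the AND example, and the greedy minimal-polynomial search below are illustrative choices, not taken from the paper: the Cayley graph of a Boolean function f on n variables has one vertex per input and adjacency matrix A[u, v] = f(u XOR v), and the minimal polynomial can be extracted from the factored characteristic polynomial.

```python
import sympy as sp

def cayley_adjacency(f, n):
    """Adjacency matrix of the Cayley graph of f : {0,1}^n -> {0,1};
    vertices are the 2^n inputs (encoded as integers), A[u, v] = f(u XOR v)."""
    size = 2 ** n
    return sp.Matrix(size, size, lambda u, v: f(u ^ v))

def minimal_poly(A, x):
    """Minimal polynomial of A, found by lowering exponents in the factored
    characteristic polynomial for as long as A still annihilates the result."""
    _, factors = sp.factor_list(A.charpoly(x).as_expr())
    exps = [e for _, e in factors]

    def annihilates(ds):
        q = sp.expand(sp.prod(p ** d for (p, _), d in zip(factors, ds)))
        M = sp.zeros(A.rows)
        for c in sp.Poly(q, x).all_coeffs():   # Horner evaluation at A
            M = M * A + c * sp.eye(A.rows)
        return M == sp.zeros(A.rows)

    for i in range(len(exps)):
        while exps[i] > 1:
            trial = exps[:]
            trial[i] -= 1
            if not annihilates(trial):
                break
            exps[i] -= 1
    return sp.expand(sp.prod(p ** d for (p, _), d in zip(factors, exps)))

# Example: f = AND on two variables (input 3 is the all-ones vector).
f_and = lambda u: 1 if u == 3 else 0
A = cayley_adjacency(f_and, 2)
x = sp.symbols('x')
print(minimal_poly(A, x))   # -> x**2 - 1
```

Here A is a permutation matrix with A² = I, so the minimal polynomial x² − 1 properly divides the characteristic polynomial (x − 1)²(x + 1)².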

  • Book Chapter
  • Cited by 7
  • 10.1007/978-3-0348-5672-0_8
A Hermite Theorem for Matrix Polynomials
  • Jan 1, 1991
  • Harry Dym

An analogue of the Hermite theorem for the number of zeros in a half plane for a scalar polynomial is obtained for a class of m × m matrix polynomials by (finite dimensional) reproducing kernel Krein space methods. The paper, which is largely expository, is partially modelled on an earlier paper with N.J. Young which developed similar analogues of the Schur-Cohn theorem for matrix polynomials. More complete results in a somewhat different formulation have been obtained by Lerer and Tismenetsky by other methods. New proofs of some recent results on the distribution of the roots of certain matrix polynomials which are associated with invertible Hermitian block Hankel and block Toeplitz matrices are presented as an application of the main theorem.

  • Research Article
  • Cited by 8
  • 10.1016/0095-8956(80)90086-6
The group and the minimal polynomial of a graph
  • Dec 1, 1980
  • Journal of Combinatorial Theory, Series B
  • Giovanni Criscuolo + 3 more


  • Book Chapter
  • 10.1007/978-0-8176-4529-8_5
Theory of a Single Linear Transformation
  • Jan 1, 2006
  • Anthony W Knapp

The goal of this chapter is to find finitely many canonical representatives of each similarity class of square matrices with entries in a field and correspondingly of each isomorphism class of linear maps from a finite-dimensional vector space to itself. Section 1 frames the problem in more detail. Section 2 develops the theory of determinants over a commutative ring with identity in order to be able to work easily with characteristic polynomials det(λI − A). The discussion is built around the principle of "permanence of identities," which allows for passage from certain identities with integer coefficients to identities with coefficients in the ring in question. Section 3 introduces the minimal polynomial of a square matrix or linear map. The Cayley–Hamilton Theorem establishes that such a matrix satisfies its characteristic equation, and it follows that the minimal polynomial divides the characteristic polynomial. It is proved that a matrix is similar to a diagonal matrix if and only if its minimal polynomial is the product of distinct factors of degree 1. In combination with the fact that two diagonal matrices are similar if and only if their diagonal entries are permutations of one another, this result solves the canonical-form problem for matrices whose minimal polynomial is the product of distinct factors of degree 1. Section 4 introduces general projection operators from a vector space to itself and relates them to vector-space direct-sum decompositions with finitely many summands. The summands of a direct-sum decomposition are invariant under a linear map if and only if the linear map commutes with each of the projections associated to the direct-sum decomposition. Section 5 concerns the Primary Decomposition Theorem, whose subject is the operation of a linear map L: V → V with V finite-dimensional. The statement is that if L has minimal polynomial \( P_1 (\lambda )^{l_1 } \cdots P_k (\lambda )^{l_k } \) with the P_j(λ) distinct monic prime, then V has a unique direct-sum decomposition in which the respective summands are the kernels of the linear maps \( P_j (L)^{l_j } \), and moreover the minimal polynomial of the restriction of L to the jth summand is \( P_j (\lambda )^{l_j } \). Sections 6–7 concern Jordan canonical form. For the case that the prime factors of the minimal polynomial of a square matrix all have degree 1, the main theorem gives a canonical form under similarity, saying that a given matrix is similar to one in "Jordan form" and that the Jordan form is completely determined up to permutation of the constituent blocks. The theorem applies to all square matrices if the field is algebraically closed, as is the case for C. The theorem is stated and proved in Section 6, and Section 7 shows how to make computations in two different ways.
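Two of the facts summarized above are easy to verify computationally. This sketch uses sympy with an illustrative 2×2 matrix (the example is mine, not the book's): the Cayley–Hamilton check and the squarefree-minimal-polynomial test for diagonalizability.

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[5, 4], [1, 2]])   # illustrative matrix; eigenvalues 1 and 6

# Cayley-Hamilton: A satisfies its own characteristic equation x**2 - 7x + 6.
M = sp.zeros(2)
for c in A.charpoly(x).all_coeffs():   # Horner evaluation of the char poly at A
    M = M * A + c * sp.eye(2)
assert M == sp.zeros(2)

# Minimal polynomial a product of distinct degree-1 factors <=> diagonalizable:
# here the minimal polynomial (x - 1)(x - 6) is squarefree, while a Jordan
# block has minimal polynomial (x - 1)**2 and fails the test.
assert A.is_diagonalizable()
assert not sp.Matrix([[1, 1], [0, 1]]).is_diagonalizable()
```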

  • Book Chapter
  • 10.3792/euclid/9781429799980-5
Chapter V. Theory of a Single Linear Transformation
  • Jan 1, 2016
  • Anthony W Knapp

The goal of this chapter is to find finitely many canonical representatives of each similarity class of square matrices with entries in a field and correspondingly of each isomorphism class of linear maps from a finite-dimensional vector space to itself. Section 1 frames the problem in more detail. Section 2 develops the theory of determinants over a commutative ring with identity in order to be able to work easily with characteristic polynomials $\det(X I-A)$. The discussion is built around the principle of “permanence of identities,” which allows for passage from certain identities with integer coefficients to identities with coefficients in the ring in question. Section 3 introduces the minimal polynomial of a square matrix or linear map. The Cayley–Hamilton Theorem establishes that such a matrix satisfies its characteristic equation, and it follows that the minimal polynomial divides the characteristic polynomial. It is proved that a matrix is similar to a diagonal matrix if and only if its minimal polynomial is the product of distinct factors of degree 1. In combination with the fact that two diagonal matrices are similar if and only if their diagonal entries are permutations of one another, this result solves the canonical-form problem for matrices whose minimal polynomial is the product of distinct factors of degree 1. Section 4 introduces general projection operators from a vector space to itself and relates them to vector-space direct-sum decompositions with finitely many summands. The summands of a direct-sum decomposition are invariant under a linear map if and only if the linear map commutes with each of the projections associated to the direct-sum decomposition. Section 5 concerns the Primary Decomposition Theorem, whose subject is the operation of a linear map $L:V\to V$ with $V$ finite-dimensional.
The statement is that if $L$ has minimal polynomial $P_1(X)^{l_1}\cdots P_k(X)^{l_k}$ with the $P_j(X)$ distinct monic prime, then $V$ has a unique direct-sum decomposition in which the respective summands are the kernels of the linear maps $P_j(L)^{l_j}$, and moreover the minimal polynomial of the restriction of $L$ to the $j^\mathrm{th}$ summand is $P_j(X)^{l_j}$. Sections 6–7 concern Jordan canonical form. For the case that the prime factors of the minimal polynomial of a square matrix all have degree 1, the main theorem gives a canonical form under similarity, saying that a given matrix is similar to one in “Jordan form” and that the Jordan form is completely determined up to permutation of the constituent blocks. The theorem applies to all square matrices if the field is algebraically closed, as is the case for $\mathbb C$. The theorem is stated and proved in Section 6, and Section 7 shows how to make computations in two different ways.

  • Research Article
  • Cited by 24
  • 10.1016/j.laa.2010.08.035
Hermitian matrix polynomials with real eigenvalues of definite type. Part I: Classification
  • Sep 29, 2010
  • Linear Algebra and Its Applications
  • Maha Al-Ammari + 1 more


  • Research Article
  • Cited by 13
  • 10.1016/j.laa.2016.04.005
Bounds for eigenvalues of matrix polynomials with applications to scalar polynomials
  • Apr 12, 2016
  • Linear Algebra and Its Applications
  • A Melman


  • Research Article
  • Cited by 1
  • 10.5860/choice.41-0983
Basic matrix algebra with algorithms and applications
  • Oct 1, 2003
  • Choice Reviews Online
  • Robert A Liebler

SYSTEMS OF LINEAR EQUATIONS AND THEIR SOLUTION: Recognizing Linear Systems and Solutions; Matrices, Equivalence and Row Operations; Echelon Forms and Gaussian Elimination; Free Variables and General Solutions; The Vector Form of the General Solution; Geometric Vectors and Linear Functions; Polynomial Interpolation.
MATRIX NUMBER SYSTEMS: Complex Numbers; Matrix Multiplication; Auxiliary Matrices and Matrix Inverses; Symmetric Projectors, Resolving Vectors; Least Squares Approximation; Changing Plane Coordinates; The Fast Fourier Transform and the Euclidean Algorithm.
DIAGONALIZABLE MATRICES: Eigenvectors and Eigenvalues; The Minimal Polynomial Algorithm; Linear Recurrence Relations; Properties of the Minimal Polynomial; The Sequence {A^k}; Discrete Dynamical Systems; Matrix Compression with Components.
DETERMINANTS: Area and Composition of Linear Functions; Computing Determinants; Fundamental Properties of Determinants; Further Applications.
Appendix: The Abstract Setting; Selected Practice Problem Answers; Index.

  • Research Article
  • Cited by 4
  • 10.1080/03081087.2012.670235
Minimal polynomial systems for parametric matrices
  • Apr 2, 2012
  • Linear and Multilinear Algebra
  • Amir Hashemi + 2 more

In this article, we study the minimal polynomials of parametric matrices. Using the concept of (comprehensive) Gröbner systems for parametric ideals, we introduce the notion of a minimal polynomial system for a parametric matrix, i.e. we decompose the space of parameters into a finite set of cells and for each cell we give the corresponding minimal polynomial of the matrix. We also present an algorithm for computing a minimal polynomial system for a given parametric matrix.
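A toy case (not the paper's Gröbner-system algorithm) shows why the parameter space must be decomposed into cells: the minimal polynomial of diag(a, 1) is (x − a)(x − 1) on the generic cell a ≠ 1 but drops to x − 1 on the special cell a = 1.

```python
import sympy as sp

a, x = sp.symbols('a x')
A = sp.diag(a, 1)

# Generic cell a != 1 (sampled at a = 2): eigenvalues are distinct, so the
# minimal polynomial equals the characteristic polynomial (x - a)(x - 1).
A2 = A.subs(a, 2)
assert (A2 - 2 * sp.eye(2)) * (A2 - sp.eye(2)) == sp.zeros(2)
assert (A2 - sp.eye(2)) != sp.zeros(2)       # no degree-1 candidate annihilates

# Special cell a = 1: A is the identity and x - 1 already annihilates it.
A1 = A.subs(a, 1)
assert A1 - sp.eye(2) == sp.zeros(2)
```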

  • Research Article
  • Cited by 22
  • 10.1137/19m1255847
Block Krylov Subspace Methods for Functions of Matrices II: Modified Block FOM
  • Jan 1, 2020
  • SIAM Journal on Matrix Analysis and Applications
  • Andreas Frommer + 2 more

We analyze an expansion of the generalized block Krylov subspace framework of [Electron. Trans. Numer. Anal., 47 (2017), pp. 100--126]. This expansion allows the use of low-rank modifications of the matrix projected onto the block Krylov subspace and contains, as special cases, the block GMRES method and the new block Radau--Arnoldi method. Within this general setting, we present results that extend the interpolation property from the nonblock case to a matrix polynomial interpolation property for the block case, and we relate the eigenvalues of the projected matrix to the latent roots of these matrix polynomials. Some error bounds for these modified block FOM methods for solving linear systems are presented. We then show how cospatial residuals can be preserved in the case of families of shifted linear block systems. This result is used to derive computationally practical restarted algorithms for block Krylov approximations that compute the action of a matrix function on a set of several vectors simultaneously. We prove some error bounds and present numerical results showing that two modifications of FOM, the block harmonic and the block Radau--Arnoldi methods for matrix functions, can significantly improve the convergence behavior.
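The block Krylov projection underlying these methods can be sketched as a block Arnoldi process. The minimal numpy version below (names and structure are my own; it is plain block FOM-style projection, not the paper's modified variants) builds an orthonormal basis of the block Krylov space and the projected block upper Hessenberg matrix.

```python
import numpy as np

def block_arnoldi(A, B, m):
    """m steps of block Arnoldi: returns V with orthonormal columns spanning
    span{B, AB, ..., A^(m-1) B} and the projected matrix H = V.T @ A @ V,
    which is block upper Hessenberg."""
    s = B.shape[1]
    V0, _ = np.linalg.qr(B)
    blocks = [V0]
    H = np.zeros((s * m, s * m))
    for j in range(m - 1):
        W = A @ blocks[j]
        for i in range(j + 1):                 # orthogonalize against earlier blocks
            Hij = blocks[i].T @ W
            H[i*s:(i+1)*s, j*s:(j+1)*s] = Hij
            W = W - blocks[i] @ Hij
        Q, R = np.linalg.qr(W)                 # new basis block and subdiagonal
        H[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = R
        blocks.append(Q)
    W = A @ blocks[m - 1]                      # fill the last block column of H
    for i in range(m):
        H[i*s:(i+1)*s, (m-1)*s:m*s] = blocks[i].T @ W
    return np.hstack(blocks), H

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 12))
B = rng.standard_normal((12, 2))
V, H = block_arnoldi(A, B, 3)
```

For block size 1 this reduces to the classical Arnoldi process; the eigenvalues of H are the Ritz values that the abstract relates to latent roots of matrix polynomials.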

  • Research Article
  • Cited by 21
  • 10.4064/aa-89-1-53-96
Linear relations between roots of polynomials
  • Jan 1, 1999
  • Acta Arithmetica
  • Kurt Girstmair

Both types are comprised under the name of “linear relations”. One of our objectives consists in convincing the reader that the representation theory of finite groups, applied to the Galois group G = Gal(L/K) of f , is the appropriate framework for questions of this kind. More than 15 years ago we already pointed out this role of representation theory in our paper [11]—it seems, however, that the proper value of this tool has not been recognized by several later researchers (cf. [19], [9], [10], [1], [17]). As an effect, some minor observations of [11] appear as main results in later articles (cf., e.g., [11], Proposition 4, Assertion 3 and [9], Theorem 3). An exception to this tendency is the recent paper [7]. But although it uses representation theory, its viewpoint differs from that of our previous work: The results of [7] are mainly necessary conditions saying that a given relation (such as x1 = x2 + x3) can occur for a certain class of polynomials only. Our paper [11], in contrast, contains a criterion that allows one to decide whether a given relation (1) is possible or not in a specific case (cf. Theorem 1 below). This criterion yields a classification of all possible relations (1) for polynomials f over K = Q of degree n ≤ 15 with G acting primitively on its roots ([11], Theorem 1, and Section 2, ibid.). For example, the relation

  • Research Article
  • Cited by 2
  • 10.1088/1742-6596/1053/1/012032
System of linear fractional differential equations and the Mittag-Leffler functions with matrix variable
  • Jul 1, 2018
  • Journal of Physics: Conference Series
  • Junsheng Duan

The solution of a system of linear fractional differential equations is derived in terms of Mittag-Leffler functions with matrix variable. Three different methods for calculating Mittag-Leffler functions with matrix variable are obtained with the help of the inverse Laplace transform, the Jordan canonical matrix, and the minimal polynomial, respectively. The solution of a system of linear first-order differential equations is obtained as a special case. The results show that Mittag-Leffler functions with matrix variable are powerful tools for solving systems of linear fractional differential equations.
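In the first-order special case the abstract mentions, the Mittag-Leffler function E₁(At) reduces to the matrix exponential exp(At), so x′(t) = Ax(t), x(0) = x₀ is solved by x(t) = exp(At)x₀. A sympy sketch with an illustrative rotation generator (the paper's three general methods are not reproduced here):

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[0, 1], [-1, 0]])   # illustrative: generator of plane rotations
x0 = sp.Matrix([1, 0])

# Matrix.exp computes exp(A t) via the Jordan form of A t; for this A the
# result is the rotation matrix [[cos t, sin t], [-sin t, cos t]].
x = (A * t).exp() * x0             # solution of x'(t) = A x(t), x(0) = x0

# Known closed form of the solution: (cos t, -sin t).
assert sp.simplify(sp.expand_complex(x[0] - sp.cos(t))) == 0
assert sp.simplify(sp.expand_complex(x[1] + sp.sin(t))) == 0
```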

  • Research Article
  • Cited by 22
  • 10.1137/0313032
The Sequential Construction of Minimal Partial Realizations from Finite Input–Output Data
  • May 1, 1975
  • SIAM Journal on Control
  • B M Anderson + 2 more

Any strictly proper transfer function matrix of a continuous or discrete, linear, constant, multivariable system can be written as the product of a numerator polynomial matrix with the inverse of another polynomial matrix, the denominator. Since a realization is easily constructed from the polynomial matrix representation, the minimal partial realization problem is translated to that of extracting a minimal-order partial denominator polynomial matrix from a finite length matrix sequence. It is shown that minimal partial denominator matrices evolve recursively; that is, a minimal partial denominator matrix for any finite length sequence is a combination of the minimal partial denominator matrices of its proper subsequences. A computationally efficient algorithm that sequentially constructs a minimal partial denominator matrix for a given finite length sequence is presented. A theorem by Anderson and Brasch leads to a definition of uniqueness for the resulting denominator matrix based upon its invariant factors. Parameters used during execution of the algorithm are shown to be sufficient for enumerating all invariant factor sets in the equivalence class of minimal partial realizations. The results apply to continuous and discrete linear systems including finite state machines.
