Minimal indices and minimal bases via filtrations
A new way to formulate the notions of minimal basis and minimal indices is developed in this paper, based on the concept of a filtration of a vector space. The goal is to provide useful new tools for working with these important concepts, as well as to gain deeper insight into their fundamental nature. This approach also readily reveals a strong minimality property of minimal indices, from which follows a characterization of the vector polynomial bases in rational vector spaces. The effectiveness of this new formulation is further illustrated by proving several fundamental properties: the invariance of the minimal indices of a matrix polynomial under field extension, the direct sum property of minimal indices, the polynomial linear combination property, and the predictable degree property.
- Research Article
733
- 10.1137/0313029
- May 1, 1975
- SIAM Journal on Control
A minimal basis of a vector space V of n-tuples of rational functions is defined as a polynomial basis such that the sum of the degrees of the basis n-tuples is minimum. Conditions for a matrix G to represent a minimal basis are derived. By imposing additional conditions on G we arrive at a minimal basis for V that is unique. We show how minimal bases can be used to factor a transfer function matrix G in the form $G = ND^{-1}$, where N and D are polynomial matrices that display the controllability indices of G and its controller canonical realization. Transfer function matrices G solving equations of the form $PG = Q$ are also obtained by this method; applications to the problem of finding minimal order inverse systems are given. Previous applications to convolutional coding theory are noted. This range of applications suggests that minimal basis ideas will be useful throughout the theory of multivariable linear systems. A restatement of these ideas in the language of valuation theory is given in an Appendix.
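One of the conditions on a matrix G representing a minimal basis is column-reducedness: the matrix formed from the highest-degree coefficient of each column must have full column rank. The following is a minimal numerical sketch of that check (not code from the paper), assuming a polynomial matrix is supplied column-wise as lists of numpy coefficient vectors; the helper names are illustrative.

```python
import numpy as np

def leading_column_matrix(cols):
    """Highest-column-degree coefficient matrix of a polynomial matrix.

    Each column is a list of coefficient vectors [c0, c1, ..., cd]
    representing c0 + c1*s + ... + cd*s^d; trailing zero coefficients
    are ignored when finding the true column degree."""
    lead = []
    for coeffs in cols:
        d = max(i for i, c in enumerate(coeffs) if np.any(c != 0))
        lead.append(coeffs[d])
    return np.column_stack(lead)

def is_column_reduced(cols):
    """G(s) is column reduced iff its leading column-coefficient matrix
    has full column rank (one of the conditions for a minimal basis)."""
    L = leading_column_matrix(cols)
    return bool(np.linalg.matrix_rank(L) == L.shape[1])

# Example: G(s) = [[1, s], [s, s^2 + 1]].  Both leading column
# coefficient vectors are [0, 1]^T, so G(s) is NOT column reduced,
# even though G(s) has full column rank for every finite s.
col1 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]                        # 1, s
col2 = [np.array([0.0, 1.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # s, s^2+1
print(is_column_reduced([col1, col2]))
```

The example also shows why the finite-rank condition alone is not enough: degree can be shifted between columns without changing the spanned space, and column-reducedness is what pins the degrees down to the minimal ones.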
- Research Article
15
- 10.1080/00207178308933111
- Oct 1, 1983
- International Journal of Control
The algebraic structure of the set of all proper rational vectors contained in a given rational vector space 𝒯(s) is shown to be that of a noetherian ℝpr(s)-module M*. (ℝpr(s) is the ring of proper rational functions.) The proper submodules Mt of M* form an ascending chain of submodules, partially ordered by an invariant of Mt defined as the valuation at s = ∞ of Mt. The various bases of Mt are examined and classified according to their property of 'column reducedness at s = ∞'. The concept of a prime, column reduced at s = ∞ basis of Mt is introduced. It is shown that the prime bases of Mt can be further classified by their MacMillan degrees, and the existence of minimal MacMillan degree bases for Mt is established. A prime and minimal MacMillan degree basis of Mt extends Forney's concept of a minimal polynomial basis of 𝒯(s) to the ℝpr(s)-module Mt. The MacMillan degrees of the columns of such bases form a set of invariants for Mt, which are defined as the generalized invariant dynamical indices of Mt, and a simple relation is established between (i) the generalized invariant dynamical indices of Mt, (ii) the orders of its zeros at s = ∞, and (iii) the Forney invariant dynamical indices of 𝒯(s). Finally, these results are specialized to the (maximal) noetherian ℝpr(s)-module M*; it is shown that in this case the 'generalized invariant dynamical indices' of M* coincide with the invariant dynamical indices of Forney for 𝒯(s), thus providing an alternative interpretation of the Forney 'invariant dynamical order' of 𝒯(s) as an absolute minimum of the MacMillan degree of any proper basis for 𝒯(s).
- Research Article
12
- 10.1109/tac.1984.1103456
- Dec 1, 1984
- IEEE Transactions on Automatic Control
The structure of proper and stable bases of rational vector spaces is investigated. We prove that if 𝒯(s) is a rational vector space, then among the proper bases of 𝒯(s) there is a subfamily of proper bases which 1) are stable, 2) have no zeros in ℂ ∪ {∞} and therefore are column (row) reduced at infinity, and 3) have MacMillan degree that is minimum among the MacMillan degrees of all other proper bases of 𝒯(s) and is given by the sum of the MacMillan degrees of their columns taken separately. It is shown that this notion is the counterpart of Forney's concept of a minimal polynomial basis for the family of proper and stable bases of 𝒯(s).
- Research Article
2
- 10.1080/00207179.2013.816869
- Nov 1, 2013
- International Journal of Control
For a general singular system with an associated pencil T(s), a complete classification of the right polynomial vector pairs connected with the rational vector space is given according to the proper–nonproper property, characterising the relationship between the degrees of the two vectors. An integral part of the classification of right pairs is the development of the notions of canonical and normal minimal bases for the associated rational vector spaces, where R(s) is the state restriction pencil of the system. It is shown that the notions of canonical and normal minimal bases are equivalent; the first notion characterises the purely algebraic aspect of the classification, whereas the second is intimately connected to the real geometry properties and the underlying generation mechanism of the proper and nonproper state vectors. The results describe the algebraic and geometric dimensions of the invariant partitioning of the set of reachability indices of singular systems. The classification of all proper and nonproper polynomial vectors induces a corresponding proper–nonproper classification of the reachability spaces, and results related to their possible dimensions and feedback-spectra assignment properties are also given. The classification of minimal bases introduces new feedback invariants for singular systems, based on the real geometry of polynomial minimal bases, and provides an extension of the standard theory for proper systems (Warren, M.E., & Eckberg, A.E., 1975).
- Research Article
16
- 10.1016/j.laa.2015.09.015
- Nov 17, 2015
- Linear Algebra and its Applications
Polynomial zigzag matrices, dual minimal bases, and the realization of completely singular polynomials
- Conference Article
- 10.23919/acc.1983.4788189
- Jun 1, 1983
The paper surveys various generalizations of the classical resultant matrix to give a test for the relative primeness of two polynomial matrices. Additionally, minimal bases for rational vector spaces as described by Forney are considered via a homogeneous two-variable approach, and it is shown that the generalised resultant actually provides a test for a minimal basis.
- Research Article
11
- 10.1080/00207177808922466
- Sep 1, 1978
- International Journal of Control
The problem of determining the structure of the basis matrices of all possible controllability subspaces (c.s.'s) of a controllable pair [Ã, B̃] in the Brunovsky (1966) and Luenberger (1967) controllable canonical form is considered. Departing from a characterization of the c.s.'s of [Ã, B̃] given by Warren and Eckberg (1975), it is shown that to every pair A, B in the Brunovsky (1966) and Luenberger (1967) controllable canonical form there corresponds a unique polynomial matrix X(s) which has a canonical structure. Using the results on rational vector spaces obtained by Forney (1975), it is seen that this polynomial matrix qualifies as a minimal basis which uniquely identifies a rational vector space. A correspondence between the polynomial n-tuples x(s) in this space and the c.s.'s of [Ã, B̃] leads to simple expressions that describe the structure of the bases of all c.s.'s of [Ã, B̃] of all possible dimensions.
- Conference Article
- 10.1109/cdc.1981.269296
- Dec 1, 1981
By considering a convolutional code as the range space of some linear map over a rational field, we define minimal polynomial bases for a rational vector space as a natural extension of the concept of minimal convolutional encoders. Furthermore, by using the notion of dual spaces, it is shown how relatively straightforward it is to construct a minimal polynomial basis for the direct space. An application of this concept to multivariable linear system theory is also noted by constructing left and right standard matrix factorizations of proper rational matrices, which generalize to the multivariable case the classical representation of a proper rational function as a ratio of two relatively prime polynomials with denominator degree greater than or equal to that of the numerator.
- Book Chapter
2
- 10.1007/978-1-4615-0895-3_20
- Jan 1, 2002
Let F denote the field of rational functions over some base field. Every subspace V of Fⁿ has a polynomial basis. A polynomial basis having minimal possible degrees is called a minimal basis of V. It was shown by G.D. Forney that minimal bases always exist and that these bases are of great importance in multivariable systems theory and convolutional coding theory. Keywords: vector spaces over the rationals, multivariable systems, vector bundles over the projective line.
- Research Article
39
- 10.1145/1644015.1644023
- Dec 1, 2009
- ACM Transactions on Algorithms
We consider the problem of computing exact or approximate minimum cycle bases of an undirected (or directed) graph G with m edges, n vertices and nonnegative edge weights. In this problem, a {0, 1} ({−1, 0, 1}) incidence vector is associated with each cycle and the vector space over F₂ (Q) generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, for example, the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. There exists a set of Θ(mn) cycles which is guaranteed to contain a minimum cycle basis. A minimum basis can be extracted by Gaussian elimination. The resulting algorithm [Horton 1987] was the first polynomial-time algorithm. Faster and more complicated algorithms have been found since then. We present a very simple method for extracting a minimum cycle basis from the candidate set with running time O(m²n), which improves the running time for sparse graphs. Furthermore, in the undirected case, by using bit-packing we improve the running time also in the case of dense graphs. For undirected graphs we derive an O(m²n/log n + n²m) algorithm. For directed graphs we get an O(m³n) deterministic and an O(m²n) randomized algorithm. Our results improve the running times of both exact and approximate algorithms. Finally, we derive a smaller candidate set with size in Ω(m) ∩ O(mn).
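The candidate-set-plus-elimination pipeline described in this abstract can be sketched compactly. The following is an illustrative sketch of the classical Horton scheme on a simple connected undirected graph, not the paper's optimized algorithm: candidates C(v, e) are built from shortest-path trees, sorted by weight, and GF(2) Gaussian elimination on edge-incidence bitmasks greedily extracts an independent set.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances and predecessors from src (Dijkstra)."""
    n = len(adj)
    dist, pred = [float("inf")] * n, [-1] * n
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], pred[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, pred

def min_cycle_basis(n, edges):
    """Sketch of Horton's method on a simple connected graph.

    edges: list of (x, y, weight).  Returns cycles as frozensets of
    edge indices."""
    eidx, adj = {}, [[] for _ in range(n)]
    for i, (x, y, w) in enumerate(edges):
        eidx[(x, y)] = eidx[(y, x)] = i
        adj[x].append((y, w))
        adj[y].append((x, w))
    trees = [dijkstra(adj, v) for v in range(n)]

    def tree_path(v, t):
        # edge indices on the shortest-path-tree walk from t back to v
        pred, es = trees[v][1], set()
        while t != v:
            es.add(eidx[(pred[t], t)])
            t = pred[t]
        return es

    # Horton candidates C(v, e): SP(v,x) xor SP(v,y) plus edge e = (x, y)
    candidates = set()
    for v in range(n):
        for i, (x, y, w) in enumerate(edges):
            es = tree_path(v, x) ^ tree_path(v, y)
            if i not in es:
                candidates.add(frozenset(es | {i}))

    # Greedy extraction in weight order; independence over GF(2) tested by
    # Gaussian elimination on edge-incidence bitmasks (highest-bit pivots).
    dim = len(edges) - n + 1          # cycle-space dimension, connected graph
    pivots, basis = {}, []
    for cyc in sorted(candidates, key=lambda c: sum(edges[i][2] for i in c)):
        vec = sum(1 << i for i in cyc)
        while vec:
            h = vec.bit_length() - 1
            if h not in pivots:
                pivots[h] = vec
                basis.append(cyc)
                break
            vec ^= pivots[h]
        if len(basis) == dim:
            break
    return basis

# K4 with unit weights: cycle space has dimension 3, and a minimum
# cycle basis consists of three triangles of total weight 9.
K4 = [(0, 1, 1.0), (0, 2, 1.0), (0, 3, 1.0),
      (1, 2, 1.0), (1, 3, 1.0), (2, 3, 1.0)]
mcb = min_cycle_basis(4, K4)
print(len(mcb), sum(K4[i][2] for c in mcb for i in c))
```

This naive version enumerates all O(mn) candidates and reduces each against the growing basis; the paper's contribution is precisely doing that extraction faster than plain elimination over the full candidate set.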
- Research Article
72
- 10.13001/1081-3810.1320
- Jan 1, 2009
- The Electronic Journal of Linear Algebra
A standard way of dealing with a regular matrix polynomial P(λ) is to convert it into an equivalent matrix pencil – a process known as linearization. Two vector spaces of pencils L1(P) and L2(P) that generalize the first and second companion forms have recently been introduced by Mackey, Mackey, Mehl and Mehrmann. Almost all of these pencils are linearizations for P(λ) when P is regular. The goal of this work is to show that most of the pencils in L1(P) and L2(P) are still linearizations when P(λ) is a singular square matrix polynomial, and that these linearizations can be used to obtain the complete eigenstructure of P(λ), comprised not only of the finite and infinite eigenvalues but also, for singular polynomials, of the left and right minimal indices and minimal bases. We show explicitly how to recover the minimal indices and bases of the polynomial P(λ) from the minimal indices and bases of linearizations in L1(P) and L2(P). As a consequence of the recovery formulae for minimal indices, we prove that the vector space DL(P) = L1(P) ∩ L2(P) will never contain any linearization for a square singular polynomial P(λ). Finally, the results are extended to other linearizations of singular polynomials defined in terms of more general polynomial bases.
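The companion-form linearization underlying L1(P) and L2(P) is easy to demonstrate in the regular case. The sketch below, using randomly chosen coefficients rather than anything from the paper, builds the block companion matrix of a monic quadratic P(λ) = λ²I + λA₁ + A₀ and checks that every eigenvalue of the linearization makes P(λ) singular.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A0 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n))

# Monic quadratic matrix polynomial P(lambda) = lambda^2 I + lambda A1 + A0.
# Its first companion linearization is the pencil lambda I_{2n} - C with the
# block companion matrix C below; a Schur-complement computation gives
# det(lambda I - C) = det(P(lambda)), so C carries the eigenvalues of P.
C = np.block([[-A1, -A0],
              [np.eye(n), np.zeros((n, n))]])
eigs = np.linalg.eigvals(C)

def P(lam):
    return lam**2 * np.eye(n) + lam * A1 + A0

# Each of the 2n eigenvalues of the linearization should make P singular.
residuals = [abs(np.linalg.det(P(lam))) for lam in eigs]
print(len(eigs), max(residuals))
```

For a singular P(λ) the determinant identity degenerates (det P ≡ 0), which is exactly why the minimal indices and bases, rather than eigenvalues alone, must be recovered from the linearization as the abstract describes.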
- Book Chapter
- 10.1007/978-0-8176-4529-8_2
- Jan 1, 2006
This chapter introduces vector spaces and linear maps between them, and it goes on to develop certain constructions of new vector spaces out of old, as well as various properties of determinants. Sections 1–2 define vector spaces, spanning, linear independence, bases, and dimension. The sections make use of row reduction to establish dimension formulas for certain vector spaces associated with matrices. They conclude by stressing methods of calculation that have quietly been developed in proofs. Section 3 relates matrices and linear maps to each other, first in the case that the linear map carries column vectors to column vectors and then in the general finite-dimensional case. Techniques are developed for working with the matrix of a linear map relative to specified bases and for changing bases. The section concludes with a discussion of isomorphisms of vector spaces. Sections 4–6 take up constructions of new vector spaces out of old ones, together with corresponding constructions for linear maps. The four constructions of vector spaces in these sections are those of the dual of a vector space, the quotient of two vector spaces, and the direct sum and direct product of two or more vector spaces. Section 7 introduces determinants of square matrices, together with their calculation and properties. Some of the results that are established are expansion in cofactors, Cramer's rule, and the value of the determinant of a Vandermonde matrix. It is shown that the determinant function is well defined on any linear map from a finite-dimensional vector space to itself. Section 8 introduces eigenvectors and eigenvalues for matrices, along with their computation. Also in this section, the characteristic polynomial and the trace of a square matrix are defined, and all these notions are reinterpreted in terms of linear maps. Section 9 proves the existence of bases for infinite-dimensional vector spaces and discusses the extent to which the material of the first eight sections extends from the finite-dimensional case to be valid in the infinite-dimensional case.
- Conference Article
1
- 10.1109/tfsa.1998.721398
- Oct 6, 1998
It is shown that the redundant decomposition of a discrete-time signal by the block polynomial time-frequency transform (PTFT) can be implemented in a very efficient way. First, the redundancy of the decomposition of a discrete-time signal by a block transform defined by a special singular transformation matrix is discussed, and its relation to an oversampled, power and allpass complementary, KN-channel filter bank is illustrated. In the considered block transform, the singular matrix can be partitioned into K subsets of unitary systems of vectors. Based on the parallels between unitary transforms and filter banks, namely that any block unitary transform can be viewed as a perfect reconstruction filter bank, the considered block transform can be related to an oversampled KN-channel filter bank which can be partitioned into K maximally decimated, power and allpass complementary, filter banks. As a result, the computation of the frequency domain representation of a block of signal of length N, computed at M > N not necessarily uniformly spaced frequencies, can require less computation and be more efficient than a computation using a fast M-point FFT. It is shown that a fast decomposition of a discrete-time signal onto bases in vector spaces by the polynomial time-frequency transform is possible in a very similar way.
- Research Article
- 10.14738/tmlai.71.6070
- Feb 28, 2019
- Transactions on Machine Learning and Artificial Intelligence
We define some elementary terminology, including vector space, linear combination, sets of independent and dependent vectors, basis of a vector space, and direct sum of subspaces. This theory can help us lower the dimension of a given vector space. We apply it to multivariate linear multiple regression analysis. It not only simplifies the computation and eases the interpretation, but also reduces the rate of errors. Cook (2010) developed an envelope model for the same reason. The main objective in that model is decomposing the covariance matrix into the sum of two matrices, each of whose column spaces either contains, or is orthogonal to, the subspace containing the mean. In other words, it breaks the covariance matrix into a direct sum of subspaces.
- Book Chapter
3
- 10.1007/978-3-642-54903-8_30
- Jan 1, 2014
The idea of Relevance Feedback is to take the results that are initially returned from a given query and to use information about whether or not those results are relevant to perform a new query. The most commonly used Relevance Feedback methods aim to rewrite the user query. In the Vector Space Model, Relevance Feedback is usually undertaken by re-weighting the query terms without any modification of the vector space basis. With respect to the initial vector space basis (the index terms), relevant and irrelevant documents share some terms, at least the terms of the query which selected these documents. In this paper we propose a new Relevance Feedback method based on vector space basis change without any modification of the query term weights. The aim of our method is to build a basis which optimally separates relevant and irrelevant documents. That is, this vector space basis gives a better representation of the documents, such that the relevant documents are gathered together and the irrelevant documents are kept away from the relevant ones.