A vector space basis of the quantum symplectic sphere
In this paper, we present a candidate vector space basis for the noncommutative algebra [Formula: see text] of the quantum symplectic sphere for every [Formula: see text]. The algebra [Formula: see text] is defined as a certain subalgebra of the quantum symplectic group [Formula: see text]. A nontrivial application of the Diamond Lemma is used to construct the vector space basis, and the conjecture is supported by computer experiments for [Formula: see text].
- Conference Article
1
- 10.1109/tfsa.1998.721398
- Oct 6, 1998
It is shown that the redundant decomposition of a discrete-time signal by the block polynomial time-frequency transform (PTFT) can be implemented very efficiently. First, the redundancy of the decomposition of a discrete-time signal by a block transform defined by a special singular transformation matrix is discussed, and its relation to an oversampled, power- and allpass-complementary, KN-channel filter bank is illustrated. In the considered block transform, the singular matrix can be partitioned into K subsets of unitary systems of vectors. The parallels between unitary transforms and filter banks, namely that any block unitary transform can be viewed as a perfect reconstruction filter bank, allow us to relate the considered block transform to an oversampled KN-channel filter bank that can be partitioned into K maximally decimated, power- and allpass-complementary filter banks. As a result, computing the frequency-domain representation of a signal block of length N at M>N not necessarily uniformly spaced frequencies can require less computation, and can be more efficient, than computing it with a fast M-point FFT. It is shown that a fast decomposition of a discrete-time signal onto bases in vector spaces by the polynomial time-frequency transform is possible in a very similar way.
- Book Chapter
3
- 10.1007/978-3-642-54903-8_30
- Jan 1, 2014
The idea of Relevance Feedback is to take the results initially returned by a given query and to use information about whether or not those results are relevant to perform a new query. The most commonly used Relevance Feedback methods aim to rewrite the user query. In the Vector Space Model, Relevance Feedback is usually undertaken by re-weighting the query terms without any modification of the vector space basis. With respect to the initial vector space basis of index terms, relevant and irrelevant documents share some terms, at least the terms of the query that selected these documents. In this paper we propose a new Relevance Feedback method based on vector space basis change, without any modification of the query term weights. The aim of our method is to build a basis which optimally separates relevant and irrelevant documents. That is, this vector space basis gives a better representation of the documents, such that the relevant documents are gathered together and the irrelevant documents are kept away from the relevant ones.
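As a toy illustration of basis change for feedback (not the authors' actual construction), one can rotate the term-space basis so that its first axis points along the difference of the relevant and irrelevant centroids, leaving document weights untouched; all names and data below are our own placeholders:

```python
import numpy as np

def separating_basis(relevant, irrelevant):
    """Orthonormal basis whose first axis separates the two groups.
    Rows of `relevant`/`irrelevant` are document vectors in term space."""
    w = relevant.mean(axis=0) - irrelevant.mean(axis=0)  # separating direction
    n = w.size
    # Complete w to a spanning set, then orthonormalize with QR;
    # the first column of Q is w normalized (up to sign).
    M = np.column_stack([w, np.eye(n)])
    Q, _ = np.linalg.qr(M)
    return Q[:, :n]

rel = np.array([[1.0, 1, 0], [0.9, 1.1, 0.1]])   # toy relevant documents
irr = np.array([[0.0, 0, 1], [0.1, 0.1, 0.9]])   # toy irrelevant documents
B = separating_basis(rel, irr)
# Coordinates in the new basis: the first coordinate separates the groups.
print((rel @ B)[:, 0], (irr @ B)[:, 0])
```

Only the coordinate system changes; the query term weights are never modified, which is the distinguishing feature of the approach described above.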
- Conference Article
1
- 10.1109/cwit.2011.5872104
- May 1, 2011
Summary form only given. In classical coding theory, information transmission is modeled as vector transmission: the transmitter sends a vector, the receiver gathers a vector possibly perturbed by noise, and the coding problem is to design a codebook having a large minimum distance between vectors. In this talk we generalize to the case of network coding and, motivated by the property that linear network coding is vector-space preserving, we model information transmission as vector-space transmission: the transmitter sends a (basis for a) vector space, the receiver gathers a (basis for a) vector space possibly perturbed by noise, and the coding problem is to design a codebook having a large minimum distance between vector spaces. We will show that so-called “lifted” maximum rank distance (MRD) codes such as Gabidulin codes play essentially the same role as that played by maximum distance separable (MDS) codes such as Reed-Solomon codes, both for information transmission in the presence of adversarial errors and for security against a wiretapper. When errors are introduced randomly (rather than chosen by an adversary), we show that a simple matrix-based coding scheme can approach capacity. Finally, we describe how some of these ideas may be useful in the context of lattice-theoretic physical-layer network-coding schemes based on compute-and-forward relaying.
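The distance between vector spaces that such codes maximize is the subspace distance d(U, V) = dim(U + V) - dim(U ∩ V). A minimal numerical sketch follows; it works over the reals for illustration (the codes themselves live over finite fields), and the function name is ours:

```python
import numpy as np

def subspace_distance(U, V):
    """Subspace distance d(U, V) = dim(U+V) - dim(U ∩ V).

    U, V: matrices whose rows span the two subspaces.
    Uses dim(U ∩ V) = dim U + dim V - dim(U+V), so
    d(U, V) = 2*dim(U+V) - dim U - dim V.
    """
    dim_U = np.linalg.matrix_rank(U)
    dim_V = np.linalg.matrix_rank(V)
    dim_sum = np.linalg.matrix_rank(np.vstack([U, V]))
    return 2 * dim_sum - dim_U - dim_V

# Two planes in R^3 sharing a common line:
U = np.array([[1.0, 0, 0], [0, 1, 0]])   # xy-plane
V = np.array([[1.0, 0, 0], [0, 0, 1]])   # xz-plane
print(subspace_distance(U, V))  # dim(U+V)=3, dims 2 and 2 -> distance 2
```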
- Research Article
22
- 10.1090/s0002-9939-1966-0194340-1
- Jan 1, 1966
- Proceedings of the American Mathematical Society
1. Because of the nonconstructive nature of the axiom of choice there has been much interest in how much of it is needed for various theories. In the case of the theory of vector spaces it appears that one would want to save at least the following two consequences of AC: (1) Every vector space has a basis and (2) Any two bases of a given vector space are equipollent. The question immediately arises: Have we saved the whole axiom of choice; namely is the axiom of choice a logical consequence of (1) and (2) and the other axioms of some appropriate set theory? This question remains open and the author conjectures a negative solution. However, we are able to show that a reasonable strengthening of (1), which is also a consequence of AC, implies AC, namely the universal generalization of Proposition 2 of [1], which we will call the downward basis principle:
- Research Article
38
- 10.1016/j.ejc.2009.11.014
- Jan 19, 2010
- European Journal of Combinatorics
On the equivalence between real mutually unbiased bases and a certain class of association schemes
- Conference Article
- 10.1109/cecnet.2012.6202259
- Apr 1, 2012
To reduce the computational cost that extracting the orthogonal basis of the equalizer coefficient vector space by Singular Value Decomposition (SVD) brings to the two-step equalization algorithm, a low-cost implementation of the blind two-step equalization algorithm is proposed, which obtains the orthogonal basis of the equalizer coefficient vector space by applying Gram-Schmidt orthogonalization to the first P columns of the inverse of the measurement auto-correlation matrix. This reduces the computational complexity from O(K^3) to O(KP^2), where P ≪ K. An adaptive implementation of the low-cost method is presented to update the equalizer coefficient vector in real time, with computational complexity O(K^2). Numerical simulations show that the low-cost method is computationally simple while matching the performance of the original one, and that the adaptive implementation has a higher convergence speed and smaller steady-state residual error than existing adaptive algorithms.
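The core step described, Gram-Schmidt orthogonalization of the first P columns of a K x K matrix, can be sketched as follows (a generic modified Gram-Schmidt, not the paper's exact implementation):

```python
import numpy as np

def gram_schmidt(A, P):
    """Orthonormal basis for the span of the first P columns of A
    (modified Gram-Schmidt; cost O(K * P^2) for a K x K input)."""
    K = A.shape[0]
    Q = np.zeros((K, P))
    for j in range(P):
        v = A[:, j].astype(float)
        for i in range(j):
            v = v - (Q[:, i] @ v) * Q[:, i]   # remove earlier components
        Q[:, j] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Q = gram_schmidt(A, 3)
print(np.allclose(Q.T @ Q, np.eye(3)))   # columns are orthonormal -> True
```

The O(KP^2) count above comes from the P(P-1)/2 inner projections, each touching K entries, which is the source of the savings over the O(K^3) SVD.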
- Book Chapter
- 10.1016/b978-1-4832-3208-9.50006-1
- Jan 1, 1968
- Linear Algebra
2 - Further Properties of Vector Spaces
- Research Article
- 10.14738/tmlai.71.6070
- Feb 28, 2019
- Transactions on Machine Learning and Artificial Intelligence
We define some elementary terminology: vector space, linear combination, sets of independent and dependent vectors, basis of a vector space, and direct sum of subspaces. This theory can help us lower the dimension of a given vector space. We apply it to multivariate linear multiple regression analysis. It not only simplifies the computation and eases the interpretation, but also reduces the rate of errors. Cook (2010) developed an envelope model for the same reason. The main objective in that model is decomposing the covariance matrix into the sum of two matrices, each of whose column spaces either contains, or is orthogonal to, the subspace containing the mean; in other words, breaking the covariance matrix into the direct sum of subspaces.
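The direct-sum condition mentioned, R^n = U ⊕ V, can be verified numerically from spanning sets: the sum is direct and fills R^n exactly when dim U + dim V = dim(U + V) = n (which forces U ∩ V = {0}). The sketch below is our own illustration, with invented function name and data:

```python
import numpy as np

def is_direct_sum(U, V, n):
    """Check R^n = U ⊕ V, where rows of U and V span the subspaces."""
    dim_U = np.linalg.matrix_rank(U)
    dim_V = np.linalg.matrix_rank(V)
    dim_sum = np.linalg.matrix_rank(np.vstack([U, V]))
    # direct sum <=> dim U + dim V = dim(U+V) = n, so U ∩ V = {0}
    return dim_sum == n and dim_U + dim_V == n

U = np.array([[1.0, 0, 0], [0, 1, 0]])   # a plane in R^3
V = np.array([[1.0, 1, 1]])              # a line not in that plane
print(is_direct_sum(U, V, 3))            # True: plane ⊕ line fills R^3
```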
- Book Chapter
2
- 10.1007/978-3-030-37858-5_64
- Jan 1, 2019
This paper presents an approach to Russian text vectorization based on the SRSTI classifier. Our approach uses SRSTI categories as vector space dimensions; the categories are defined by lists of keywords. We explain our choice of SRSTI as a basis for the vector space, describe the keyword selection process as well as the vector calculation and comparison algorithm, and apply the developed algorithm to marked-up SRSTI texts and user social profiles. We also suggest approaches to improving the vector space and evaluate them.
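The keyword-per-category vectorization described can be sketched generically; the categories and keyword lists below are invented placeholders, not actual SRSTI rubrics:

```python
# Toy sketch: one vector dimension per category, valued by how many of
# that category's keywords occur in the text.
categories = {
    "mathematics": ["theorem", "proof", "algebra"],
    "physics": ["quantum", "energy", "particle"],
}

def vectorize(text, categories):
    """Map a text to a vector indexed by category keyword hits."""
    words = text.lower().split()
    return [sum(words.count(k) for k in kws) for kws in categories.values()]

v = vectorize("A quantum theorem with a proof", categories)
print(v)  # [2, 1]: two mathematics keywords, one physics keyword
```

Comparing two texts then reduces to comparing their category vectors, e.g. by cosine similarity.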
- Research Article
4
- 10.2307/2035388
- Jun 1, 1966
- Proceedings of the American Mathematical Society
Bases in Vector Spaces and the Axiom of Choice
- Research Article
1
- 10.1007/s13370-016-0394-3
- Feb 13, 2016
- Afrika Matematika
In linear network coding, information is encoded as a basis of a vector space and received as a basis of a possibly altered vector space. In the constant-dimension case, Koetter and Kschischang introduced a metric on the Grassmannian and proved efficient and correct decoding with respect to this metric. Here we introduce a second-order invariant of the code: the minimum dimension of the linear span of 3 distinct linear subspaces belonging to the code. This is the case \(s=3\) of a family \(d_s\) and \(d'_s\), \(s\ge 3\), of invariants of network codes. We study these invariants in a case recently proposed by Hansen (the set of all osculating spaces of a Veronese embedding of a finite projective space) and in a related case (the set of osculating spaces to curves of positive genus), with a complete description for elliptic curves and for cases related to the Hermitian curve.
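The invariant described (the minimum dimension of the span of s distinct code subspaces) admits a direct brute-force computation. The sketch below is our own toy illustration, using 1-dimensional subspaces of R^3 rather than subspaces over a finite field:

```python
import numpy as np
from itertools import combinations

def d_s(code, s=3):
    """Minimum dimension of the span of s distinct subspaces in `code`
    (each subspace given by a matrix whose rows span it)."""
    return min(np.linalg.matrix_rank(np.vstack(group))
               for group in combinations(code, s))

# Four lines in R^3: a toy constant-dimension code of 1-dim subspaces.
code = [np.array([[1.0, 0, 0]]),
        np.array([[0.0, 1, 0]]),
        np.array([[0.0, 0, 1]]),
        np.array([[1.0, 1, 0]])]
# The triple {e1, e2, (1,1,0)} is coplanar, so the invariant is 2:
print(d_s(code))
```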
- Research Article
733
- 10.1137/0313029
- May 1, 1975
- SIAM Journal on Control
A minimal basis of a vector space V of n-tuples of rational functions is defined as a polynomial basis such that the sum of the degrees of the basis n-tuples is minimum. Conditions for a matrix G to represent a minimal basis are derived. By imposing additional conditions on G we arrive at a minimal basis for V that is unique. We show how minimal bases can be used to factor a transfer function matrix G in the form $G = ND^{ - 1} $, where N and D are polynomial matrices that display the controllability indices of G and its controller canonical realization. Transfer function matrices G solving equations of the form $PG = Q$ are also obtained by this method; applications to the problem of finding minimal-order inverse systems are given. Previous applications to convolutional coding theory are noted. This range of applications suggests that minimal basis ideas will be useful throughout the theory of multivariable linear systems. A restatement of these ideas in the language of valuation theory is given in an Appendix.
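One of the conditions involved, row-reducedness (the high-order coefficient matrix of G having full row rank), can be checked mechanically. The sketch below is our own illustration, with polynomial entries represented as coefficient lists (lowest degree first); it checks this one necessary condition, not the paper's full characterization of minimal bases:

```python
import numpy as np

def is_row_reduced(G):
    """G is row-reduced iff its high-order coefficient matrix has
    full row rank. Entries are coefficient lists, lowest degree first."""
    def deg(p):
        nz = [i for i, c in enumerate(p) if c != 0]
        return max(nz) if nz else -1
    rows = []
    for row in G:
        d = max(deg(p) for p in row)                    # row degree
        # take the leading coefficient only where the entry attains
        # the row degree; other entries contribute 0
        rows.append([p[d] if deg(p) == d else 0 for p in row])
    Gh = np.array(rows, dtype=float)
    return np.linalg.matrix_rank(Gh) == Gh.shape[0]

# G = [[s, 1], [0, 1]]: high-order coefficient matrix [[1,0],[0,1]]
G = [[[0, 1], [1]], [[0], [1]]]
print(is_row_reduced(G))  # True
```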
- Book Chapter
- 10.1007/978-3-319-78361-1_10
- Jan 1, 2018
In Chap. 7 we studied the operation of changing a basis for a real vector space. In particular, in Theorem 7.9.6 and Remark 7.9.7 there, we showed that any matrix giving a change of basis for the vector space \(\mathbb R^n\) is an invertible \(n\times n\) matrix, and noticed that any invertible \(n\times n\) matrix yields a change of basis for \(\mathbb R^n\).
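The correspondence between invertible matrices and changes of basis can be sketched numerically (a generic illustration, not tied to the chapter's notation): the columns of an invertible B are the new basis vectors, and the coordinates of a vector in that basis are obtained by solving Bc = x.

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # invertible (det = 1): columns are a basis
x = np.array([3.0, 2.0])         # a vector in the standard basis
c = np.linalg.solve(B, x)        # coordinates of x in the basis {(1,0),(1,1)}
print(c)                         # [1. 2.]  since 1*(1,0) + 2*(1,1) = (3,2)
assert np.allclose(B @ c, x)     # reassembling recovers x
```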
- Research Article
123
- 10.1016/j.ymssp.2008.09.009
- Oct 17, 2008
- Mechanical Systems and Signal Processing
Similarity of signal processing effect between Hankel matrix-based SVD and wavelet transform and its mechanism analysis
- Research Article
- 10.3389/fams.2022.855862
- Jun 20, 2022
- Frontiers in Applied Mathematics and Statistics
Linear functional analysis, historically founded by Fourier and Legendre, played a significant role in providing a unified vision of mathematical transformations between vector spaces. We explore the possibility of extending this approach when the basis of a vector space is built Tailored to the Problem Specificity (TPS), rather than chosen for the convenience or effectiveness of mathematical calculations. Standardized mathematical transformations, such as the Fourier or polynomial transforms, could be extended toward TPS methods on a basis which properly encodes specific knowledge about a problem. The transition between methods is illustrated by comparing the conventional Fourier transform with the development of the Jewett Transform, reported in previous articles. The proper use of computational intelligence tools to perform the Jewett Transform allowed optimization of algorithmic complexity, which encourages the search for a general TPS methodology.