ON THE KERNEL METHOD COMPARED TO THE MATRIX INVERSION METHOD FOR PASSING FROM A DUAL BASIS TO THE BASIS OF A VECTOR SPACE

Abstract

In this work, we study and compare two methods for passing from a dual basis back to a basis of a finite-dimensional vector space, after recalling the passage from a basis to its dual. For this inverse transition, we clarify the kernel method for linear forms and the matrix inversion method: the former exploits the properties of linear forms and orthogonality, while the latter relies on the explicit inversion of transition matrices; remarks are given on each approach depending on the context of application. The study shows that, although matrix inversion is the more direct method, the kernel method can offer a more elegant and efficient alternative in certain cases, especially when the vector space carries particular structure.
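To make the two transitions concrete, here is a minimal numerical sketch (not from the paper; numpy-based, with illustrative function names). Writing the coordinates of the dual-basis forms as the rows of a matrix F, the matrix inversion method reads the primal basis off the columns of F⁻¹ (since F·E = I), while the kernel method recovers each basis vector from the common kernel of the remaining linear forms:

```python
import numpy as np

def basis_from_dual_inversion(F):
    """Given the dual-basis functionals as the rows of F, the primal
    basis vectors are the columns of F^{-1}, because F @ E = I."""
    return np.linalg.inv(F)

def basis_from_dual_kernel(F):
    """Kernel method: e_j spans the intersection of the kernels of all
    f_i with i != j, scaled so that f_j(e_j) = 1."""
    n = F.shape[0]
    E = np.zeros((n, n))
    for j in range(n):
        others = np.delete(F, j, axis=0)   # drop the j-th functional
        # A vector spanning the null space of the (n-1) x n system:
        # the last right-singular vector of the SVD.
        _, _, Vt = np.linalg.svd(others)
        v = Vt[-1]
        E[:, j] = v / (F[j] @ v)           # normalize so f_j(e_j) = 1
    return E
```

Both routines return the same basis (up to floating-point error); the kernel version never forms an explicit inverse, which is the structural advantage the abstract alludes to.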

Similar Papers
  • Research Article
  • 10.1585/jspf.75.444
Improved Confinement and Neoclassical Effect.
  • Jan 1, 1999
  • Journal of Plasma and Fusion Research
  • Hiroshi Shirai + 3 more

The Matrix Inversion (MI) method is developed in order to estimate the neoclassical ion thermal diffusivity, χNC, correctly. In the MI method, the effects of impurities on χNC are treated self-consistently and no approximation on the friction and viscosity matrix elements is made. A comparison of the χNC value between the MI method and the Chang-Hinton (CH) formula shows that the CH formula overestimates the χNC value by a factor of two relative to the MI method for typical JT-60U plasmas. The different dependence of χNC on the inverse aspect ratio, ε, the effective charge number, Zeff, and the normalized ion collisionality, ν*i, between the MI method and the CH formula is presented. The radial electric field, Er, is estimated from the momentum and heat momentum balance equations parallel to the magnetic field. Profiles of Er are compared between the two types of improved core confinement plasmas with an internal transport barrier (ITB) in JT-60U, the “parabolic type ITB” and the “box type ITB”. In the parabolic type ITB, the experimentally estimated thermal diffusivity, χexp, decreases in the core region; however, the Er shear is not so strong. In the box type ITB, a strong Er shear appears inside the ITB layer and the χexp value decreases drastically to the level of χNC. The E×B shearing rate becomes almost the same as the linear growth rate of the drift microinstability inside the ITB layer of the box type ITB.

  • Book Chapter
  • 10.1007/978-3-319-56264-3_3
The Finite-Dimensional Real Vector Space
  • Jan 1, 2017
  • Uwe Mühlich

The chapter introduces the notion of the finite-dimensional real vector space together with fundamental concepts like linear independence, vector space basis, and vector space dimension. The discussion of linear mappings between vector spaces prepares the ground for introducing the dual space and its basis. Finally, inner product space and reciprocal basis are contrasted with dual space and the corresponding dual basis.

  • Research Article
  • 10.30598/barekengvol17iss4pp2207-2212
A DIFFERENTIABLE STRUCTURE ON A FINITE DIMENSIONAL REAL VECTOR SPACE AS A MANIFOLD
  • Dec 19, 2023
  • BAREKENG: Jurnal Ilmu Matematika dan Terapan
  • Edi Kurniadi

There are three conditions for a topological space to be a topological manifold of dimension n: it is a Hausdorff space, it is second-countable, and every point has a neighborhood homeomorphic to an open subset of ℝⁿ (that is, the space is n-dimensional locally Euclidean). A differentiable structure is given if, for any two charts, either their domains are disjoint or the transition map between them is differentiable. In this article, we study differentiable manifold structures on finite-dimensional real vector spaces. The aim is to prove that any finite-dimensional vector space is a differentiable manifold. First, it is proved that a finite-dimensional vector space is a topological manifold by equipping it with the topology induced by a norm; any two norms on a finite-dimensional vector space are equivalent, and they therefore determine the same topology. Secondly, it is proved that the transition maps on the finite-dimensional vector space are differentiable. In conclusion, any finite-dimensional vector space, with a norm topology independent of the choice of norm, is a differentiable manifold. As a matter of further discussion, the vector space of all linear operators on a finite-dimensional vector space can be shown to carry a differentiable manifold structure as well.

  • Research Article
  • Cited by 7
  • 10.1145/321510.321522
Inversion of Matrices by Partitioning
  • Apr 1, 1969
  • Journal of the ACM
  • Marshall C Pease

The inversion of nonsingular matrices is considered. A method is developed which starts with an arbitrary partitioning of the given matrix. The separate submatrices are grouped into sets determined by the nonzero entries of some appropriate group, G, of permutation matrices. The group structure of G then establishes a sequence of operations on these sets of submatrices from which the corresponding representation of the inverse is obtained. Whether the method described is to be preferred to, say, Gauss's algorithm will depend on the capabilities that are required by other parts of the algorithm that is to be implemented in the special-purpose parallel computer. The basic speed, measured by the count of parallel multiplications and divisions, is comparable to that obtained with Gauss's algorithm and is slightly better under certain conditions. The principal difference is that this method uses primarily matrix multiplication, whereas Gauss's algorithm uses primarily row combinations. When the special-purpose computer under design must supply this capability anyway, the method developed here should be considered. Application of the process is limited to matrices for which we can set up a partitioning such that we can guarantee, a priori, that certain of the submatrices are nonsingular. Hence the method is not useful for arbitrary nonsingular matrices. However, it can be applied to certain important classes of matrices, notably those that are “dominated by the diagonal.” Noise covariance matrices are of this type; therefore the method can be applied to them. The inversion of a noise covariance matrix is required in some problems of optimal prediction and control. It is for applications of this sort that the method seems particularly attractive.
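The partitioning idea can be illustrated with the classical 2×2 block inverse via the Schur complement, the building block such methods rest on. A minimal sketch (not the paper's algorithm), assuming the leading block and its Schur complement are nonsingular, i.e. the a-priori condition the abstract requires:

```python
import numpy as np

def block_inverse(M, k):
    """Invert M by partitioning it as [[A, B], [C, D]] with A = M[:k, :k],
    assuming A and its Schur complement S = D - C A^{-1} B are nonsingular."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                   # Schur complement of A
    Sinv = np.linalg.inv(S)
    # Standard block-inversion formula, built from matrix products only.
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bot_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right], [bot_left, Sinv]])
```

Note that everything after the two small inversions is matrix multiplication, which is exactly the property the abstract highlights for parallel hardware. Diagonally dominant matrices (such as noise covariance matrices) guarantee the nonsingularity assumptions.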

  • Research Article
  • Cited by 56
  • 10.1111/j.1365-2478.1991.tb00341.x
AN ALTERNATIVE STRATEGY FOR NON‐LINEAR INVERSION OF SEISMIC WAVEFORMS
  • Aug 1, 1991
  • Geophysical Prospecting
  • M S Sambridge + 2 more

A common example of a large-scale non-linear inverse problem is the inversion of seismic waveforms. Techniques used to solve this type of problem usually involve finding the minimum of some misfit function between observations and theoretical predictions. As the size of the problem increases, techniques requiring the inversion of large matrices become very cumbersome. Considerable storage and computational effort are required to perform the inversion and to avoid stability problems. Consequently, methods which do not require any large-scale matrix inversion have proved to be very popular. Currently, descent-type algorithms are in widespread use. Usually, at each iteration a descent direction is derived from the gradient of the misfit function and an improvement is made to an existing model based on this, and perhaps previous, descent directions.

A common feature in nearly all geophysically relevant problems is the existence of separate parameter types in the inversion, i.e. unknowns of different dimension and character. However, this fundamental difference in parameter types is not reflected in the inversion algorithms used. Usually gradient methods either mix parameter types together and take little notice of their individual character, or assume some knowledge of their relative importance within the inversion process.

We propose a new strategy for the non-linear inversion of multi-offset reflection data. The paper is entirely theoretical and its aim is to show how a technique which has been applied in reflection tomography and to the inversion of arrival times for 3D structure may be used in the waveform case. Specifically, we show how to extend the algorithm presented by Tarantola to incorporate the subspace scheme. The proposed strategy involves no large-scale matrix inversion but pays particular attention to the different parameter types in the inversion.

We use the formulae of Tarantola to state the problem as one of optimization and derive the same descent vectors. The new technique splits the descent vector so that each part depends on a different parameter type, and proceeds to minimize the misfit function within the subspace defined by these individual descent vectors. In this way, optimal use is made of the descent vector components, i.e. one finds the combination which produces the greatest reduction in the misfit function based on a local linearization of the problem within the subspace. This is not the case with other gradient methods. By solving a linearized problem in the chosen subspace, at each iteration one need only invert a small, well-conditioned matrix (the projection of the full Hessian onto the subspace). The method is a hybrid between gradient and matrix inversion methods. The proposed algorithm requires the same gradient vectors to be determined as in the algorithm of Tarantola, although its primary aim is to make better use of those calculations in minimizing the objective function.
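The heart of the subspace scheme, solving a small projected system instead of inverting the full Hessian, can be sketched as follows (an illustrative reconstruction, not the authors' code; here S collects one descent vector per parameter type as its columns):

```python
import numpy as np

def subspace_step(H, g, S):
    """One subspace iteration for a misfit with Hessian H and gradient g:
    project H and g onto the columns of S and solve the small k x k
    system (S^T H S) alpha = -S^T g for the step m -> m + S @ alpha."""
    Hs = S.T @ H @ S          # small, well-conditioned projected Hessian
    gs = S.T @ g              # projected gradient
    alpha = np.linalg.solve(Hs, -gs)
    return S @ alpha          # optimal combination of the descent vectors
```

With S equal to the identity this reduces to a full Newton step, while with a few columns only a k×k solve is needed per iteration, which is the hybrid gradient/matrix-inversion character the abstract describes.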

  • Book Chapter
  • 10.1007/978-0-8176-4529-8_2
Vector Spaces over ℚ, ℝ, and ℂ
  • Jan 1, 2006
  • Anthony W Knapp

This chapter introduces vector spaces and linear maps between them, and it goes on to develop certain constructions of new vector spaces out of old, as well as various properties of determinants.

Sections 1–2 define vector spaces, spanning, linear independence, bases, and dimension. The sections make use of row reduction to establish dimension formulas for certain vector spaces associated with matrices. They conclude by stressing methods of calculation that have quietly been developed in proofs.

Section 3 relates matrices and linear maps to each other, first in the case that the linear map carries column vectors to column vectors and then in the general finite-dimensional case. Techniques are developed for working with the matrix of a linear map relative to specified bases and for changing bases. The section concludes with a discussion of isomorphisms of vector spaces.

Sections 4–6 take up constructions of new vector spaces out of old ones, together with corresponding constructions for linear maps. The four constructions of vector spaces in these sections are those of the dual of a vector space, the quotient of two vector spaces, and the direct sum and direct product of two or more vector spaces.

Section 7 introduces determinants of square matrices, together with their calculation and properties. Some of the results that are established are expansion in cofactors, Cramer’s rule, and the value of the determinant of a Vandermonde matrix. It is shown that the determinant function is well defined on any linear map from a finite-dimensional vector space to itself.

Section 8 introduces eigenvectors and eigenvalues for matrices, along with their computation. Also in this section, the characteristic polynomial and the trace of a square matrix are defined, and all these notions are reinterpreted in terms of linear maps.

Section 9 proves the existence of bases for infinite-dimensional vector spaces and discusses the extent to which the material of the first eight sections extends from the finite-dimensional case to be valid in the infinite-dimensional case.
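As a small illustration of the Vandermonde result mentioned in the chapter summary, the closed form det V = ∏_{i<j}(x_j − x_i) can be checked numerically (a sketch, not from the book):

```python
import numpy as np
from itertools import combinations

def vandermonde_det(xs):
    """Closed form for the determinant of the Vandermonde matrix with
    rows (1, x_i, x_i^2, ...): the product of (x_j - x_i) over i < j."""
    return float(np.prod([xj - xi for xi, xj in combinations(xs, 2)]))

xs = [1.0, 2.0, 4.0]
V = np.vander(xs, increasing=True)   # rows (1, x, x^2)
# Closed form gives (2-1)(4-1)(4-2) = 6, matching np.linalg.det(V).
```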

  • Research Article
  • Cited by 37
  • 10.1016/j.cam.2008.11.012
Generalized matrix inversion is not harder than matrix multiplication
  • Nov 24, 2008
  • Journal of Computational and Applied Mathematics
  • Marko D Petković + 1 more


  • Research Article
  • 10.15640/arms.v5n1a2
Piecewise Linear Economic-Mathematical Models with Regard to Unaccounted Factors Influence in 3-Dimensional Vector Space
  • Jan 1, 2017
  • American Review of Mathematics and Statistics
  • Azad Gabil Oglu Aliyev

Over the last 15 years, a series of scientific publications in the periodical literature has laid the foundation of a new scientific direction: the creation of piecewise-linear economic-mathematical models under uncertainty conditions in finite-dimensional vector space. Representing economic processes in finite-dimensional vector space, in particular in Euclidean space, under uncertainty conditions in the form of mathematical models is connected with the complexity of fully accounting for such important issues as: spatial inhomogeneity of the occurring economic processes; incomplete macro, micro, and socio-political information; and the time variability of multifactor economic indices, their duration, and their rate of change. Mathematically, this reduces the solution of the given problem to the creation of very complicated economic-mathematical models of nonlinear type. In this connection, it was established in these works that all possible economic processes considered with regard to the uncertainty factor in finite-dimensional vector space should be explicitly determined in the spatial-temporal aspect. Only owing to this stated principle of spatial-temporal certainty of an economic process under uncertainty conditions in finite-dimensional vector space is it possible to systematically reveal the dynamics and structure of the occurring process. In addition, by imposing a series of softened additional conditions on the occurring economic process, it is possible to classify it in finite-dimensional vector space and also to suggest a new science-based method of multivariate prediction of the economic process and its control in finite-dimensional vector space under uncertainty conditions, in particular with regard to the influence of unaccounted factors.

  • Research Article
  • 10.15640/arms.v7n1a3
Bases of software for computer simulation and multivariate prediction of economic events at uncertainty conditions on the base of n-component piecewise-linear economic-mathematical models in m-dimensional vector space
  • Jan 1, 2019
  • AMERICAN REVIEW OF MATHEMATICS AND STATISTICS
  • Azad Gabil Oglu Aliyev

Over the last 15 years, a series of scientific publications in the periodical literature has laid the foundation of a new scientific direction: the creation of piecewise-linear economic-mathematical models under uncertainty conditions in finite-dimensional vector space. Representing economic processes in finite-dimensional vector space, in particular in Euclidean space, under uncertainty conditions in the form of mathematical models is connected with the complexity of fully accounting for such important issues as: spatial inhomogeneity of the occurring economic processes; incomplete macro, micro, and socio-political information; and the time variability of multifactor economic indices, their duration, and their rate of change. Mathematically, this reduces the solution of the given problem to the creation of very complicated economic-mathematical models of nonlinear type. In this connection, it was established in these works that all possible economic processes considered with regard to the uncertainty factor in finite-dimensional vector space should be explicitly determined in the spatial-temporal aspect. Only owing to this stated principle of spatial-temporal certainty of an economic process under uncertainty conditions in finite-dimensional vector space is it possible to systematically reveal the dynamics and structure of the occurring process. In addition, by imposing a series of softened additional conditions on the occurring economic process, it is possible to classify it in finite-dimensional vector space and also to suggest a new science-based method of multivariate prediction of the economic process and its control in finite-dimensional vector space under uncertainty conditions, in particular with regard to the influence of unaccounted factors.

  • Research Article
  • 10.4169/000298910x515811
Dimension, Linear Functionals, and Norms in a Vector Space
  • Oct 1, 2010
  • The American Mathematical Monthly
  • Miyeon Kwon

Using the axiom of choice, we prove a generalized converse of the well-known fact that if X is a finite-dimensional vector space, then any linear functional on X is continuous with respect to all norms defined on X. We also show that an infinite-dimensional real or complex vector space X has exactly 2^dim(X) inequivalent norms.

  • Book Chapter
  • 10.1090/conm/794/15941
Finite-dimensional diffeological vector spaces being and not being coproducts
  • Jan 1, 2024
  • Ekaterina Pervova

It is known that a topological vector space can be the coproduct of two of its subspaces in the category of vector spaces while not being the coproduct of the same subspaces in the category of topological vector spaces. There are, however, wide classes of spaces where this cannot occur, notably finite-dimensional spaces (but also some infinite-dimensional ones, for instance, Banach spaces). In contrast, this kind of phenomenon occurs easily (and frequently, as we show here) for finite-dimensional diffeological vector spaces, where its numerous instances are readily obtained in any dimension starting from 2. After briefly reviewing what is known on this question in some classical categories, we provide an overview of this phenomenon and some of its implications for finite-dimensional diffeological vector spaces, indicating briefly its connections with some other subjects.

  • Conference Article
  • Cited by 5
  • 10.1109/icce-tw52618.2021.9603219
A Matrix Inversion Free Method for Computing Katz Centrality of Taipei Metro System Using Neumann Series
  • Sep 15, 2021
  • Chien-Cheng Tseng + 1 more

In this paper, the Katz centrality and the Neumann series are used to identify station importance in the Taipei metro system. First, the node importance of a complex network is computed via the Katz centrality, whose solution requires a matrix inversion (MI). To obtain an MI-free computation method, a truncated Neumann series expansion is then employed to approximate the MI. Next, a polynomial graph-filtering implementation structure is presented to realize the proposed computation method. Finally, the station importance of the Taipei metro system is identified by both the conventional MI method and the proposed method. The top-K important stations are demonstrated to show that both methods obtain the same results, so the proposed approximation method performs well.
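The approximation described here, replacing (I − αA)⁻¹ by a truncated Neumann series, can be sketched as follows (an illustrative reconstruction on a toy adjacency matrix, not the paper's code; valid when α is below the reciprocal of the spectral radius of A):

```python
import numpy as np

def katz_direct(A, alpha):
    """Katz centrality via explicit matrix inversion (MI):
    x = ((I - alpha A)^{-1} - I) @ 1."""
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - alpha * A) @ np.ones(n) - np.ones(n)

def katz_neumann(A, alpha, K):
    """MI-free variant: truncated Neumann series sum_{k=1..K} (alpha A)^k 1,
    which converges when alpha < 1 / spectral_radius(A)."""
    x = np.zeros(A.shape[0])
    term = np.ones(A.shape[0])
    for _ in range(K):
        term = alpha * (A @ term)   # (alpha A)^k applied to the ones vector
        x += term
    return x
```

Each Neumann term is one sparse matrix-vector product, which is why the series maps naturally onto the polynomial graph-filtering structure mentioned in the abstract.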

  • Book Chapter
  • 10.3792/euclid/9781429799980-3
Chapter III. Inner-Product Spaces
  • Jan 1, 2016
  • Anthony W Knapp

This chapter investigates the effects of adding the additional structure of an inner product to a finite-dimensional real or complex vector space. Section 1 concerns the effect on the vector space itself, defining inner products and their corresponding norms and giving a number of examples and formulas for the computation of norms. Vector-space bases that are orthonormal play a special role. Section 2 concerns the effect on linear maps. The inner product makes itself felt partly through the notion of the adjoint of a linear map. The section pays special attention to linear maps that are self-adjoint, i.e., are equal to their own adjoints, and to those that are unitary, i.e., preserve norms of vectors. Section 3 proves the Spectral Theorem for self-adjoint linear maps on finite-dimensional inner-product spaces. The theorem says in part that any self-adjoint linear map has an orthonormal basis of eigenvectors. The Spectral Theorem has several important consequences, one of which is the existence of a unique positive semidefinite square root for any positive semidefinite linear map. The section concludes with the polar decomposition, showing that any linear map factors as the product of a unitary linear map and a positive semidefinite one.
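The polar decomposition the chapter closes with can be computed from a singular value decomposition; a minimal sketch for the real case (not from the book):

```python
import numpy as np

def polar(A):
    """Factor A = U P with U unitary (orthogonal in the real case) and
    P positive semidefinite, via the SVD A = W diag(s) V^T."""
    W, s, Vt = np.linalg.svd(A)
    U = W @ Vt                       # unitary factor
    P = Vt.T @ np.diag(s) @ Vt       # positive semidefinite factor
    return U, P
```

The factor P is the unique positive semidefinite square root of AᵀA, matching the Spectral Theorem consequence cited in the summary.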

  • Book Chapter
  • 10.1007/978-94-007-2636-9_5
Linear Independence and Dimension
  • Jan 1, 2012
  • Jonathan S. Golan

The notions of linear independence and bases are defined and studied for arbitrary vector spaces. The notion of dimension is defined. To study bases for arbitrary vector spaces, the Hausdorff Maximum Principle is introduced and used. The properties of finite-dimensional vector spaces are considered. Finally, independence and complements in the lattice of subspaces of a vector space are studied. Among the examples given are the quaternion algebras, Hamel bases, and the complexification of real vector spaces.

  • Research Article
  • Cited by 9
  • 10.3390/s19184002
A Novel Recurrent Neural Network-Based Ultra-Fast, Robust, and Scalable Solver for Inverting a "Time-Varying Matrix".
  • Sep 16, 2019
  • Sensors (Basel, Switzerland)
  • Vahid Tavakkoli + 2 more

The concept presented in this paper builds on previous dynamical methods for realizing a time-varying matrix inversion. It is essentially a set of coupled ordinary differential equations (ODEs), which constitutes a recurrent neural network (RNN) model. The coupled ODEs form a universal modeling framework for realizing a matrix inversion provided the matrix is invertible. The proposed model converges to the inverted matrix if the matrix is invertible; otherwise it converges to an approximate inverse. Although various methods exist to solve a matrix inversion in various areas of science and engineering, most of them assume either that the time-varying matrix inversion is free of noise or that a denoising module runs before the matrix inversion computation starts. In practice, however, the presence of noise is a very serious problem. Also, the denoising process is computationally expensive and can violate the real-time property of the system. Hence, a new matrix-inversion solving method that inherently integrates noise cancelling is in high demand. In this paper, a new combined/extended method for time-varying matrix inversion is proposed and investigated. The proposed method extends both the gradient neural network (GNN) and the Zhang neural network (ZNN) concepts. The new model is proven to be exponentially stable according to Lyapunov theory. Furthermore, compared to the previous related methods (namely GNN, ZNN, the Chen neural network, and the integration-enhanced Zhang neural network, or IEZNN), it has a much better theoretical convergence speed. Finally, all named models (the new one versus the old ones) are compared through practical examples, and their respective convergence and error rates are measured. The novel method is observed to have a better practical convergence rate than the other models. Regarding noise, a very good approximation of the matrix inverse is obtained even in the presence of noise.
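The GNN-style dynamics that this line of work extends can be sketched for a constant matrix (the paper itself treats the time-varying, noise-corrupted case): forward-Euler integration of dX/dt = −γ Aᵀ(AX − I) drives X toward A⁻¹ when A is invertible. A minimal sketch, not the paper's model:

```python
import numpy as np

def gnn_inverse(A, gamma=1.0, dt=0.01, steps=5000):
    """Gradient-neural-network-style dynamics for inverting a constant A:
    integrate dX/dt = -gamma * A.T @ (A @ X - I) with forward Euler.
    The residual A @ X - I decays exponentially (Lyapunov-stable) when
    dt * gamma is small relative to the largest eigenvalue of A.T @ A."""
    n = A.shape[0]
    X = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(steps):
        X = X - dt * gamma * (A.T @ (A @ X - I))
    return X
```

The right-hand side is the negative gradient of the residual norm ‖AX − I‖², which is the "gradient" in GNN; the ZNN/IEZNN variants reshape this error dynamics to track a time-varying A(t).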
