Modal Analysis of Fluid Flows: An Overview

Kunihiko Taira (Florida State University, Tallahassee, Florida 32310), Steven L. Brunton (University of Washington, Seattle, Washington 98195), Scott T. M. Dawson (Princeton University, Princeton, New Jersey 08544), Clarence W. Rowley (Princeton University, Princeton, New Jersey 08544), Tim Colonius (California Institute of Technology, Pasadena, California 91125), Beverley J. McKeon (California Institute of Technology, Pasadena, California 91125), Oliver T. Schmidt (California Institute of Technology, Pasadena, California 91125), Stanislav Gordeyev (University of Notre Dame, Notre Dame, Indiana 46556), Vassilios Theofilis (University of Liverpool, Brownlow Hill, England L69 3GH, United Kingdom), and Lawrence S. Ukeiley (University of Florida, Gainesville, Florida 32611)

Published online 31 October 2017. https://doi.org/10.2514/1.J056060

I. Introduction

Simple aerodynamic configurations under even modest conditions can exhibit complex flows with a wide range of temporal and spatial features. It has become common practice in the analysis of these flows to look for and extract physically important features, or modes, as a first step in the analysis. This step typically starts with a modal decomposition of an experimental or numerical dataset of the flowfield, or of an operator relevant to the system. We describe herein some of the dominant techniques for accomplishing these modal decompositions and analyses that have seen a surge of activity in recent decades [1–8]. For a nonexpert, keeping track of recent developments can be daunting, and the intent of this document is to provide an introduction to modal analysis that is accessible to the larger fluid dynamics community. In particular, we present a brief overview of several of the well-established techniques and clearly lay out the framework of these methods using familiar linear algebra. The modal analysis techniques covered in this paper include the proper orthogonal decomposition (POD), balanced proper orthogonal decomposition (balanced POD), dynamic mode decomposition (DMD), Koopman analysis, global linear stability analysis, and resolvent analysis.

In the study of fluid mechanics, there can be distinct physical features that are shared across a variety of flows and even over a wide range of parameters such as the Reynolds number and Mach number [9,10]. Examples of common flow features and phenomena include von Kármán shedding [11–17], Kelvin–Helmholtz instability [18–20], and vortex pairing/merging [21–23].
The fact that these features are often easily recognized through simple visual inspection of the flow, even in the presence of perturbations or variations, gives us the expectation that the features can be extracted through some mathematical procedure [24]. We can further anticipate that these dominant features provide a means to describe, in a low-dimensional form, what appears to be a complex high-dimensional flow. Moreover, as computational techniques and experimental measurements advance in their ability to provide large-scale, high-fidelity data, the compression of a vast amount of flowfield data to a low-dimensional form is ever more important in studying complex fluid flows and in developing models of their dynamical behavior.

To briefly illustrate these ideas, let us provide a preview of modal decomposition. In Fig. 1, we present a modal decomposition analysis of two-dimensional laminar separated flow over a flat-plate wing [25,26]. By inspecting the flowfield, we clearly observe the formation of a von Kármán vortex street in the wake as the dominant unsteady feature. A modal decomposition method discussed later (proper orthogonal decomposition [1,27,28]; see Sec. III) can extract the important oscillatory modes of this flow. Moreover, the two most dominant modes and the mean represent (reconstruct) the flowfield very effectively, as shown in the bottom of the figure. Additional modes can be included to reconstruct the original flow more accurately, but their contributions are much smaller than those of the two unsteady modes shown in this example. What is also encouraging is that the modes seen here bear a striking resemblance to the dominant modes for three-dimensional turbulent flow at a much higher Reynolds number of 23,000 with a different airfoil and angle of attack (see Sec. III.B.1).

Fig. 1 Modal decomposition of two-dimensional incompressible flow over a flat-plate wing [25,26] (Re = 100 and α = 30 deg). This example shows complex nonlinear separated flow being well represented by only two POD modes and the mean flowfield. Visualized are the streamwise velocity profiles.

We refer to modal decomposition as a mathematical technique for extracting energetically and dynamically important features of fluid flows. The spatial features of the flow are called (spatial) modes, and they are accompanied by characteristic values representing either energy content levels or growth rates and frequencies. These modes can be determined from flowfield data or from the governing equations. We will refer to modal decomposition techniques that take flowfield data as input as data-based techniques. This paper also presents modal analysis methods that require a more theoretical framework or discrete operators from the Navier–Stokes equations; we refer to these as operator-based techniques.

The origin of this document lies with an AIAA Discussion Group, titled "Modal Decomposition of Aerodynamic Flows," formed under the auspices of the Fluid Dynamics Technical Committee. One of the initial charters for this group was to organize an invited session in which experts in the areas of modal decomposition methods would provide an introductory crash course on the methods. The intended audience for these talks was the nonspecialist: e.g., a new graduate student or early-career researcher who, in one afternoon, could acquire a compact yet intensive introduction to the modal analysis methods.
This session (121-FC-5) was held at the 2016 AIAA Aviation Conference* (13–17 June 2016 in Washington, D.C.) and provided the foundation for the present overview paper.

In this overview paper, we present key modal decomposition and analysis techniques that can be used to study a range of fluid flows. We start by reprising the basics of the eigenvalue and singular value decompositions, as well as pseudospectral analysis, in Sec. II; these serve as the backbone for all of the decomposition and analysis techniques discussed here. We then present the data-based modal decomposition techniques: proper orthogonal decomposition in Sec. III, balanced POD in Sec. IV, and dynamic mode decomposition in Sec. V. These sections are followed by discussions of the operator-based modal analysis techniques. Koopman analysis is briefly discussed in Sec. VI as a generalization of DMD that encapsulates nonlinear dynamics using a linear (but infinite-dimensional) operator-based framework. Global linear stability analysis and resolvent analysis are presented in Secs. VII and VIII, respectively. Table 1 provides a brief summary of the techniques to facilitate comparison of the methods before engaging in the details of each.

For each of the methods presented, we provide subsections on overview, description, illustrative examples, and future outlook. We offer in the Appendix an example of how flowfield data can be arranged into vector and matrix forms in preparation for performing the (data-based) modal decomposition techniques presented here. At the end of the paper, in Sec. IX, we provide concluding remarks on the modal decomposition and analysis methods.

II. Eigenvalue and Singular Value Decompositions

The decomposition methods presented in this paper are founded on the eigenvalue and singular value decompositions of matrices or operators. In this section, we briefly present some important fundamental properties of the eigenvalue and singular value decomposition techniques. We also briefly discuss the concepts of pseudospectra and nonnormality.

Eigenvalue decomposition is performed on a square matrix, whereas singular value decomposition can be applied to a rectangular matrix. Analyses based on the eigenvalue decomposition are usually employed when the range and domain of the matrix or operator are the same [29]. That is, the operator of interest takes a vector and maps it into the same space. Hence, eigenvalue decomposition can help examine the iterative effects of the operator (e.g., $A^k$ and $\exp(At) = I + At + \tfrac{1}{2}A^2t^2 + \cdots$).

The singular value decomposition, on the other hand, is performed on a rectangular matrix, which means that the domain and range spaces are not necessarily the same. As a consequence, singular value decomposition is not associated with analyzing iterative operators; that is, rectangular matrices cannot serve as propagators. However, singular value decomposition can be applied to rectangular data matrices compiled from dynamical processes (see Sec. II.C and the Appendix for details).

The theories and numerical algorithms for eigenvalue and singular value decompositions are not provided here but are discussed extensively in the textbooks by Horn and Johnson [30], Golub and Van Loan [31], Trefethen and Bau [29], and Saad [32]. Numerical programs and libraries that perform eigenvalue and singular value decompositions are listed in Sec. II.D.

A. Eigenvalue Decomposition

The eigenvalues and eigenvectors of a matrix (linear operator) capture the directions in which vectors can grow or shrink.
For a given matrix A ∈ C^{n×n}, a vector v ∈ C^n and a scalar λ ∈ C are called an eigenvector and an eigenvalue, respectively, of A if they satisfy

$$A v = \lambda v \tag{1}$$

Note that eigenvectors are unique only up to a complex scalar: if v is an eigenvector, then αv is also an eigenvector (where α ∈ C). The eigenvectors obtained from computer programs are commonly normalized to have unit magnitude. The set of all eigenvalues† of A is called the spectrum of A.

Although the expression in Eq. (1) appears simple, the concept of an eigenvector has great significance in describing the effect of premultiplying a vector by A. The expression states that, if the operator A is applied to its eigenvector (eigendirection), the operation can be captured solely by multiplication by the scalar λ, which is the eigenvalue associated with that direction. The magnitude of the eigenvalue tells us whether the operator A will increase or decrease the size of the original vector in that particular direction. If multiplication by A is performed iteratively, the resulting vector from the compound operations is predominantly described by the eigenvector having the eigenvalue with the largest magnitude, as illustrated in Fig. 2.

Fig. 2 Collection of random points (vectors x) stretched in the direction of the dominant eigenvector v1 under iterative operations A^k for a matrix A with eigenvalues λ1 = 1.2 and λ2 = 0.5.

If A has n linearly independent eigenvectors v_j with corresponding eigenvalues λ_j (j = 1, …, n), then we have

$$A V = V \Lambda \tag{2}$$

where V = [v1 v2 … vn] ∈ C^{n×n} and Λ = diag(λ1, λ2, …, λn) ∈ C^{n×n}. Postmultiplying the preceding equation by V^{-1}, we obtain

$$A = V \Lambda V^{-1} \tag{3}$$

This is called the eigenvalue decomposition. For the eigenvalue decomposition to hold, A needs to have a full set of n linearly independent eigenvectors.‡

For linear dynamical systems, we often encounter systems for some state variable x(t) ∈ C^n described by

$$\dot{x}(t) = A x(t) \tag{4}$$

with the solution

$$x(t) = \exp(At)\, x(0) = V \exp(\Lambda t)\, V^{-1} x(0) \tag{5}$$

where x(0) denotes the initial condition. Here, the eigenvalues characterize the long-term behavior of linear dynamical systems [6,34] for x(t), as illustrated in Fig. 3. The real and imaginary parts of λ_j represent the growth (or decay) rate and the frequency at which the state variable evolves in the direction of the eigenvector v_j. For a linear system to be stable, all eigenvalues need to lie in the left half of the complex plane, i.e., Re(λ_j) ≤ 0 for all j.

Fig. 3 Dynamic response of a linear system characterized by the eigenvalues (stable: Re(λ) < 0; unstable: Re(λ) > 0). Locations of example eigenvalues λ are shown by the symbols, with corresponding sample solutions exp(λt) in the inset plots.

For intermediate dynamics, the pseudospectra [33,35,36] can provide insights. The concept of pseudospectra is associated with the nonnormality of operators and the sensitivity of the eigenvalues to perturbations. We briefly discuss the pseudospectra in Sec. II.E.

For some problems, there can be a mass matrix B ∈ C^{n×n} that appears on the left-hand side of Eq. (4):

$$B \dot{x} = A x \tag{6}$$

In such a case, we are led to a generalized eigenvalue problem of the form

$$A v = \lambda B v \tag{7}$$

If B is invertible, we can rewrite the preceding equation as

$$B^{-1} A v = \lambda v \tag{8}$$

and treat the generalized eigenvalue problem as a standard eigenvalue problem. However, it may not be desirable to consider this reformulation if B is not invertible§ or if the inversion of B results in ill conditioning (worsening of scaling) of the problem.
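To make Eqs. (3–5) concrete, the following minimal sketch (in Python with NumPy/SciPy; the matrix A here is a hypothetical stable example, not one taken from the paper) verifies the eigenvalue decomposition and uses it to advance a linear system in time:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2x2 stable system (illustrative choice, not from the paper).
A = np.array([[-0.2,  1.0],
              [-1.0, -0.2]])

# Eigenvalue decomposition: A = V @ diag(lam) @ inv(V)   [Eq. (3)]
lam, V = np.linalg.eig(A)
assert np.allclose(V @ np.diag(lam) @ np.linalg.inv(V), A)

# Solution of xdot = A x via the eigendecomposition       [Eq. (5)]
x0 = np.array([1.0, 0.0])
t = 2.0
x_eig = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ x0).real
x_expm = expm(A * t) @ x0   # direct matrix exponential for comparison
assert np.allclose(x_eig, x_expm)

# Stability check: Re(lambda_j) <= 0 for all j
print("eigenvalues:", lam, "stable:", np.all(lam.real <= 0))
```

The two assertions confirm that the eigendecomposition reconstructs A and that propagating through V exp(Λt) V^{-1} matches the matrix exponential directly.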
Returning to the generalized problem: note that generalized eigenvalue problems can also be solved with many numerical libraries, in much the same manner as the standard eigenvalue problem [Eq. (1)]. See the work of Trefethen and Embree [33] and Golub and Van Loan [31] for additional details on generalized eigenvalue problems.

B. Singular Value Decomposition

The singular value decomposition (SVD) is one of the most important matrix factorizations, generalizing the eigendecomposition to rectangular matrices. The SVD has many uses and interpretations, especially for dimensionality reduction, where it is possible to use the SVD to obtain optimal low-rank matrix approximations [37]. The singular value decomposition also reveals how a rectangular matrix or operator stretches and rotates a vector. As an illustrative example, consider a set of vectors v_j ∈ R^n of unit length that describe a sphere. We can premultiply these unit vectors v_j by a rectangular matrix A ∈ R^{m×n}, as shown in Fig. 4. The semiaxes of the resulting ellipse (ellipsoid) are represented by the unit vectors u_j and magnitudes σ_j. Hence, we can view the singular values as capturing the amount of stretching imposed by the matrix A in the directions of the axes of the ellipse.

Fig. 4 Graphical representation of the singular value decomposition transforming a unit-radius sphere, described by the right singular vectors v_j, into an ellipse (ellipsoid) with semiaxes characterized by the left singular vectors u_j and magnitudes captured by the singular values σ_j. In this graphical example, we take A ∈ R^{3×3}.

Generalizing this concept for complex A ∈ C^{m×n}, v_j ∈ C^n, and u_j ∈ C^m, we have

$$A v_j = \sigma_j u_j \tag{9}$$

In matrix form, the aforementioned relationship can be expressed as

$$A V = U \Sigma \tag{10}$$

where U = [u1 u2 … um] ∈ C^{m×m} and V = [v1 v2 … vn] ∈ C^{n×n} are unitary matrices¶ and Σ ∈ R^{m×n} is a diagonal matrix with σ1 ≥ σ2 ≥ … ≥ σp ≥ 0 along its diagonal, where p = min(m, n). Now, multiplying the preceding equation from the right by V^{-1} = V*, we arrive at

$$A = U \Sigma V^* \tag{11}$$

which is referred to as the singular value decomposition. In the preceding equation, * denotes the conjugate transpose. The column vectors u_j and v_j of U and V are called the left and right singular vectors, respectively. Both sets of singular vectors are determined only up to a complex scalar of magnitude one (i.e., e^{iθ}, where θ ∈ [0, 2π]).

Given a rectangular matrix A with m > n, the SVD can be depicted in block form as

$$\underset{m \times n}{A} \;=\; \underset{m \times m}{U} \;\; \underset{m \times n}{\Sigma} \;\; \underset{n \times n}{V^*} \tag{12}$$

where only the first n columns of U multiply the nonzero rows of Σ. Sometimes, the remaining m − n columns of U (enclosed by broken lines in the original graphical depiction) are omitted from the decomposition, as they are multiplied by zeros in Σ. The decomposition that disregards these submatrices is called the reduced SVD (economy-sized SVD), as opposed to the full SVD.

In a manner similar to the eigenvalue decomposition, we can interpret the SVD as a means of representing the effect of a matrix operation merely through multiplication by scalars (singular values), given the appropriate directions. Because the SVD is applied to a rectangular matrix, we need two sets of basis vectors to span the domain and range of the matrix. Hence, we have the right singular vectors V that span the domain of A and the left singular vectors U that span the range of A, as illustrated in Fig. 4. This is different from the eigenvalue decomposition of a square matrix, in which case the domain and the range are (generally) the same. Whereas the eigenvalue decomposition requires the square matrix to be diagonalizable, the SVD can be performed on any rectangular matrix.
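As a quick numerical illustration of Eqs. (9–12), the sketch below (NumPy; an arbitrary random tall matrix serves as the example) computes both the full and the economy-sized SVD and verifies that each reconstructs A:

```python
import numpy as np

# A random tall matrix (m > n), mirroring the shape assumed in Eq. (12).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))

# Full SVD: U is 6x6, s holds p = min(m, n) singular values, Vh = V*.
U, s, Vh = np.linalg.svd(A, full_matrices=True)

# Reduced (economy-sized) SVD: the last m - n columns of U are dropped.
Ur, sr, Vhr = np.linalg.svd(A, full_matrices=False)

# Both factorizations reconstruct A exactly (to round-off).
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)
assert np.allclose(U @ Sigma @ Vh, A)
assert np.allclose(Ur @ np.diag(sr) @ Vhr, A)

# Singular values arrive sorted: sigma_1 >= sigma_2 >= ... >= sigma_p >= 0.
print("singular values:", s)
```

Note that the zero rows of Σ in the full form are exactly what makes the dropped columns of U dispensable in the economy-sized form.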
C. Relationship Between Eigenvalue and Singular Value Decompositions

The eigenvalue and singular value decompositions are closely related. In fact, the left and right singular vectors of A ∈ C^{m×n} are the orthonormal eigenvectors of AA* and A*A, respectively. Furthermore, the nonzero singular values of A are the square roots of the nonzero eigenvalues of AA* and A*A. Therefore, instead of the SVD, the eigenvalue decomposition can be performed on AA* or A*A to solve for the singular vectors and singular values of A. For this reason, the smaller of the two square matrices AA* and A*A is often chosen for the decomposition, which is computationally inexpensive compared to the full SVD. This property is taken advantage of in some of the decomposition methods discussed in the following, because flowfield data usually yield a rectangular data matrix that can be very high-dimensional in one direction (e.g., the snapshot POD method [28] in Sec. III).

D. Numerical Libraries for Eigenvalue and Singular Value Decompositions

Eigenvalue and singular value decompositions can be performed with codes that are readily available. We list a few standard numerical libraries for executing them.

MATLAB: In MATLAB®, the command eig finds the eigenvalues and eigenvectors for standard as well as generalized eigenvalue problems. The command svd outputs the singular values and the left and right singular vectors; it can also perform the economy-sized SVD. For small- to moderate-sized problems, MATLAB offers a user-friendly environment in which to perform modal decompositions. We provide in Table 2 some common examples of eig and svd in use for canonical decompositions.**

LAPACK: LAPACK (linear algebra package) offers standard numerical library routines for a variety of basic linear algebra problems, including eigenvalue and singular value decompositions. The routines are written in Fortran 90. See the users' guide [38].††

ScaLAPACK: ScaLAPACK (scalable LAPACK) comprises high-performance linear algebra routines for parallel distributed-memory machines. ScaLAPACK solves dense and banded eigenvalue and singular value problems. See the users' guide [39].‡‡

ARPACK: ARPACK (Arnoldi package) is a numerical library, written in FORTRAN 77, that is specialized to handle large-scale eigenvalue problems as well as generalized eigenvalue problems. It can also perform singular value decompositions. The library is available for both serial and parallel computations. See the users' guide [40].§§

E. Pseudospectra

Before we transition to the coverage of modal analysis techniques, let us consider pseudospectral analysis [33,35], which reveals the sensitivity of the eigenvalue spectrum with respect to perturbations of the operator. This is also an important concept in studying transient and input–output dynamics, complementing the stability analysis based on eigenvalues. Concepts from pseudospectral analysis appear later in the resolvent analysis (Sec. VIII).

For a linear system described by Eq. (4) to exhibit stable dynamics, we require all eigenvalues of its operator A to satisfy Re(λ_j(A)) < 0, as illustrated in Fig. 3. Although this criterion guarantees that the solution x(t) is stable for large t, it does not provide insight into the transient behavior of x(t).
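Before working through the example that follows, a brief numerical aside (using a hypothetical nonnormal matrix chosen for illustration, not one from the paper) shows that strictly stable eigenvalues do not preclude large transient amplification:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical stable but strongly nonnormal matrix: eigenvalues are
# -0.1 and -0.2, yet the large off-diagonal term couples the states.
A = np.array([[-0.1, 20.0],
              [ 0.0, -0.2]])
print("eigenvalues:", np.linalg.eigvals(A))   # both in the left half-plane

# Track the norm of the propagator exp(At): growth precedes eventual decay.
for t in [0.0, 1.0, 5.0, 10.0, 50.0]:
    print(f"t = {t:5.1f}   ||exp(At)|| = {np.linalg.norm(expm(A*t), 2):10.3f}")
```

Running this shows the operator norm rising well above one at intermediate times before the stable eigenvalues finally force decay, which is precisely the behavior that eigenvalues alone fail to predict.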
To illustrate this point in more detail, let us consider an example of A = VΛV^{-1} with stable eigenvalues

$$\lambda_1 = -0.1, \qquad \lambda_2 = -0.2 \tag{13}$$

and eigenvectors

$$v_1 = \left[\cos\!\left(\tfrac{\pi}{4} - \delta\right),\; \sin\!\left(\tfrac{\pi}{4} - \delta\right)\right]^T, \qquad v_2 = \left[\cos\!\left(\tfrac{\pi}{4} + \delta\right),\; \sin\!\left(\tfrac{\pi}{4} + \delta\right)\right]^T \tag{14}$$

where δ is a free parameter. Observe that, as δ becomes small, the eigenvectors become nearly linearly dependent, which makes the matrix A ill conditioned.

Providing an initial condition of x(t_0) = [1, 0.1]^T, we can solve Eq. (5) for different values of δ, as shown in Figs. 5a and 5b. Although all solutions decay to zero because of the stable eigenvalues, the transient growth of x_1(t) and x_2(t) becomes noticeable as δ → 0. The large transient for small δ is caused by the eigenvectors becoming nearly parallel, which necessitates large coefficients to represent the solution [i.e., x(t) = α_1(t) v_1 + α_2(t) v_2, with |α_1|, |α_2| ≫ 1 during the transient]. As such, the solution grows significantly during the transient before the decay from the negative eigenvalues takes over the solution behavior at large t. Thus, we observe that the transient behavior of the solution is not controlled by the eigenvalues of A. Nonnormal operators (i.e., operators for which AA* ≠ A*A) have nonorthogonal eigenvectors and can exhibit this type of transient behavior. Care should therefore be taken when examining transient dynamics caused by nonnormal operators. In fluid mechanics, the dynamics of shear-dominant flows often exhibit nonnormality.

Fig. 5 Representations of a) example of transient growth caused by an increasing level of nonnormality from decreasing δ, b) trajectories of x_1(t) vs x_2(t) exhibiting transient growth, and c–e) pseudospectra expanding for different values of δ. The ε-pseudospectra are shown with the values of log10(ε) placed on the contours, and the stable eigenvalues are depicted with ×.

To further assess the influence of A on the transient dynamics, let us examine how the eigenvalues are influenced by perturbations of A. That is, we consider

$$\Lambda_\varepsilon(A) = \{\, z \in \mathbb{C} :\; z \in \Lambda(A + \Delta A) \ \text{for some} \ \|\Delta A\| \le \varepsilon \,\} \tag{15}$$

This subset of perturbed eigenvalues is known as the ε-pseudospectrum of A. It is also commonly known through the following equivalent definition:

$$\Lambda_\varepsilon(A) = \{\, z \in \mathbb{C} :\; \|(zI - A)^{-1}\| \ge \varepsilon^{-1} \,\} \tag{16}$$

Note that, as ε → 0, we recover the eigenvalues (the 0-pseudospectrum); and as ε → ∞, the subset Λ_∞(A) occupies the entire complex plane. To numerically determine the pseudospectra, we can use the following definition based on the minimum singular value of (zI − A):

$$\Lambda_\varepsilon(A) = \{\, z \in \mathbb{C} :\; \sigma_{\min}(zI - A) \le \varepsilon \,\} \tag{17}$$

which is equivalent to the definitions of Λ_ε(A) in Eqs. (15) and (16). If A is normal, the pseudospectrum Λ_ε(A) is simply the set of points in the complex plane within a distance ε of the eigenvalues Λ_0(A). However, as A becomes nonnormal, the distance between Λ_0(A) and the boundary of Λ_ε(A) can become much larger. As will be discussed later, the resolvent analysis in Sec. VIII considers the pseudospectra along the imaginary axis [6] (i.e., z → iω, where ω ∈ R).

Let us return to the example given by Eqs. (13) and (14) and compute the pseudospectra for decreasing δ of 0.01, 0.001, and 0.0001, as shown in Figs. 5c–5e, respectively. Here, the contours of the ε-pseudospectra are drawn for the same values of ε. With decreasing δ, the matrix A becomes increasingly nonnormal and susceptible to perturbations. The influence of nonnormality on the spectrum is clearly visible in the expanding ε-pseudospectra.
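Equation (17) translates directly into a small computation. The sketch below (NumPy; the grid extents and the value of δ are chosen here for illustration) rebuilds the example of Eqs. (13) and (14) and evaluates σ_min(zI − A) on a grid in the complex plane:

```python
import numpy as np

# Reproduce the paper's example, Eqs. (13)-(14): A = V Lambda V^{-1}
# with nearly parallel eigenvectors for small delta.
delta = 0.01
Lam = np.diag([-0.1, -0.2])
V = np.array([[np.cos(np.pi/4 - delta), np.cos(np.pi/4 + delta)],
              [np.sin(np.pi/4 - delta), np.sin(np.pi/4 + delta)]])
A = V @ Lam @ np.linalg.inv(V)

# Epsilon-pseudospectrum via Eq. (17): sigma_min(zI - A) <= epsilon,
# evaluated on a grid covering the eigenvalues.
xs = np.linspace(-1.0, 1.0, 201)
ys = np.linspace(-1.0, 1.0, 201)
smin = np.empty((ys.size, xs.size))
I = np.eye(2)
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        smin[i, j] = np.linalg.svd((x + 1j*y) * I - A, compute_uv=False)[-1]

# Contours of log10(smin) trace the epsilon-pseudospectra boundaries; e.g.,
# check whether the 1e-2 pseudospectrum reaches the right half-plane.
mask = smin <= 1e-2
print("1e-2 pseudospectrum crosses into Re(z) > 0?",
      bool(mask[:, xs > 0].any()))
```

Plotting contours of log10(smin) reproduces figures in the style of Figs. 5c–5e; shrinking δ visibly inflates the contours around the two stable eigenvalues.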
It should be noted that some of the pseudospectra contours penetrate into the right half of the complex plane, suggesting that perturbations of such magnitude may drive the system unstable even though its eigenvalues are stable. This nonnormal feature can play a role in destabilizing the dynamics in the presence of perturbations or nonlinearity.

The transient dynamics of ẋ = Ax can be related to how the ε-pseudospectrum of A expands from the eigenvalues as the parameter ε is varied. The pseudospectra of A can provide a lower bound on the amount of transient amplification by exp(At). If Λ_ε(A) extends a distance η into the right half-plane for a given ε, it can be shown through the Laplace transform that ‖exp(At)‖ must be at least as large as η/ε for some t > 0. If we define a constant κ for A as the supremum of this ratio over all ε, the lower bound on the solution can then be shown to take the form [41]

$$\sup_{t \ge 0} \|\exp(At)\| \ge \kappa \tag{18}$$

The constant κ is referred to as the Kreiss constant, and it provides an estimate of how the solution [Eq. (5)] behaves during the transient. This estimate is obtained not from eigenanalysis but from pseudospectral analysis. The same concept applies to time-discretized linear dynamics [42]. Readers can find applications of pseudospectral analysis to fluid mechanics in the works of Trefethen et al. [35], Trefethen and Embree [33], and Schmid [36].

III. Proper Orthogonal Decomposition

The proper orthogonal decomposition is a modal decomposition technique that extracts modes by optimizing the mean square of the field variable being examined. It was introduced to the fluid dynamics/turbulence community by Lumley [27] as a mathematical technique to extract coherent structures from turbulent flowfields. The POD technique, also known as the Karhunen–Loève procedure [43,44], provides an objective algorithm to decompose a set of data into the minimal number of basis functions or modes needed to capture as much energy as possible. The method is known under a variety of names in different fields: POD, principal component analysis (PCA), Hotelling analysis, empirical component analysis, quasi-harmonic modes, empirical eigenfunction decomposition, and others. Closely related to this technique is factor analysis, which is used in psychology and economics. The roots of the POD can be traced back to the middle of the 19th century and the matrix diagonalization technique, which is ultimately related to the SVD (Sec. II). Excellent reviews of the POD can be found in [1,45] and in chapter 3 of [46].

In applications of the POD to a fluid flow, we start with a vector field q(ξ, t) (e.g., velocity) with its temporal mean q̄(ξ) subtracted and assume that the unsteady component of the vector field can be decomposed in the following manner:

$$q(\xi, t) - \bar{q}(\xi) = \sum_j a_j \phi_j(\xi, t) \tag{19}$$

where φ_j(ξ, t) and a_j represent the modes and expansion coefficients, respectively. Here, ξ denotes the spatial vector.¶¶ This expression represents the flowfield in terms of a generalized Fourier series for some set of basis functions φ_j(ξ, t). In the framework of the POD, we seek the optimal set of basis functions for the given flowfield data. In early applications of the POD, this typically led to modes that were functions of space and time/frequency [47–51], as will also be discussed in the following.

Modern applications of modal decompositions have further sought to split space and time, hence requiring only spatial modes.
In that context, the preceding equation can be written as

$$q(\xi, t) - \bar{q}(\xi) = \sum_j a_j(t)\, \phi_j(\xi) \tag{20}$$

where the expansion coefficients a_j are now time dependent. Note that Eq. (20) explicitly employs a separation of variables, which may not be appropriate for all problems. The choice between the two forms listed previously should depend on the properties of the flow and the information one wishes to extract, as discussed by Holmes et al. [52]. In what follows, we discuss the properties of the POD under the assumption that the desire is to extract a spatially dependent set of modes.

The POD is one of the most widely used techniques for analyzing fluid flows. There are a large number of variations of the POD technique, with applications including fundamental analysis of fluid flows, reduced-order modeling, data compression/reconstruction, flow control, and aerodynamic design optimization. Because the POD serves as the basis and motivation for the development of other modal decomposition techniques, we provide a somewhat detailed overview of the POD in the following.

A. Description

1. Algorithm

The inputs are snapshots of any scalar (e.g., pressure, temperature) or vector (e.g., velocity, vorticity) field q(ξ, t) over one-, two-, or three-dimensional discrete spatial points ξ at discrete times t_i.

The outputs are a set of orthogonal modes φ_j(ξ) with their corresponding temporal coefficients a_j(t) and energy levels λ_j, arranged in order of their relative energy content. The fluctuations in the original field are expressed as a linear combination of the modes and their corresponding temporal coefficients:

$$q(\xi, t) - \bar{q}(\xi) = \sum_j a_j(t)\, \phi_j(\xi) \tag{21}$$

We discuss three main approaches to performing the POD of flowfield data: namely, the spatial (classical) POD method, the snapshot POD method, and the SVD. In the following, we briefly describe these three methods and discuss how they are related to each other.

Spatial (Classical) POD Method. With the POD, we determine the set of basis functions that optimally represents the given flowfield data. First, given the flowfield q(ξ, t), we prepare snapshots of the flowfield stacked as a collection of column vectors x(t). That is, we consider a collection of finite-dimensional data vectors that represents the flowfield:

$$x(t) = q(\xi, t) - \bar{q}(\xi) \in \mathbb{R}^n, \qquad t = t_1, t_2, \ldots, t_m \tag{22}$$

Here, x(t) is taken to be the fluctuating component of the data vector, with its time-averaged value q̄(ξ) removed. Although the data vector could be written as x(ξ, t), we simply write x(t) to emphasize that the spatial dependence is folded into the n components of the vector, leaving time as the sole independent variable.
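As a preview of how Eqs. (20–22) are used in practice, here is a minimal POD sketch via the SVD (NumPy; the data matrix is synthetic, standing in as a toy surrogate for flowfield snapshots, and the energy normalization shown is one common convention):

```python
import numpy as np

# Synthetic stand-in for flowfield snapshots: n spatial points, m times.
rng = np.random.default_rng(1)
n, m = 2000, 60
xi = np.linspace(0, 2*np.pi, n)
t = np.linspace(0, 10, m)
# Two separable space-time structures plus noise (a toy dataset).
Q = (np.outer(np.sin(xi), np.cos(2*np.pi*0.5*t))
     + 0.5*np.outer(np.sin(2*xi), np.sin(2*np.pi*1.0*t))
     + 0.01*rng.standard_normal((n, m)))

# Eq. (22): subtract the temporal mean to form the fluctuation matrix X.
X = Q - Q.mean(axis=1, keepdims=True)

# Economy SVD: columns of Phi are the spatial POD modes phi_j(xi);
# rows of (diag(s) @ Vh) are the temporal coefficients a_j(t).
Phi, s, Vh = np.linalg.svd(X, full_matrices=False)
a = np.diag(s) @ Vh

# POD energies relate to singular values; dividing by the number of
# snapshots m is one common convention.
energy = s**2 / m
print("fraction of energy in first two modes:",
      energy[:2].sum() / energy.sum())

# Rank-2 reconstruction per Eq. (21): mean plus the first two modes.
X2 = Phi[:, :2] @ a[:2, :]
print("relative reconstruction error:",
      np.linalg.norm(X - X2) / np.linalg.norm(X))
```

For this toy dataset, nearly all of the fluctuation energy concentrates in the first two modes, mirroring the cylinder-wake behavior previewed in Fig. 1.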
