The inherent limitations of the fast Fourier transform (FFT) (1, 2) have led to the introduction of other methods for quantitative analysis of time-domain signals in nuclear magnetic resonance spectroscopy. In particular, methods based on the theory of linear prediction (LP) (3-8) have been shown to produce more accurate spectral estimates (i.e., amplitude, frequency, decay rate, and phase) than the standard FFT, especially from free induction decays characterized by low signal-to-noise (S/N) ratios or truncation. The theory of LP assumes a certain functional form for the data (e.g., exponentially damped sinusoids). Provided the NMR data consist of such signals, the LP spectral estimate will always achieve a higher degree of resolution than the FFT. Linear prediction analysis in NMR is practical, in part, because the spectral parameters are determined by a linear-least-squares procedure. This is in contrast to nonlinear-least-squares schemes, which require initial values and iterative solutions (9). Barkhuijsen et al. have described a method for applying LP to the analysis of NMR data (3, 4). Their method applies singular value decomposition (SVD) to the noise-corrupted LP data matrix and replaces it with a matrix of lower rank in the least-squares (LS) sense. The rank of the reduced matrix, in principle, equals the number of sinusoids present. This is reflected in the magnitudes of the singular values and therefore requires no a priori information. A similar method, LPQRD, uses Householder triangularization (QRD) to solve the LP equations more efficiently (5), although no further improvement in the spectral estimate is achieved. A more accurate estimate of the spectral parameters in low-S/N environments has been demonstrated by Levy et al. (8) by incorporating a forward-backward approach (10) into the linear prediction equations, as compared to backward LPSVD. All of the above methods rely on a decomposition of the noise-corrupted LP data matrix.
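The rank-reduction idea described above can be sketched as follows: build the backward LP data matrix, truncate its SVD to the assumed number of sinusoids, and solve for the prediction coefficients with the truncated pseudoinverse. This is an illustrative sketch of the idea only, not the authors' exact algorithm; the function and variable names are our own.

```python
import numpy as np

def lpsvd_coeffs(x, m, k):
    """Backward LP coefficients via SVD with rank truncation.

    Builds the (noise-corrupted) LP data matrix X and observation
    vector b from the data x, replaces X by its nearest rank-k
    approximation in the least-squares sense, and solves X a = b
    with the truncated pseudoinverse. Here k plays the role of the
    number of sinusoids present, which the magnitudes of the
    singular values reveal without a priori information.
    (Hypothetical sketch; not the published LPSVD code.)
    """
    rows = len(x) - m
    # Backward LP: x[n] is predicted from the m samples that follow it.
    X = np.array([x[n + 1:n + m + 1] for n in range(rows)])  # LP data matrix
    b = x[:rows]                                             # observation vector
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]          # keep only the k largest singular values
    a = Vh.conj().T @ (s_inv * (U.conj().T @ b))
    return a
```

For noiseless data consisting of k damped complex exponentials, the truncated solution reproduces the backward prediction relation essentially exactly; with noise, discarding the small singular values suppresses the noise subspace.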
This provides an improved estimate of the NMR signal and reduces the perturbation effect on the "observation" vector from a least-squares viewpoint. A more realistic formulation incorporates the effects of noise into both the LP data matrix and the observation vector simultaneously. In this case the LP equations may be solved using the total least squares (TLS) method of fitting data (11-13). Consider the case where the data consist of exponentially damped complex sinusoids plus complex Gaussian white noise (w[n]) in the form
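The TLS treatment of the LP equations can be sketched with the textbook SVD construction: take the SVD of the augmented matrix [X | b] and read the solution off the right singular vector belonging to the smallest singular value. This is the standard TLS recipe under the signal model just described, not necessarily the authors' exact algorithm; all names here are illustrative.

```python
import numpy as np

def tls_lp_coeffs(x, m):
    """Total-least-squares solution of the backward LP equations.

    Assumed signal model (illustrative): x[n] = sum_k c_k * z_k**n + w[n],
    with z_k = exp(-alpha_k + 1j*omega_k) damped complex sinusoids and
    w[n] complex Gaussian white noise.

    Unlike ordinary LS, TLS admits noise in BOTH the LP data matrix X
    and the observation vector b: the SVD of the augmented matrix
    [X | b] yields the solution from the right singular vector paired
    with the smallest singular value.
    """
    rows = len(x) - m
    X = np.array([x[n + 1:n + m + 1] for n in range(rows)])  # backward LP matrix
    b = x[:rows]                                             # observation vector
    aug = np.column_stack([X, b])
    _, _, Vh = np.linalg.svd(aug)
    v = Vh[-1].conj()            # right singular vector, smallest sigma
    return -v[:m] / v[m]         # TLS solution of X a ~ b
```

For noiseless data the smallest singular value of [X | b] is zero and TLS recovers the exact prediction coefficients; with noise in both X and b it gives the perturbation-consistent fit that ordinary LS does not.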