Consider a stochastic process X(t) = μ(t) + ϵ(t), where μ(t) is the deterministic part and ϵ(t) the purely nondeterministic part. Assume that X(t) is observed at n equidistant time points, τ units apart, and that a "model" is given that ascribes to μ(t) a specific analytic form f(t, θ), where θ is a parameter vector. If, on physical or other grounds, f(t, θ) can be meaningfully decomposed into nonoverlapping components f(t, θ) = ∑_{i=1}^{k} f_i(t, θ_i), the recognition problem is: how does our ability to recognize the presence or absence of components (without necessarily any parameter estimation) depend on n, τ, and k? A general-purpose procedure for attacking this problem is proposed and applied to the case f(t, θ) = ∑_{i=1}^{k} β_i e^{−α_i t}, α_i > 0. Mathematical complexity necessitates numerical evaluation of the analysis, but some seemingly general results have been obtained: (i) for fixed n and k there is an "optimal" τ-interval that increases with n but decreases when k increases; (ii) an increase in τ affects the recognizability of the components asymmetrically (the system's behavior changes qualitatively with time scale); (iii) the best unbiased estimators (in the Cramér–Rao sense) are, for finite τ, inconsistent with respect to n but become consistent as τ approaches zero; (iv) for fixed n and optimal τ, the coefficients of variation of the best unbiased estimators increase approximately exponentially with k, suggesting substantial limitations on the largest number of components that can be recognized in data.
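The sum-of-exponentials model sampled at n equidistant points can be sketched numerically as follows; this is a minimal illustration only, with hypothetical values for n, τ, k, β_i, and α_i (none are specified in the abstract):

```python
import numpy as np

def sum_of_exponentials(t, betas, alphas):
    """Evaluate mu(t) = f(t, theta) = sum_{i=1}^{k} beta_i * exp(-alpha_i * t)."""
    t = np.asarray(t, dtype=float)
    return sum(b * np.exp(-a * t) for b, a in zip(betas, alphas))

# n equidistant observation times, tau units apart (illustrative choices)
n, tau = 10, 0.5
t = tau * np.arange(n)

# k = 2 components with hypothetical amplitudes and decay rates (alpha_i > 0)
betas, alphas = [2.0, 1.0], [0.3, 1.5]
mu = sum_of_exponentials(t, betas, alphas)  # deterministic part at the n sample points
```

An observed series X(t) would then be mu plus a realization of the noise process ϵ(t); the recognition question is whether the individual decaying components remain distinguishable in such samples as n, τ, and k vary.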