To construct an embedding of a nonlinear time series, one must choose an appropriate delay time τ_d. Often, τ_d is estimated using the autocorrelation function; however, this does not treat the nonlinearity appropriately, and it may yield an incorrect value for τ_d. On the other hand, the correct value of τ_d can be found from the mutual information, but this process is computationally cumbersome. Here, we suggest a simpler method for estimating τ_d using the correlation integral. We call this the C–C method, and we test it on several nonlinear time series, obtaining estimates of τ_d in agreement with those obtained using the mutual information. Furthermore, some researchers have suggested that one should not choose a fixed delay time τ_d, independent of the embedding dimension m, but should instead choose an appropriate value for the delay time window τ_w = (m−1)τ_d, which is the total time spanned by the components of each embedded point. Unfortunately, τ_w cannot be estimated using the autocorrelation function or the mutual information, and no standard procedure for estimating τ_w has emerged. However, we show that the C–C method can also be used to estimate τ_w. Roughly speaking, τ_w is the optimal time for independence of the data, while τ_d is the first locally optimal time. As tests, we apply the C–C method to the Lorenz system, a three-dimensional irrational torus, the Rössler system, and the Rabinovich–Fabrikant system. We also demonstrate the robustness of this method to the presence of noise.
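To make the procedure concrete, the following Python sketch implements one common reading of the C–C statistics: S(m, r, t) is the average, over the t disjoint sub-series of the data at lag t, of C_s(m, r, t) − C_s(1, r, t)^m, where C is the correlation integral; ΔS̄(t) is the spread of S over several radii, and S_cor(t) = ΔS̄(t) + |S̄(t)|. The embedding dimensions m = 2,…,5, the radii r_j = jσ/2, the convention that τ_d is the first local minimum of ΔS̄(t) and τ_w the global minimum of S_cor(t), and all function names are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import numpy as np

def correlation_integral(x, m, t, r):
    """C(m, r, t): fraction of pairs of m-dimensional delay vectors
    (embedding delay t, maximum norm) within distance r."""
    M = len(x) - (m - 1) * t
    if M < 2:
        return 0.0
    X = np.column_stack([x[i * t : i * t + M] for i in range(m)])
    d = np.zeros((M, M))
    for k in range(m):  # Chebyshev distance, built one coordinate at a time
        d = np.maximum(d, np.abs(X[:, k][:, None] - X[:, k][None, :]))
    iu = np.triu_indices(M, k=1)
    return float(np.mean(d[iu] <= r))

def s_statistic(x, m, t, r):
    """S(m, r, t): average over the t disjoint sub-series of
    C_s(m, r, t) - C_s(1, r, t)**m."""
    subs = [x[s::t] for s in range(t)]  # lag t in x is lag 1 within a sub-series
    return float(np.mean([
        correlation_integral(s, m, 1, r) - correlation_integral(s, 1, 1, r) ** m
        for s in subs
    ]))

def cc_method(x, t_max=20, ms=(2, 3, 4, 5)):
    """Estimate (tau_d, tau_w) from the C-C statistics (assumed conventions)."""
    sigma = np.std(x)
    rs = [j * sigma / 2 for j in (1, 2, 3, 4)]  # assumed radii r_j = j*sigma/2
    S_bar, dS_bar = [], []
    for t in range(1, t_max + 1):
        S = np.array([[s_statistic(x, m, t, r) for r in rs] for m in ms])
        S_bar.append(S.mean())
        dS_bar.append(np.mean(S.max(axis=1) - S.min(axis=1)))
    S_bar, dS_bar = np.array(S_bar), np.array(dS_bar)
    # tau_d: first local minimum of the spread (the first locally optimal time)
    mins = [i for i in range(1, len(dS_bar) - 1)
            if dS_bar[i] < dS_bar[i - 1] and dS_bar[i] <= dS_bar[i + 1]]
    tau_d = (mins[0] if mins else int(np.argmin(dS_bar))) + 1
    # tau_w: global minimum of S_cor (the optimal time for independence)
    tau_w = int(np.argmin(dS_bar + np.abs(S_bar))) + 1
    return tau_d, tau_w

if __name__ == "__main__":
    # Toy demonstration on a noisy sinusoid; the O(N^2) pair counting
    # keeps the demo series deliberately short.
    rng = np.random.default_rng(0)
    x = np.sin(0.15 * np.arange(1200)) + 0.05 * rng.standard_normal(1200)
    tau_d, tau_w = cc_method(x)
    print(f"estimated tau_d = {tau_d}, tau_w = {tau_w}")
```

For a realistic test one would replace the toy series with data from one of the systems named above (e.g. a numerically integrated Lorenz trajectory) and check the resulting τ_d against the first minimum of the mutual information.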