We consider the batching approach to interval estimation of the mean of a stochastic process based on simulation output. In particular, using batch means processes generated from underlying AR(1) processes and M/M/1 queueing systems, we study the effects of undetected autocorrelation on both the probability of coverage and the performance of the Shapiro-Wilk test of normality. We also conduct power comparisons of two tests used to detect first-order autocorrelation in batch means processes: the von Neumann ratio test and a rank version thereof proposed by Bartels (1982). Our results indicate that a large number of batches (perhaps at least 100) should be used when testing for the presence of autocorrelation.
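To make the quantities above concrete, the following is a minimal sketch (not the authors' code) of the batch means construction and the two autocorrelation statistics, applied to a simulated AR(1) series. It assumes Python with NumPy; the normalization of the von Neumann ratio and the rank construction follow common conventions and may differ in detail from the test versions studied in the paper, and the parameter values are illustrative only.

```python
import numpy as np

def simulate_ar1(n, phi, sigma=1.0, rng=None):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t, e_t ~ N(0, sigma^2)."""
    rng = np.random.default_rng(rng)
    x = np.empty(n)
    # Start from the stationary marginal distribution of the process.
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def batch_means(x, k):
    """Split x into k contiguous batches and return the k batch means."""
    m = len(x) // k                      # batch size; any remainder is discarded
    return x[: k * m].reshape(k, m).mean(axis=1)

def von_neumann_ratio(y):
    """Mean squared successive difference divided by the sample variance.
    Under independence its expectation is near 2; values well below 2
    point to positive lag-1 autocorrelation."""
    n = len(y)
    num = np.sum(np.diff(y) ** 2) / (n - 1)
    den = np.sum((y - y.mean()) ** 2) / n
    return num / den

def rank_von_neumann_ratio(y):
    """Bartels-style rank variant: the von Neumann ratio applied to ranks
    (simple ranks here; no tie handling, which continuous output rarely needs)."""
    ranks = np.argsort(np.argsort(y)) + 1.0
    return von_neumann_ratio(ranks)

if __name__ == "__main__":
    x = simulate_ar1(n=100_000, phi=0.9, rng=42)
    for k in (25, 100, 400):
        bm = batch_means(x, k)
        print(f"k={k:4d}  VN={von_neumann_ratio(bm):.3f}  "
              f"rank-VN={rank_von_neumann_ratio(bm):.3f}")
```

Running a sketch like this illustrates the abstract's point: with few batches (small k) the batch means of a positively correlated series often produce ratios whose departure from 2 is hard to distinguish from sampling noise, whereas with many batches the residual lag-1 dependence becomes detectable.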