This paper presents an extensive experimental performance evaluation of Skart, an automated sequential batch-means procedure for constructing an asymptotically valid confidence interval (CI) on the steady-state mean of a simulation output process. Skart is designed to deliver a CI satisfying user-specified requirements on absolute or relative precision as well as coverage probability. Skart applies separate adjustments to the half-length of the classical batch-means CI to account for the effects of skewness (nonnormality) and autocorrelation of the batch means on the distribution of the underlying Student's t-statistic. Skart also delivers a point estimator for the steady-state mean that is approximately free of initialization bias. In an experimental performance evaluation involving a wide range of test processes, Skart compared favorably with other steady-state simulation analysis methods, namely its predecessors ASAP3, WASSP, and SBatch, as well as ABATCH, LBATCH, the Heidelberger–Welch procedure, and the Law–Carson procedure. Specifically, Skart exhibited competitive sampling efficiency and closer conformance to the nominal CI coverage probabilities than the other procedures, especially on the most difficult test processes.
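
To make the mechanism concrete, the sketch below shows a classical nonoverlapping batch-means CI together with skewness and autocorrelation adjustments of the general kind described above. This is an illustrative sketch only, not a reproduction of the published Skart procedure: the specific adjustment formulas (a lag-one-correlation variance inflation and a Cornish–Fisher-style transform of the t-quantiles) and all function names are assumptions chosen for illustration.

```python
# Illustrative sketch (not the published Skart algorithm): a classical
# nonoverlapping batch-means CI on the steady-state mean, with placeholder
# skewness and autocorrelation adjustments of the kind the abstract describes.
import numpy as np
from scipy import stats


def adjusted_batch_means_ci(x, k, alpha=0.05):
    """Batch-means CI on the mean of x using k batches (illustrative only)."""
    m = len(x) // k                              # batch size; leftover data truncated
    y = x[: k * m].reshape(k, m).mean(axis=1)    # batch means Y_1, ..., Y_k
    ybar = y.mean()
    s = y.std(ddof=1)

    # Assumed correlation adjustment: inflate the variance estimate by
    # (1 + phi) / (1 - phi), where phi is the lag-one correlation of the batch means.
    phi = np.corrcoef(y[:-1], y[1:])[0, 1]
    corr_adj = (1.0 + phi) / (1.0 - phi)

    # Assumed skewness adjustment: a Cornish-Fisher-style monotone transform of the
    # t-quantiles that shifts the CI asymmetrically when the batch means are skewed.
    beta = stats.skew(y, bias=False) / (6.0 * np.sqrt(k))

    def g(zeta):
        if abs(beta) < 1e-12:                    # no skewness: plain t-quantile
            return zeta
        return (np.cbrt(1.0 + 6.0 * beta * (zeta - beta)) - 1.0) / (2.0 * beta)

    t_lo = stats.t.ppf(alpha / 2, df=k - 1)
    t_hi = stats.t.ppf(1 - alpha / 2, df=k - 1)
    half = s * np.sqrt(corr_adj / k)             # adjusted standard error of ybar
    return ybar + g(t_lo) * half, ybar + g(t_hi) * half
```

In the automated procedure itself, steps such as choosing the batch count and batch size, deleting the warm-up period, and iterating until the precision requirement is met are all handled sequentially; the sketch shows only how the two half-length adjustments enter a single CI computation.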