In this paper, a denoising temporal convolutional recurrent autoencoder (DTCRAE) is proposed to improve the performance of the temporal convolutional network (TCN) on time series classification (TSC). The DTCRAE consists of a TCN encoder and a Gated Recurrent Unit (GRU) decoder. Training the DTCRAE for TSC proceeds in two phases: an unsupervised pre-training phase for the DTCRAE, followed by a supervised training phase in which a TCN classifier is developed. Computational studies are conducted to demonstrate the effectiveness of DTCRAEs for TSC on three datasets: Sequential MNIST, Permuted MNIST, and Sequential CIFAR-10. Computational results show that the pre-trained DTCRAE provides a better initial structure for a TCN classifier, yielding higher precision, recall, F1-score, and accuracy. A sensitivity analysis on the validation set shows that the pre-trained DTCRAE is robust to changes in the batch size, noise rate, and dropout rate. Benchmarked against a number of state-of-the-art algorithms, DTCRAEs achieve the best TSC accuracies on two of the three datasets and an accuracy comparable to the best on the third. These results verify the advantage of applying DTCRAEs to enhance the TSC performance of the TCN.