Labeling time series datasets requires specialized knowledge and is time-consuming, and transfer learning suffers from the gap between the source and target domains. We therefore propose a time series representation learning framework based on Time-Wavelet Contrasting (TS-TWC), which is pre-trained on unlabeled samples and fine-tuned on a small amount of labeled data. Features in the wavelet domain, which we call wavelet series, serve as a complement to the time domain. First, a time series and its wavelet series are augmented by an attention-based augmentation structure. A Time-Wavelet contrasting module then contrasts the time series with its augmented views, and contrasts the time series views with their corresponding wavelet series views. In addition, a triple-view contrasting module uses the Daubechies and Haar wavelet bases to increase the number of views available for contrastive learning, which allows a smaller pre-training batch size and improves learning. During fine-tuning and inference, this module also provides a tri-view fusion structure that helps learn and extract discriminative representations. Finally, we evaluate the model on five pairs of datasets under transfer learning. Experiments show that the proposed framework learns transferable representations during pre-training and discriminative representations during fine-tuning, and it outperforms state-of-the-art models on most metrics.
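The triple views above can be sketched as a one-level discrete wavelet transform of the input series under the two bases named in the abstract (Haar and a Daubechies wavelet). The filter-bank implementation below is a minimal illustrative sketch, not the authors' code: the choice of Daubechies order (db2), the periodic signal extension, and the concatenation of approximation and detail coefficients into one "wavelet series" are all our assumptions.

```python
import numpy as np

# Analysis low-pass filters for two orthogonal wavelet bases. The paper
# names the Daubechies and Haar bases; db2 is an assumed Daubechies order.
HAAR = np.array([1.0, 1.0]) / np.sqrt(2.0)
SQ3 = np.sqrt(3.0)
DB2 = np.array([1 + SQ3, 3 + SQ3, 3 - SQ3, 1 - SQ3]) / (4.0 * np.sqrt(2.0))

def dwt_level(x, h):
    """One level of the discrete wavelet transform of a 1-D signal x:
    circular correlation with the low-pass filter h and its quadrature
    mirror high-pass filter, followed by dyadic downsampling."""
    g = h[::-1] * (-1.0) ** np.arange(len(h))   # high-pass filter (QMF)
    xp = np.concatenate([x, x[: len(h) - 1]])   # periodic extension
    approx = np.convolve(xp, h[::-1], "valid")[::2]
    detail = np.convolve(xp, g[::-1], "valid")[::2]
    return approx, detail

def triple_views(x):
    """Build the three views contrasted during pre-training: the raw
    time series plus its wavelet series under the two bases."""
    views = {"time": x}
    for name, h in (("haar", HAAR), ("db2", DB2)):
        ca, cd = dwt_level(x, h)
        views[name] = np.concatenate([ca, cd])  # one wavelet series
    return views
```

Because both filter banks are orthogonal, each wavelet series has the same length and energy as the input, so the three views describe the same sample without losing information.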