Source Camera Identification (SCI) achieves high accuracy in the matched setting, where the training and testing sample sets are drawn from the same statistical distribution. In practice, however, the training and testing sets, namely, the source and target domains, may consist of digital images that have been double compressed by various software and applications with different quantization tables. Existing methods perform poorly under such circumstances, so we aim to develop an algorithm that bridges the gap between the training and testing sets. In this work, we propose an algorithm, Tri-Transfer Learning (TTL), which is a cross-pollination of transfer learning and tri-training. In TTL, the transfer learning module transfers the knowledge learned from the training sets to improve identification performance on the testing sets. Unlike other methods, TTL adopts a semi-supervised approach that requires only a small number of training samples while achieving better performance. The tri-training module, a variant of co-training, facilitates knowledge transfer by assigning pseudo-labels to unlabelled instances and adding the labelled target instances to the training set in batches. Combining the two modules, our framework attains higher efficiency and better performance than other state-of-the-art methods on mismatched camera model identification, as demonstrated by experiments on the open-source Dresden Image Database.
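To make the tri-training step concrete, the following Python sketch shows one way such a pseudo-labelling loop could look: three base classifiers label target-domain samples, and only samples on which the classifiers agree are appended to the training pool in batches. This is a minimal illustration under simplifying assumptions (binary labels, unanimous agreement as the confidence criterion, generic scikit-learn base learners, and an assumed `batch_size`), not the authors' implementation.

```python
# Minimal sketch of a tri-training-style pseudo-labelling loop (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy stand-ins for labelled source-domain data and unlabelled target-domain data.
X_src = rng.normal(size=(300, 16))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(loc=0.3, size=(200, 16))   # shifted (mismatched) domain

# Three diverse base learners play the role of the tri-training classifiers.
clfs = [LogisticRegression(max_iter=1000),
        DecisionTreeClassifier(max_depth=5),
        KNeighborsClassifier(n_neighbors=5)]

X_train, y_train = X_src.copy(), y_src.copy()
unlabelled = X_tgt.copy()
batch_size = 50                                # assumed batch size

for _ in range(3):                             # a few pseudo-labelling rounds
    for clf in clfs:
        # Bootstrap-sample the current training set so the three views differ.
        idx = rng.integers(0, len(X_train), len(X_train))
        clf.fit(X_train[idx], y_train[idx])

    if len(unlabelled) == 0:
        break
    batch = unlabelled[:batch_size]
    preds = np.stack([clf.predict(batch) for clf in clfs])   # shape (3, n)

    # Keep only samples on which all three classifiers agree (confidence proxy).
    agree = (preds[0] == preds[1]) & (preds[1] == preds[2])
    pseudo = preds[0]                          # unanimous, so any row works

    # Add the pseudo-labelled target batch to the training set.
    X_train = np.vstack([X_train, batch[agree]])
    y_train = np.concatenate([y_train, pseudo[agree]])
    unlabelled = unlabelled[batch_size:]
```

In the full method, the disagreement criterion, batching schedule, and choice of base classifiers would follow the paper's tri-training and transfer learning modules rather than the generic choices assumed here.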