Voice Conversion (VC) can manipulate the speaker identity of a speech signal to make it sound like a specific target speaker, which makes it harder for a human listener or a speaker verification/identification system to trace the real identity of the source speaker. However, extracting source-speaker features from converted audio is challenging, since the target speaker's features dominate the converted signal. In this paper, to extract source-speaker features from audio processed by VC methods, a speaker filtration block is designed, which uses mask estimation to identify source speakers from manipulated speech signals by filtering out the target speaker's features in the converted audio. Extensive experiments are conducted to evaluate the effectiveness of the proposed model in tracing the source speakers of audio converted by ADAIN-VC, AGAIN-VC, VQMIVC, and FREEVC. Experimental results demonstrate the effectiveness of the proposed model compared to competitive baselines in speaker verification/identification scenarios. Notably, it performs well even when applied to unknown VC methods. Furthermore, the experiments also show that training on audio generated by multiple VC methods can further improve the traceability of the source speaker.
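The mask-estimation idea behind the speaker filtration block can be illustrated with a minimal sketch: a soft mask in (0, 1) is applied elementwise to converted-audio features to suppress target-speaker components and emphasize residual source-speaker cues. All names and shapes here are hypothetical; in the actual model the mask logits would come from a trained mask-estimation network, not random values.

```python
import numpy as np

def apply_speaker_filtration(converted_feats, mask_logits):
    """Suppress target-speaker components of converted-audio features.

    converted_feats: (T, D) feature frames extracted from converted audio.
    mask_logits:     (T, D) unnormalized scores from a (hypothetical)
                     mask-estimation network; higher means "keep".
    Returns masked features emphasizing residual source-speaker cues.
    """
    mask = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid -> soft mask in (0, 1)
    return converted_feats * mask

# Toy usage: random features and logits stand in for real network outputs.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 64))    # 100 frames, 64-dim features
logits = rng.standard_normal((100, 64))
filtered = apply_speaker_filtration(feats, logits)
print(filtered.shape)  # (100, 64)
```

Because the mask is strictly between 0 and 1, filtration only attenuates feature magnitudes; the downstream speaker embedding would then be computed from the filtered features.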