Change detection, a crucial task in Earth observation, aims to identify changed pixels between multi-temporal remote-sensing images captured over the same geographical area. In practical applications, however, pseudo changes arising from diverse imaging conditions and different remote-sensing platforms pose a major challenge. Existing methods either overlook the differing imaging styles of bi-temporal images or transfer styles across the bi-temporal pair via domain adaptation, which may lose ground details. To address these problems, we introduce disentangled representation learning, which mitigates differences in imaging styles while preserving content details, into a change detection framework named Content Cleansing Network (CCNet). Specifically, CCNet embeds each input image into two distinct subspaces: a shared content space and a private style space. Separating out the style space mitigates the style discrepancy caused by differing imaging conditions, while the extracted content space captures the semantic features that are essential for change detection. The content-space encoder adopts a multi-resolution parallel structure, facilitating robust extraction of semantic information and spatial details. The cleansed content features then enable accurate detection of changes in the land surface. Additionally, a lightweight decoder for image restoration enhances the independence and interpretability of the disentangled spaces. To verify the proposed method, CCNet is applied to five public datasets and a multi-temporal dataset collected in this study. Comparative experiments against eleven advanced methods demonstrate the effectiveness and superiority of CCNet. The experimental results show that our method robustly handles both temporal and platform variations, making it a promising approach for change detection under complex conditions and for supporting downstream applications.
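To make the content/style disentanglement described above concrete, the sketch below shows one minimal way such a framework could be organized: each image is embedded into a shared content space and a private style space, and a lightweight decoder reconstructs the image from both so that the two spaces stay informative. All module names, layer choices, and dimensions here are illustrative assumptions and do not reflect the authors' actual CCNet implementation.

```python
# Illustrative sketch only: module names, layer choices, and dimensions are
# assumptions for exposition, not the authors' CCNet code.
import torch
import torch.nn as nn


class DisentangledEncoder(nn.Module):
    """Embed an image into a shared content space and a private style space."""

    def __init__(self, in_ch=3, content_ch=64, style_dim=8):
        super().__init__()
        # Content branch: keeps spatial resolution, so ground details survive.
        self.content = nn.Sequential(
            nn.Conv2d(in_ch, content_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(content_ch, content_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Style branch: a global vector summarizing imaging conditions.
        self.style = nn.Sequential(
            nn.Conv2d(in_ch, content_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(content_ch, style_dim),
        )

    def forward(self, x):
        return self.content(x), self.style(x)


class RestorationDecoder(nn.Module):
    """Reconstruct the image from (content, style), keeping both spaces meaningful."""

    def __init__(self, content_ch=64, style_dim=8, out_ch=3):
        super().__init__()
        self.dec = nn.Sequential(
            nn.Conv2d(content_ch + style_dim, content_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(content_ch, out_ch, 3, padding=1),
        )

    def forward(self, content, style):
        b, _, h, w = content.shape
        # Broadcast the style vector over the spatial grid before decoding.
        style_map = style.view(b, -1, 1, 1).expand(b, style.shape[1], h, w)
        return self.dec(torch.cat([content, style_map], dim=1))


# Bi-temporal images share the encoder; changes are read from the difference
# of the cleansed content features, while reconstruction regularizes the spaces.
enc, dec = DisentangledEncoder(), RestorationDecoder()
t1, t2 = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
c1, s1 = enc(t1)
c2, s2 = enc(t2)
recon_loss = nn.functional.l1_loss(dec(c1, s1), t1) + nn.functional.l1_loss(dec(c2, s2), t2)
change_cue = (c1 - c2).abs()  # would feed a change-detection head in a full model
```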