Knowledge transfer is critical for exploiting data from multi-source domains, but most existing techniques are not privacy-preserving. Data leakage incidents, together with the advance of big-data-driven Artificial Intelligence, have raised serious concerns over data security, and this neglect of privacy renders such approaches impractical. For intrusion detection, the Deep Autoencoding Gaussian Mixture Model (DAGMM) concatenates a compression network and an estimation network and optimizes them jointly in an unsupervised manner. However, DAGMM still suffers from the lack of diversely distributed intrusion samples in real-life scenarios, where organizations are neither willing nor legally permitted to share data. Given rising public concern over data privacy and recent scandals, federated learning, which shares only model parameters, is therefore adopted to improve model performance while preserving data privacy. It also eases organizations' competitive concerns about sharing data with their rivals. This study proposes a Federated Deep Autoencoding Gaussian Mixture Model (F-DAGMM) that enables privacy-preserving knowledge transfer and thereby supports inter-organizational cooperation and high-level decision making. A two-phase federated optimization strategy is introduced to address the performance degradation caused by significant differences among individual clients' data distributions. Extensive experiments demonstrate the superiority of the proposed F-DAGMM.
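To make the federated setting concrete, the following is a minimal sketch of parameter-only sharing over a DAGMM-like model, assuming PyTorch. The toy network sizes, the synthetic non-IID client data, the reconstruction-only local loss, and the FedAvg-style aggregation (`TinyDAGMM`, `local_update`, `fed_avg`) are illustrative assumptions, not the paper's exact F-DAGMM or its two-phase optimization strategy; the sketch only shows that raw data never leaves a client, while model parameters do.

```python
# Illustrative sketch: federated aggregation of a DAGMM-like model (assumptions, not the paper's F-DAGMM).
import copy
import torch
import torch.nn as nn

class TinyDAGMM(nn.Module):
    """Toy stand-in: a compression (autoencoder) network plus an estimation network."""
    def __init__(self, in_dim=8, latent_dim=2, n_components=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 4), nn.Tanh(), nn.Linear(4, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 4), nn.Tanh(), nn.Linear(4, in_dim))
        # Estimation network predicts soft GMM-component memberships from the latent code.
        self.estimator = nn.Sequential(nn.Linear(latent_dim, 8), nn.Tanh(),
                                       nn.Linear(8, n_components), nn.Softmax(dim=1))

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        gamma = self.estimator(z)
        return x_hat, z, gamma

def local_update(global_model, data, epochs=1, lr=1e-3):
    """One client's local training pass (reconstruction loss only, for brevity)."""
    model = copy.deepcopy(global_model)          # raw data stays on the client
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        x_hat, _, _ = model(data)
        loss = nn.functional.mse_loss(x_hat, data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict(), data.shape[0]     # only parameters and a sample count leave

def fed_avg(client_states, client_sizes):
    """Size-weighted average of client parameters (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(state[key] * (n / total) for state, n in zip(client_states, client_sizes))
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = TinyDAGMM()
    # Two clients with differently distributed (non-IID) synthetic data.
    clients = [torch.randn(64, 8), torch.randn(32, 8) + 2.0]
    for rnd in range(3):  # federated rounds: the server sees parameters, never data
        states, sizes = zip(*(local_update(global_model, d) for d in clients))
        global_model.load_state_dict(fed_avg(list(states), list(sizes)))
        print(f"round {rnd}: aggregated updates from {len(clients)} clients")
```

Under this kind of aggregation, strongly divergent client distributions degrade the shared model, which is the motivation for the two-phase federated optimization strategy proposed in the paper.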