<abstract><p>Federated learning (FL) has attracted considerable interest as a promising machine learning approach for protecting user privacy and data security. It requires clients to send model parameters to the server rather than their private datasets, thereby protecting privacy to a significant extent. However, FL scenarios exhibit several types of heterogeneity (data, model, objective, and system), each posing distinct challenges to the canonical FL algorithm, FedAvg. In this work, we propose a novel FL framework that integrates knowledge distillation and Bayesian inference to address this multi-dimensional heterogeneity problem. On the client side, we approximate the local likelihood function with a scaled multi-dimensional Gaussian probability density function (PDF); moreover, knowledge distillation allows each client to design a customized model according to its own requirements. On the server side, a multi-Gaussian product mechanism is employed to construct and maximize the global likelihood function, greatly enhancing the accuracy of the aggregated model under data heterogeneity. Finally, extensive empirical experiments on various datasets and settings show that both the global and local models achieve better performance and require fewer communication rounds to converge than other FL techniques.</p></abstract>