Traditional machine learning approaches often rely on a central server, where raw data is collected or model updates are aggregated in a centralized manner. However, such approaches are vulnerable to many attacks, especially when the server itself is malicious. Recently, a new distributed machine learning paradigm, called Swarm Learning (SL), has been proposed to support decentralized training without a central server. In each training round, any participant node may be selected to serve as a temporary server. Participant nodes can thus achieve fair and secure model aggregation without sharing their private datasets with a central server. To the best of our knowledge, the security threats facing swarm learning remain unexplored. In this paper, we investigate how to implant backdoor attacks against swarm learning to illustrate its potential security risks. Experimental results confirm the effectiveness of our method, achieving high attack accuracy in different scenarios. We also study several defense methods to mitigate these backdoor attacks.
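The per-round leader selection and aggregation described above can be sketched as follows. This is a minimal illustrative simulation, not the paper's protocol: the random leader election, the FedAvg-style averaging, and the `swarm_learning_round` helper are all simplifying assumptions (real SL deployments use blockchain-based coordination).

```python
import random

def swarm_learning_round(node_params, rng):
    """One SL round: elect a temporary leader, who aggregates all nodes' models.

    node_params: list of per-node parameter vectors (lists of floats).
    Hypothetical sketch; assumes uniform random leader election.
    """
    leader = rng.randrange(len(node_params))  # temporary server for this round
    dim = len(node_params[0])
    # Leader aggregates by simple averaging (FedAvg-style assumption);
    # raw training data never leaves the individual nodes.
    merged = [sum(p[i] for p in node_params) / len(node_params) for i in range(dim)]
    # Leader broadcasts the merged model back to every participant.
    return leader, [merged[:] for _ in node_params]

# Three nodes holding toy 2-dimensional "models"
rng = random.Random(0)
nodes = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
leader, nodes = swarm_learning_round(nodes, rng)
# Every node now holds the averaged model [3.0, 4.0].
```

Because the leader rotates each round, a single compromised node cannot control aggregation indefinitely, which is why backdoor attacks must survive repeated averaging by honest leaders.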