To inhibit the spread of rumors and their severe impacts, fact checking aims to retrieve relevant evidence to verify the veracity of a given statement. Fact checking methods typically use knowledge graphs (KGs) as external repositories and develop reasoning mechanisms to retrieve evidence for verifying the statement. Existing fact checking methods have focused on verifying statements that consist of a single claim expressed by one clause. However, real-world rumors are usually complex, and a textual statement is often composed of multiple clauses (i.e., it is represented as multiple claims rather than a single one), so multi-claim fact checking is not only necessary but also more valuable for practical applications. Multi-claim statements carry rich contextual information, and modeling the interactions among claims can facilitate better verification. In this paper, we propose a knowledge-enhanced learning and semantic composition model for multi-claim fact checking. Our model consists of two modules: KG-based learning enhancement and multi-claim semantic composition. To fully exploit the contextual information implied in multiple claims, the KG-based learning enhancement module learns dynamic context-specific representations by selectively aggregating relevant attributes of entities. To verify multiple claims robustly, the multi-claim semantic composition module learns a unified representation for the claims by modeling inter-claim interactions, and then verifies them as a whole on this basis. We conduct experimental studies to validate the proposed method, and the results on three typical datasets confirm the efficacy of our model for multi-claim fact checking.
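The two modules described above can be illustrated with a minimal sketch. The function names, vector dimensions, and the use of plain dot-product attention are assumptions for illustration only, not the paper's exact formulation: selective attribute aggregation is shown as claim-conditioned attention over an entity's attribute vectors, and semantic composition as pairwise attention among claim vectors followed by pooling.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_attributes(claim_vec, attr_vecs):
    """Context-specific entity representation (illustrative).

    claim_vec:  (d,) vector encoding the claim context
    attr_vecs:  (k, d) matrix of the entity's attribute embeddings
    Returns a (d,) vector: attributes weighted by relevance to the claim.
    """
    scores = attr_vecs @ claim_vec          # relevance of each attribute
    weights = softmax(scores)               # selective attention weights
    return weights @ attr_vecs              # weighted aggregation

def compose_claims(claim_vecs):
    """Unified multi-claim representation (illustrative).

    claim_vecs: (n, d) matrix, one row per claim.
    Models inter-claim interactions via pairwise dot-product attention,
    then mean-pools into a single (d,) statement representation.
    """
    scores = claim_vecs @ claim_vecs.T
    weights = np.apply_along_axis(softmax, 1, scores)
    interacted = weights @ claim_vecs       # each claim attends to the others
    return interacted.mean(axis=0)          # verify the claims as a whole
```

Under this sketch, a downstream classifier would score the pooled statement vector against retrieved KG evidence; the actual model may use learned projections and richer interaction layers than shown here.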