Over the past few decades, cognitive science has identified several forms of reasoning that make essential use of conceptual knowledge. Despite significant theoretical and empirical progress, there is still no unified framework for understanding how concepts are used in reasoning. This paper argues that the theory of conceptual spaces can fill this gap. Our strategy is to demonstrate how various inference mechanisms that clearly rely on conceptual information (including reasoning based on similarity, typicality, and diagnosticity) can be modeled using principles derived from conceptual spaces. Our first topic is the role of expectations in inductive reasoning and their relation to the structure of our concepts. Our second topic is the relationship between generic expressions in natural language and common-sense reasoning; we propose that the strength of a generic can be described in terms of distances between properties and prototypes in conceptual spaces. Our third topic is category-based induction, for which we demonstrate that the theory of conceptual spaces can serve as a comprehensive model. Our final topic is analogy. We review some proposals in this area, present a taxonomy of analogical relations, and show how to model them in terms of distances in conceptual spaces. We also briefly discuss the implications of the model for reasoning with concepts in artificial systems.
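As an informal illustration of the machinery the abstract refers to, the sketch below shows the standard device of the conceptual-spaces literature: similarity modeled as an exponentially decaying function of weighted distance between points (here, an observed instance and a category prototype). The function names, dimension weights, and numerical values are illustrative assumptions, not definitions from the paper.

```python
import numpy as np

def distance(x, y, w):
    """Weighted Euclidean distance between two points in a conceptual space.

    Each quality dimension (e.g., hue, size) carries a salience weight w[i].
    """
    x, y, w = np.asarray(x), np.asarray(y), np.asarray(w)
    return np.sqrt(np.sum(w * (x - y) ** 2))

def similarity(x, y, w, c=1.0):
    """Similarity as exponential decay of distance (Shepard's universal
    law of generalization), the usual choice in conceptual-spaces models.
    The sensitivity parameter c is an assumed free parameter."""
    return np.exp(-c * distance(x, y, w))

# Toy example with hypothetical values: how close is an observed item
# to a category prototype along two quality dimensions?
prototype = [0.8, 0.2]   # assumed prototype location
item      = [0.6, 0.3]   # assumed observed instance
weights   = [1.0, 0.5]   # assumed dimension salience weights
print(similarity(item, prototype, weights))  # ~0.81
```

On this picture, typicality can be read off as similarity to the prototype, which is roughly the intuition behind the distance-based models the paper develops.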