Few-shot learning aims to recognize novel concepts with only a few examples. To this end, previous studies resort to acquiring a strong inductive bias via meta-learning on a group of similar tasks, which, however, requires a large labeled base dataset from which to sample training tasks. In this paper, we show that such an inductive bias can be learned from a flat collection of unlabeled images and instantiated as representations transferable between seen and unseen classes. Specifically, we propose a novel unsupervised Part Discovery Network (PDN) to learn transferable representations from unlabeled images: it automatically selects the most discriminative part of an input image and then maximizes its similarity to the global view of the input and to other neighbors with similar semantics. To better leverage the learned representations for few-shot learning, we further propose Part-Aligned Similarity (PAS), whose key idea is to measure image similarity based on a set of discriminative and aligned parts. We conduct extensive studies on five popular few-shot learning datasets to evaluate our approach. The experimental results show that our approach outperforms previous unsupervised methods by a large margin and is even comparable to state-of-the-art supervised methods.
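The abstract only names the Part-Aligned Similarity idea without giving its formulation; as a rough, hedged illustration of measuring image similarity over aligned parts, the sketch below averages per-part cosine similarities between two images. The function name `part_aligned_similarity`, the tensor layout, and the averaging scheme are assumptions made for illustration, not the paper's actual definition of PAS, and the part discovery and alignment steps are not shown.

```python
import torch
import torch.nn.functional as F

def part_aligned_similarity(parts_a: torch.Tensor, parts_b: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: similarity between two images from aligned part features.

    parts_a, parts_b: (P, D) tensors of part features for two images, where
    row p of each tensor is assumed to describe the same aligned part.
    """
    parts_a = F.normalize(parts_a, dim=-1)  # unit-length part descriptors
    parts_b = F.normalize(parts_b, dim=-1)
    # Cosine similarity per aligned part, averaged into a single image-level score.
    return (parts_a * parts_b).sum(dim=-1).mean()

# Toy usage: 4 parts with 128-dimensional features per image.
a, b = torch.randn(4, 128), torch.randn(4, 128)
print(part_aligned_similarity(a, b).item())
```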