Reasoning over knowledge graphs (KGs) has received increasing attention recently due to its promising applications in areas such as semantic search and recommendation systems. However, most reasoning models are inherently transductive and ignore the uncertainty inherent in KGs, making it difficult to generalize to unseen entities. Moreover, existing approaches usually require each entity in the KG to have sufficient training samples, which leads to overfitting on entities with few instances. In fact, long-tail distributions are widespread in KGs, and newly emerging entities tend to have only a few associated triples. In this work, we study knowledge graph reasoning under a challenging setting where only limited training samples are available. Specifically, we propose a Bayesian inductive reasoning method and incorporate meta-learning techniques for few-shot learning to address data scarcity and uncertainty. We design a Bayesian graph neural network as a meta-learner to perform Bayesian inference, extrapolating meta-knowledge from the observed KG to emerging entities. We conduct extensive experiments on two large-scale benchmark datasets, and the results demonstrate considerable performance improvements of the proposed approach over competitive baselines.
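To make the idea of a Bayesian GNN for inductive KG reasoning concrete, the sketch below shows one common way to approximate Bayesian inference in a relational GNN: Monte Carlo dropout, where dropout stays active at inference time so that repeated forward passes sample from an approximate posterior over the network's weights, yielding both a predictive score and an uncertainty estimate for triples involving an emerging entity. This is an illustrative sketch under those assumptions, not the paper's actual meta-learner; all class and function names (`RelationalGNNLayer`, `DistMultScorer`, `mc_predict`) and hyperparameters are hypothetical.

```python
# Illustrative sketch only (not the authors' implementation): a relational GNN
# encoder made approximately Bayesian via Monte Carlo dropout. All names and
# hyperparameters are hypothetical choices for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalGNNLayer(nn.Module):
    """Aggregates relation-specific messages from an entity's neighbors."""
    def __init__(self, dim: int, num_relations: int, p_drop: float = 0.2):
        super().__init__()
        self.rel_transform = nn.Embedding(num_relations, dim * dim)
        self.self_loop = nn.Linear(dim, dim)
        self.dropout = nn.Dropout(p_drop)  # kept active at test time for MC dropout
        self.dim = dim

    def forward(self, h, edge_index, edge_type):
        # h: (num_entities, dim); edge_index: (2, num_edges); edge_type: (num_edges,)
        src, dst = edge_index
        W = self.rel_transform(edge_type).view(-1, self.dim, self.dim)
        msgs = torch.bmm(W, h[src].unsqueeze(-1)).squeeze(-1)   # per-edge messages
        agg = torch.zeros_like(h).index_add_(0, dst, msgs)      # sum per target entity
        deg = torch.zeros(h.size(0), device=h.device).index_add_(
            0, dst, torch.ones_like(dst, dtype=h.dtype))
        agg = agg / deg.clamp(min=1).unsqueeze(-1)              # mean aggregation
        return F.relu(self.dropout(self.self_loop(h) + agg))

class DistMultScorer(nn.Module):
    """Simple bilinear triple scorer (DistMult), used here for illustration."""
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)

    def forward(self, h_s, r, h_o):
        return (h_s * self.rel(r) * h_o).sum(-1)

def mc_predict(encoder, scorer, h0, edge_index, edge_type, triples, n_samples=20):
    """Monte Carlo estimate of triple scores and predictive uncertainty.

    Keeping dropout on during inference means each forward pass draws one
    sample from an approximate posterior over the encoder's weights.
    """
    encoder.train()  # keep dropout active (do not call .eval())
    scores = []
    with torch.no_grad():
        for _ in range(n_samples):
            h = encoder(h0, edge_index, edge_type)  # one posterior sample
            s, r, o = triples.T                     # triples: (num_triples, 3)
            scores.append(scorer(h[s], r, h[o]))
    scores = torch.stack(scores)                    # (n_samples, num_triples)
    return scores.mean(0), scores.std(0)            # predictive mean and spread
```

In a few-shot setting, the encoder would compute an emerging entity's representation from its handful of observed triples rather than from a learned per-entity embedding, which is what makes the scheme inductive; MC dropout is only one of several posterior approximations a Bayesian GNN meta-learner could use.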