Domain generalization (DG) aims to learn transferable knowledge from multiple source domains and generalize it to unseen target domains. To this end, an intuitive solution is to seek domain-invariant representations via a generative adversarial mechanism or by minimizing cross-domain discrepancy. However, the imbalanced data scales across source domains and categories that are widespread in real-world applications become a key bottleneck for improving the generalization ability of a model, because they hinder learning a robust classification model. Motivated by this observation, we first formulate a practical and challenging imbalanced domain generalization (IDG) scenario, and then propose a simple yet effective novel method, the generative inference network (GINet), which augments reliable samples for minority domains/categories to promote the discriminative ability of the learned model. Concretely, GINet exploits the available cross-domain images of the same category and estimates their common latent variable, which helps discover domain-invariant knowledge for the unseen target domain. From these latent variables, GINet further generates novel samples under an optimal transport constraint and deploys them to endow the desired model with greater robustness and generalization ability. Extensive empirical analysis and ablation studies on three popular benchmarks under normal DG and IDG setups demonstrate the advantage of our method over other DG methods in improving model generalization. The source code is available on GitHub at https://github.com/HaifengXia/IDG.
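To make the described workflow concrete, the sketch below illustrates the general idea, not the authors' actual implementation: features of one category drawn from several (possibly imbalanced) source domains are encoded into a shared latent variable, new samples are decoded from it, and an entropic optimal-transport (Sinkhorn) penalty keeps the generated features close to the real ones. All names and hyperparameters here (ToyGINet, sinkhorn_distance, eps, n_iters, feat_dim, latent_dim) are hypothetical assumptions for illustration only.

```python
# Minimal illustrative sketch of latent-variable-based sample generation with
# an optimal-transport constraint; NOT the released GINet code.
import torch
import torch.nn as nn

def sinkhorn_distance(x, y, eps=0.1, n_iters=50):
    """Entropic OT cost between two feature batches with uniform marginals."""
    cost = torch.cdist(x, y, p=2) ** 2          # pairwise squared distances
    cost = cost / cost.max()                    # normalize for numerical stability
    K = torch.exp(-cost / eps)                  # Gibbs kernel
    a = torch.full((x.size(0),), 1.0 / x.size(0))
    b = torch.full((y.size(0),), 1.0 / y.size(0))
    u = torch.ones_like(a)
    for _ in range(n_iters):                    # Sinkhorn iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    pi = torch.diag(u) @ K @ torch.diag(v)      # transport plan
    return (pi * cost).sum()

class ToyGINet(nn.Module):
    """Hypothetical encoder/decoder pair sharing one latent space per category."""
    def __init__(self, feat_dim=512, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))

    def forward(self, domain_feats):
        # domain_feats: list of [n_i, feat_dim] tensors of the SAME category,
        # one tensor per source domain (sizes may be imbalanced).
        latents = [self.encoder(f).mean(dim=0) for f in domain_feats]
        z = torch.stack(latents).mean(dim=0)    # shared latent estimate
        noise = 0.1 * torch.randn(len(domain_feats), z.size(0))
        generated = self.decoder(z + noise)     # novel samples from the latent
        real = torch.cat(domain_feats)
        return generated, sinkhorn_distance(generated, real)

# Usage: minority domains/categories receive extra synthetic features.
model = ToyGINet()
feats = [torch.randn(8, 512), torch.randn(2, 512)]   # imbalanced source domains
fake, ot_loss = model(feats)
print(fake.shape, ot_loss.item())
```

In a full training loop, the generated features would be appended to the minority domain/category batches and the OT penalty added to the classification loss; those details are omitted here.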