• A global-aware sample module (GSM) is proposed to capture the global description of each sample.
• A global-aware task module (GTM) is used to obtain the global description across tasks.
• A feature fusion module incorporating both global-task and global-sample information is used for label propagation.

Few-shot learning remains a challenging problem because it must classify unseen categories with only a few labeled samples as supervision. The tasks and samples can differ greatly across few-shot problems, which makes it even more difficult. Because of the local connectivity of CNNs, they cannot capture the global description of a sample, so the learned features are not discriminative enough from a global viewpoint. Meanwhile, a sample usually yields similar features in different tasks; ignoring the global information of the task further weakens feature discrimination. To address these issues, we propose a Dual Global-Aware method for label Propagation (DGAP), which encodes two kinds of global descriptions to enhance the discriminative power of the learned features. On the sample level, the global-aware sample module (GSM) is employed to obtain a contextual description and enhance the feature representation capability of each sample. On the task level, the global-aware task module (GTM) embeds the features of the current task into a more appropriate, discriminative, and task-oriented position in the feature space. Finally, a feature fusion module combines the features obtained from both the global-sample and global-task perspectives. Built on the label propagation method, the proposed DGAP improves accuracy by approximately 2–5% over the baseline on different benchmarks (mini-Imagenet and tiered-Imagenet) and across different backbone structures (Conv4 and ResNet12), reaching state-of-the-art performance.
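The abstract does not detail the label propagation step that DGAP builds on, so the following is a minimal, self-contained sketch of standard graph-based label propagation (Zhou et al.-style label spreading), not the authors' exact implementation. The fused features, the RBF affinity, and all parameter values (`sigma`, `alpha`, `iters`) here are illustrative assumptions.

```python
import math

def rbf_affinity(feats, sigma=1.0):
    """Pairwise Gaussian-kernel affinities with a zeroed diagonal.
    feats: list of feature vectors (e.g. fused GSM/GTM features)."""
    n = len(feats)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(feats[i], feats[j]))
                w[i][j] = math.exp(-d2 / (2.0 * sigma ** 2))
    return w

def propagate(feats, labels, n_classes, alpha=0.99, iters=50):
    """labels: class index for support samples, None for query samples.
    Returns a predicted class index for every sample."""
    n = len(feats)
    w = rbf_affinity(feats)
    # Symmetric normalisation: S = D^{-1/2} W D^{-1/2}
    d = [sum(row) for row in w]
    s = [[w[i][j] / math.sqrt(d[i] * d[j]) for j in range(n)]
         for i in range(n)]
    # One-hot seed matrix Y (all-zero rows for unlabeled queries)
    y = [[1.0 if labels[i] == c else 0.0 for c in range(n_classes)]
         for i in range(n)]
    f = [row[:] for row in y]
    for _ in range(iters):
        # Label spreading update: F <- alpha * S F + (1 - alpha) * Y
        f = [[alpha * sum(s[i][k] * f[k][c] for k in range(n))
              + (1.0 - alpha) * y[i][c]
              for c in range(n_classes)] for i in range(n)]
    return [max(range(n_classes), key=lambda c, i=i: f[i][c]) for i in range(n)]

# Toy 2-way task: two labeled support points per class, two unlabeled queries.
feats = [[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9],
         [0.1, 0.2], [2.9, 3.1]]
labels = [0, 0, 1, 1, None, None]
pred = propagate(feats, labels, n_classes=2)  # queries inherit the nearby class
```

In the full method, `feats` would be the output of the feature fusion module, so the quality of the GSM/GTM features directly determines the graph over which labels are spread.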