Few-shot learning aims to transfer knowledge learned from seen categories to unseen categories given only a few labeled references. It is also an essential step toward bridging the gap between humans and deep learning models in real-world applications. Despite extensive prior efforts to tackle this problem by finding an appropriate similarity function, most existing methods compute the similarity between support and query samples from only a single low-resolution representation pair. This representational limitation can make category predictions unstable. To stabilize metric learning, we present a novel method dubbed Fork Attention Adapter (FA-adapter), which seamlessly establishes dense feature similarity with newly generated nuanced features. A two-stage training phase makes the proposed method both more performant and more efficient. Extensive experiments demonstrate consistent and substantial accuracy gains on the fine-grained CUB and Aircraft benchmarks as well as the non-fine-grained mini-ImageNet and tiered-ImageNet benchmarks. By comprehensively studying and visualizing the knowledge learned from different source domains, we further present an extended version, FA-adapter++, which boosts performance in fine-grained scenarios.
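To make the contrast with single-pair similarity concrete, the dense feature similarity mentioned above can be sketched generically as a cosine similarity between every pair of spatial locations in a support and a query feature map. This is only an illustrative sketch, not the FA-adapter itself: the function name, the feature-map shapes, and the mean-pooled final score are all assumptions for demonstration.

```python
import numpy as np

def dense_similarity(support_feat, query_feat):
    """Cosine similarity between all pairs of spatial locations.

    support_feat, query_feat: (C, H, W) feature maps (illustrative shapes).
    Returns an (H*W, H*W) matrix of location-to-location similarities;
    pooling it yields one support-query score, in contrast to methods
    that compare only a single globally pooled representation pair.
    """
    C = support_feat.shape[0]
    s = support_feat.reshape(C, -1).T            # (H*W, C) support locations
    q = query_feat.reshape(C, -1).T              # (H*W, C) query locations
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-8)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    return s @ q.T                               # (H*W, H*W) dense similarities

# Toy check: an identical support/query pair is maximally self-similar
# along the diagonal of the dense similarity matrix.
rng = np.random.default_rng(0)
feat = rng.random((8, 5, 5))
sim = dense_similarity(feat, feat)
score = sim.mean()                               # one pooled support-query score
```

Because every spatial location contributes to the score, localized nuances (e.g., a discriminative part of a bird in CUB) can influence the match even when a global pooled descriptor would wash them out.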