In few-shot learning (FSL), the meta-learning approach (MLA) focuses on learning transferable knowledge from a large number of auxiliary FSL tasks to enable fast generalization to a new task. For a given FSL task, owing to the inter-class distribution discrepancy, each class requires a specific embedding (i.e., a mapping function) that maps samples into an ideal semantic space where samples of this class are well separated from those of the other classes. Moreover, these embeddings may vary across tasks. Hence, one crucial piece of knowledge for an MLA is how to construct an optimal embedding for each class from the few training samples given in an FSL task. However, most existing MLAs rarely consider this and thus show limited generalization capacity. To mitigate this problem, instead of directly constructing class-adaptive embeddings, we present a new MLA that learns to class-adaptively manipulate the features of samples for accurate classification in a new FSL task. Specifically, for a new FSL task, the proposed MLA first learns to generate class-specific weights from the training samples by exploiting the inter-class distribution discrepancy between each class and the others. The generated weights are then combined, via the Hadamard product, with the features produced by a task-agnostic embedding module. In this way, the proposed MLA can dynamically enhance or suppress specific semantic dimensions of sample features according to the distribution of each class. This amounts to constructing a class-adaptive embedding for each class, but in a simpler way that helps avoid over-fitting and scales to cases with many classes. To demonstrate its efficacy, we evaluate the proposed MLA on four benchmark FSL datasets under various settings and report superior performance over existing state-of-the-art methods.
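
Below is a minimal PyTorch sketch of the class-adaptive feature modulation described above: class-specific weights are generated from the support set and applied to task-agnostic features via a Hadamard product. It is an illustration under stated assumptions, not the authors' actual architecture; in particular, the `WeightGenerator` module, the prototype-based classifier, and the use of the mean of the other class prototypes as a proxy for inter-class distribution discrepancy are all hypothetical choices.

```python
# Sketch of class-adaptive feature modulation via Hadamard products.
# All names (WeightGenerator, feature dimensions, etc.) are illustrative
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class WeightGenerator(nn.Module):
    """Hypothetical generator of class-specific modulation weights.

    Maps each class prototype, together with the mean of the other
    prototypes (a crude stand-in for inter-class distribution
    discrepancy), to a positive weight vector for that class.
    """

    def __init__(self, feature_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feature_dim, feature_dim),
            nn.ReLU(),
            nn.Linear(feature_dim, feature_dim),
        )

    def forward(self, prototypes: torch.Tensor) -> torch.Tensor:
        # prototypes: (num_classes, D)
        num_classes = prototypes.size(0)
        # Summarize the remaining classes for each class by their mean.
        total = prototypes.sum(dim=0, keepdim=True)                   # (1, D)
        others_mean = (total - prototypes) / max(num_classes - 1, 1)  # (C, D)
        ctx = torch.cat([prototypes, others_mean], dim=-1)            # (C, 2D)
        # Scaled sigmoid keeps weights in (0, 2): values above 1 enhance
        # a semantic dimension, values below 1 depress it.
        return 2.0 * torch.sigmoid(self.net(ctx))                    # (C, D)


def classify_queries(support_feats, support_labels, query_feats, generator):
    """Score queries against class prototypes after Hadamard modulation.

    The feature tensors are assumed to come from a shared task-agnostic
    embedding module (e.g., a convolutional backbone).
    """
    num_classes = int(support_labels.max().item()) + 1
    # Class prototypes from the support-set features.
    prototypes = torch.stack([
        support_feats[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])                                                 # (C, D)
    weights = generator(prototypes)                    # (C, D)
    # Hadamard product: modulate prototypes and queries per class.
    mod_protos = weights * prototypes                  # (C, D)
    mod_queries = weights.unsqueeze(0) * query_feats.unsqueeze(1)  # (Q, C, D)
    # Negative squared distance in each class-adapted space as the logit.
    logits = -((mod_queries - mod_protos.unsqueeze(0)) ** 2).sum(dim=-1)
    return logits                                      # (Q, C)


if __name__ == "__main__":
    torch.manual_seed(0)
    D, C, shots, Q = 64, 5, 5, 15   # 5-way 5-shot task, 15 queries
    support = torch.randn(C * shots, D)
    labels = torch.arange(C).repeat_interleave(shots)
    queries = torch.randn(Q, D)
    gen = WeightGenerator(D)
    print(classify_queries(support, labels, queries, gen).shape)  # (15, 5)
```

Because the weights only rescale existing feature dimensions rather than parameterizing a separate embedding per class, the number of adaptation parameters stays fixed as the number of classes grows, which is consistent with the scalability claim in the abstract.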