Pre-trained vision-language models, particularly CLIP-based models, have advanced a wide range of visual tasks. Parameter-Efficient Fine-Tuning (PEFT) of such models has become the mainstream approach for adapting them to downstream tasks. Despite this progress, long-tailed distributions still hamper image recognition performance under current PEFT schemes. This paper therefore proposes Token Embeddings Augmentation (TEA) to tackle long-tailed learning under the PEFT paradigm. Through patch token semantic mining, TEA uncovers category-specific semantic details within patch tokens and uses them to enhance token embeddings, a process named Patch-based Embeddings Augmentation (PEA). A Probability Gate (PG) strategy is then designed to effectively enrich the semantic information of tail categories using the enhanced embeddings. A Token Embeddings Consistency (TEC) loss is further introduced to prioritize category-specific semantic information within tokens. Extensive experiments on multiple long-tailed datasets show that our method improves the performance of various PEFT methods with different classification loss functions, especially on tail categories. Our best configuration achieves state-of-the-art results on multiple datasets with negligible additional parameters and inference latency, enhancing the practicality of PEFT under long-tailed distributions.