Abstract

Large pre-trained models based on Vision Transformers (ViTs) can contain billions of parameters, demanding substantial computational resources and storage, which restricts their transferability across tasks. Recent approaches use adapter fine-tuning to address this drawback, but there is still room to reduce the number of tunable parameters and improve accuracy. To address this challenge, we propose an adapter fine-tuning module called Lv-Adapter, which consists of a linear layer and a vector. The module enables targeted parameter fine-tuning of pre-trained models by learning both the prior knowledge of the pre-training task and the information specific to the downstream task, allowing transfer to a variety of downstream image and video tasks. Compared to full fine-tuning, Lv-Adapter has several appealing advantages. First, by adding only about 3% extra parameters to the ViT, Lv-Adapter achieves accuracy comparable to full fine-tuning and even surpasses it significantly on action recognition benchmarks. Second, Lv-Adapter is a lightweight module that, owing to its simplicity, can be plugged into different transformer models. Finally, to validate these claims, we conduct extensive experiments on five image and video datasets, providing evidence for the effectiveness of Lv-Adapter. When only 3.5% of extra parameters are updated, it achieves relative boosts of about 13% and 24% over the fully fine-tuned model on SSv2 and HMDB51, respectively.
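
The abstract describes Lv-Adapter only at a high level: a linear layer plus a vector, attached to a frozen pre-trained ViT so that only a few percent of parameters are updated. Below is a minimal PyTorch sketch of that general idea, not the paper's actual implementation; the residual placement, the per-channel scaling vector, and the names LvAdapterSketch and freeze_backbone_except_adapters are illustrative assumptions.

    import torch
    import torch.nn as nn

    class LvAdapterSketch(nn.Module):
        """Hypothetical adapter: one linear layer modulated by a learnable vector."""

        def __init__(self, dim: int):
            super().__init__()
            self.linear = nn.Linear(dim, dim)            # tunable linear projection
            self.vector = nn.Parameter(torch.ones(dim))  # tunable per-channel vector

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Residual form: frozen backbone features pass through unchanged,
            # plus a lightweight learned correction (placement is an assumption).
            return x + self.vector * self.linear(x)

    def freeze_backbone_except_adapters(model: nn.Module) -> None:
        # Train only parameters whose names contain "adapter" (assumes the
        # adapter modules are registered under attributes with that substring),
        # so the tunable fraction stays at a few percent of the backbone.
        for name, p in model.named_parameters():
            p.requires_grad = "adapter" in name

In this sketch the adapter would typically be inserted after the attention or MLP sub-block of each transformer layer; where exactly Lv-Adapter is placed is defined in the paper itself.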
