Recent advances show that Transformer-based models and object detection-based models play an indispensable role in Visual Question Answering (VQA). However, object detection-based models have significant limitations due to their redundant and complex detection-box generation process. In contrast, Vision-and-Language Pre-training (VLP) models achieve better performance but demand substantial computing power. To this end, we present the Weight-Sharing Hybrid Attention Network (WHAN), a lightweight Transformer-based VQA model. In WHAN, we replace the object detection network with a Transformer encoder and use LoRA (Low-Rank Adaptation) to adapt the language model to interrogative sentences. We propose a Weight-Sharing Hybrid Attention (WHA) module with parallel residual adapters, which significantly reduces the number of trainable parameters, and we design DWA and BVA modules that allow the model to perform attention operations at different scales. Experiments on the VQA-v2, COCO-QA, GQA, and CLEVR datasets show that WHAN achieves competitive performance with far fewer trainable parameters.
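To make the parameter-efficiency idea concrete, the sketch below shows a generic LoRA-style layer: a frozen pre-trained linear projection augmented with a trainable low-rank update on a parallel residual path. This is a minimal illustration of the general technique referenced in the abstract, not the authors' WHAN implementation; the class name `LoRALinear`, the rank, and the scaling factor are assumptions chosen for the example.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (LoRA-style sketch).

    Illustrative only: layer names, rank, and scaling are assumptions,
    not the configuration used in WHAN.
    """

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pre-trained projection stays frozen during fine-tuning.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Low-rank factors A (down-projection) and B (up-projection) are the
        # only trainable parameters; B starts at zero so training begins
        # exactly at the frozen base model.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x A^T B^T  (parallel residual path)
        return self.base(x) + self.scaling * (x @ self.lora_a.T) @ self.lora_b.T


if __name__ == "__main__":
    layer = LoRALinear(768, 768, rank=8)
    x = torch.randn(2, 16, 768)  # (batch, tokens, hidden)
    print(layer(x).shape)        # torch.Size([2, 16, 768])
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable params: {trainable}")  # only the low-rank factors
```

With rank 8 and hidden size 768, the trainable update adds only 2 x 8 x 768 parameters per projection, which is the kind of reduction in trainable parameters that parallel low-rank adapters are intended to provide.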