Vehicle detection in congested urban scenes is essential for traffic control and safety management. However, the dense arrangement and mutual occlusion of multi-scale vehicles in such environments pose considerable challenges for detection systems. To address these challenges, this paper introduces a novel object detection method, termed the triple focus diffusion network (TFDNet). First, gradient convolution is introduced to construct the C2f-EIRM module, which replaces the original C2f module and enhances the network's capacity to extract edge information. Second, building on the Path Aggregation Network and drawing on the idea of the Asymptotic Feature Pyramid Network, the triple focus diffusion module is proposed to improve the network's ability to fuse multi-scale features. Finally, the SPPF-ELA module employs the Efficient Local Attention mechanism to integrate multi-scale information, thereby significantly reducing the impact of background noise on detection accuracy. Experiments on the VisDrone 2021 dataset show that TFDNet achieves an average detection accuracy of 38.4%, a 6.5% improvement over the original algorithm, while its mAP50:90 improves by 3.7%. On the UAVDT dataset, TFDNet likewise outperforms the original algorithm by 3.3%. With a processing speed of 55.4 FPS, TFDNet satisfies the real-time requirements of vehicle detection.
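To make the SPPF-ELA component concrete, the following is a minimal PyTorch sketch of one plausible wiring: an SPPF block (as used in YOLOv5/v8) whose fused multi-scale output is re-weighted by an Efficient Local Attention (ELA) gate. The abstract does not specify the exact internal structure, so the kernel sizes, channel counts, normalization groups, and the placement of ELA after the SPPF fusion are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn


class ELA(nn.Module):
    """Efficient Local Attention (sketch): strip-pool along H and W,
    then a 1D depthwise conv + GroupNorm + sigmoid gate per axis."""
    def __init__(self, channels: int, kernel_size: int = 7, gn_groups: int = 16):
        super().__init__()
        pad = kernel_size // 2
        # depthwise 1D conv over the spatial axis; kernel size is an assumed hyperparameter
        self.conv = nn.Conv1d(channels, channels, kernel_size, padding=pad,
                              groups=channels, bias=False)
        self.gn = nn.GroupNorm(gn_groups, channels)  # gn_groups must divide channels
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = x.mean(dim=3)  # (B, C, H): one descriptor per row
        x_w = x.mean(dim=2)  # (B, C, W): one descriptor per column
        a_h = self.act(self.gn(self.conv(x_h))).view(b, c, h, 1)
        a_w = self.act(self.gn(self.conv(x_w))).view(b, c, 1, w)
        return x * a_h * a_w  # re-weight positions along both spatial axes


class SPPF_ELA(nn.Module):
    """SPPF followed by an ELA gate on the fused multi-scale features (assumed composition)."""
    def __init__(self, c_in: int, c_out: int, k: int = 5):
        super().__init__()
        c_hidden = c_in // 2
        self.cv1 = nn.Sequential(nn.Conv2d(c_in, c_hidden, 1, bias=False),
                                 nn.BatchNorm2d(c_hidden), nn.SiLU())
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.cv2 = nn.Sequential(nn.Conv2d(c_hidden * 4, c_out, 1, bias=False),
                                 nn.BatchNorm2d(c_out), nn.SiLU())
        self.ela = ELA(c_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        out = self.cv2(torch.cat((x, y1, y2, y3), dim=1))  # fuse receptive fields
        return self.ela(out)  # attenuate background responses before the detection head


if __name__ == "__main__":
    feat = torch.randn(1, 512, 20, 20)            # deepest backbone feature map (example size)
    print(SPPF_ELA(512, 512)(feat).shape)          # torch.Size([1, 512, 20, 20])
```

In this reading, the sigmoid gates suppress activations in rows and columns dominated by background, which is one way the attention could reduce the background-noise effect described above.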