Over the past few decades, advances in satellite and aerial imaging technology have made it possible to acquire high-quality remote sensing images. As one of the most active research directions in computer vision, remote sensing object detection is widely studied owing to its broad applications in military and civil fields. Algorithms based on convolutional neural networks have achieved great success in object detection. However, the large number of small, densely distributed objects set against complex backgrounds in remote sensing images poses significant challenges. In this work, an efficient anchor-free remote sensing object detector based on YOLO (You Only Look Once) is constructed. Firstly, the backbone network is simplified for high detection efficiency, and the detection scales are adjusted based on the backbone so that the features of densely distributed objects can be extracted effectively. Secondly, to address the shortcomings of CBAM (Convolutional Block Attention Module), an improved CJAM (Coordinate Joint Attention Mechanism) is proposed to handle object detection under complex backgrounds. In addition, the feature enhancement modules DPFE (Dual Path Feature Enhancement) and IRFE (Inception-ResNet-Feature Enhancement), as well as PRes2Net (Parallel Res2Net), are proposed; CJAM is combined with these modules to form DC-CSP_n, CSP-CJAM-IRFE, and CJAM-PRes2Net for better feature extraction. Thirdly, a lightweight auxiliary network is constructed to integrate the low-level and intermediate information extracted from remote sensing images into the high-level semantic information of the backbone, allowing the detector to locate targets efficiently. Fourthly, a Swin Transformer is introduced into the neck of the network so that global information can be captured effectively. On the DOTA1.5 and VEDAI datasets, both of which contain a large number of small objects, the proposed detector reaches an mAP of 77.07% and 63.83%, respectively.
Compared with advanced detectors such as YOLO V4, YOLO V5s, YOLO V5l, and YOLO V7, our approach achieves the highest mAP.