Object detection in remote sensing images is a challenging task in which detection accuracy is the central concern. Remote sensing scenes often contain objects of widely varying scales arranged densely, leading to low accuracy, missed detections, and false alarms. To address these challenges, this paper introduces MCSC-Net, a rotated object detector designed to improve detection of multi-scale objects in densely arranged scenes. The proposed detector features a novel neck network, MCFN-V3, which incorporates a cross-fusion structure: features are extracted simultaneously in the upper and lower layers and fed back to the middle layer. Through this cross-fusion of feature layers, MCFN-V3 integrates multi-scale feature information and thereby strengthens the network's ability to detect objects at multiple scales. In addition, the paper presents the STC module to address the weak feature correlation that convolutional neural networks (CNNs) exhibit when processing high-resolution images; the module enlarges the receptive range and improves feature expressiveness, strengthening the relationships among features within each layer. Furthermore, in the object localization stage, the paper transforms the angle regression problem into a classification task, making the network better suited to detecting rotated objects in dense scenes. To assess the effectiveness of the algorithm, experiments were conducted on two publicly available remote sensing datasets, DOTA and UCAS-AOD. Our method achieves a mean average precision (mAP) of 77.4% on DOTA and 97.0% on UCAS-AOD. These experimental results validate the effectiveness of the proposed approach.
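The conversion of angle regression into a classification task mentioned above can be illustrated with a circular smooth label style encoding, in which the angle range is discretized into bins and the target distribution is smoothed across neighboring bins to respect angular periodicity. This is a minimal sketch for illustration only; the function name, bin count, and window width are assumptions, not the paper's exact formulation:

```python
import numpy as np

def circular_smooth_label(angle_deg, num_bins=180, sigma=4.0):
    """Encode a continuous angle as a smoothed classification target.

    Hypothetical sketch: the angle range [0, 180) is split into
    num_bins classes, and a Gaussian window centered on the true bin
    spreads mass to neighboring bins, wrapping around at the boundary
    so that 179 degrees and 0 degrees are treated as close.
    """
    bin_width = 180.0 / num_bins
    target_bin = int(angle_deg // bin_width) % num_bins
    bins = np.arange(num_bins)
    # Circular distance between each bin index and the target bin.
    d = np.minimum(np.abs(bins - target_bin),
                   num_bins - np.abs(bins - target_bin))
    # Gaussian window: peak of 1.0 at the target bin, decaying outward.
    return np.exp(-0.5 * (d / sigma) ** 2)

label = circular_smooth_label(45.0)
print(label.argmax())  # → 45
```

Compared with direct regression, such an encoding avoids the discontinuity in the loss when the predicted angle wraps past the boundary of its range, which is one common motivation for treating the angle as a classification target.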