The current mainstream object detection networks perform well on RGB visible images, but they require substantial computational resources and their performance degrades on low-resolution infrared images. To address these issues, we propose a lightweight algorithm, YOLO-SGF, based on you-only-look-once version 8 (YOLOv8). Firstly, the lightweight cross-scale feature-map fusion network GCFVoV is designed as the neck to address the poor detection accuracy of lightweight networks while maintaining low complexity. Within the GCFVoV neck, a lightweight GCVF module uses GSConv and Conv to process deep and shallow features respectively, which preserves the implicit connections between channels as much as possible and integrates multi-scale features. Secondly, we combine ShuffleNetV2-block1 with C2f for feature extraction, making the algorithm lighter and more effective. Finally, we propose the FIMPDIoU loss function, which focuses on objects that are easily overlooked in complex backgrounds and adjusts the prediction boxes with ratios tailored to objects of different sizes. Compared with YOLOv8 on our infrared dataset, YOLO-SGF reduces computational space complexity by 50 % and time complexity by 42 %, increases FPS32 by 36.3 %, and improves mAP@0.5∼0.95 by 1.1 % in object detection. Our algorithm enhances object detection in infrared images, especially under nighttime, low-light, and occluded conditions. YOLO-SGF can be deployed on embedded edge devices with limited computing power and provides a new approach to designing lightweight networks.
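The abstract names GSConv as the lightweight convolution inside the GCVF module, with standard Conv applied to shallow features. The exact GCVF design is not specified here, so the sketch below is only an illustration of the general idea: a GSConv-style block (dense conv, cheap depth-wise conv, channel shuffle) on the deep branch and a plain conv on the shallow branch, followed by cross-scale concatenation. The names `ConvBNSiLU` and `ToyCrossScaleFusion`, the kernel sizes, and the channel split are assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch, assuming a GSConv-style block as commonly described
# in the slim-neck/GSConv literature; the real GCVF module of YOLO-SGF is not
# detailed in the abstract.
import torch
import torch.nn as nn


class ConvBNSiLU(nn.Module):
    """Standard convolution + batch norm + SiLU (the usual YOLO-style 'Conv')."""
    def __init__(self, c_in, c_out, k=1, s=1, groups=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2,
                              groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class GSConv(nn.Module):
    """GSConv-style block: half the output channels come from a dense conv,
    the other half from a cheap depth-wise conv on that result; a channel
    shuffle then mixes the two groups so information flows across channels."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        c_half = c_out // 2
        self.dense = ConvBNSiLU(c_in, c_half, k, s)
        self.cheap = ConvBNSiLU(c_half, c_half, 5, 1, groups=c_half)  # depth-wise

    def forward(self, x):
        y1 = self.dense(x)
        y2 = self.cheap(y1)
        y = torch.cat((y1, y2), dim=1)
        b, c, h, w = y.shape
        # channel shuffle with 2 groups
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)


class ToyCrossScaleFusion(nn.Module):
    """Hypothetical fusion step in the spirit of the abstract: GSConv on the
    deep (low-resolution) feature, plain Conv on the shallow one, then concat."""
    def __init__(self, c_deep, c_shallow, c_out):
        super().__init__()
        self.deep_branch = GSConv(c_deep, c_out // 2)
        self.shallow_branch = ConvBNSiLU(c_shallow, c_out // 2, k=3)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, deep, shallow):
        return torch.cat((self.up(self.deep_branch(deep)),
                          self.shallow_branch(shallow)), dim=1)


if __name__ == "__main__":
    deep = torch.randn(1, 256, 20, 20)     # low-resolution, semantically rich
    shallow = torch.randn(1, 128, 40, 40)  # high-resolution, detail-rich
    fused = ToyCrossScaleFusion(256, 128, 256)(deep, shallow)
    print(fused.shape)  # torch.Size([1, 256, 40, 40])
```

The depth-wise branch and channel shuffle are what keep the parameter and FLOP counts low while still mixing information across channels, which matches the abstract's goal of low space and time complexity with preserved inter-channel connections.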