Recently, significant improvements in object detection have been achieved by increasing the size of convolutional neural network (CNN) models, but the resulting growth in computational complexity poses an obstacle to practical applications. Moreover, some lightweight methods fail to account for the characteristics of object detection and suffer a large loss of accuracy. In this paper, we design a multi-scale feature lightweight network structure and a specific convolution module for object detection based on depthwise separable convolution, which not only reduces computational complexity but also improves accuracy by exploiting position information specific to object detection. Furthermore, to improve detection accuracy for small objects, we construct a multi-channel position-aware map and propose a knowledge-distillation-based training method for object detection to train the lightweight model effectively. Finally, we propose a training strategy based on a key-layer guiding structure to balance performance against training time. Experimental results on the COCO dataset, taking the state-of-the-art object detection algorithm YOLOv3 as the baseline, show that our model size is compressed to 1/11 of the baseline while accuracy drops by 7.4 mmAP, and computational latency on the GPU and ARM platforms is reduced to 43.7% and 0.29%, respectively. Compared with the state-of-the-art lightweight object detection model, MNet V2 + SSDLite, the accuracy of our model increases by 3.5 mmAP while the inference time stays nearly the same. On the PASCAL VOC2007 dataset, the accuracy of our model increases by 5.2 mAP over the state-of-the-art lightweight algorithm based on knowledge distillation. Therefore, in terms of accuracy, parameter count, and real-time performance, our algorithm outperforms lightweight algorithms based on knowledge distillation or depthwise separable convolution.
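To illustrate why depthwise separable convolution is the basis for lightweight designs like the one above, the following minimal sketch compares parameter counts of a standard convolution against its depthwise separable replacement. The channel sizes and kernel size chosen here are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch: parameter-count comparison between a standard k x k convolution
# and a depthwise separable convolution (depthwise + 1x1 pointwise).

def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def dsconv_params(c_in, c_out, k):
    # Depthwise stage: one k x k kernel per input channel.
    # Pointwise stage: a 1x1 convolution mixing channels.
    return c_in * k * k + c_in * c_out

# Illustrative layer sizes (assumed, not from the paper).
c_in, c_out, k = 256, 256, 3
std = conv_params(c_in, c_out, k)   # 589824
ds = dsconv_params(c_in, c_out, k)  # 67840
print(f"standard: {std}, separable: {ds}, ratio: {std / ds:.1f}x")
```

For this configuration the separable form uses roughly 8.7x fewer parameters, which is the kind of reduction that makes the model-size compression reported in the abstract plausible.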