Real-time, accurate detection of three-dimensional (3D) objects from LiDAR is a central problem in environment perception for autonomous driving. Compared with two-stage and anchor-based 3D object detection methods, which suffer from high inference latency, single-stage anchor-free approaches are better suited for deployment in autonomous vehicles with strict real-time requirements. However, they suffer from insufficient spatial awareness, which can lead to detection errors such as false positives and false negatives and thereby increase the potential risks of autonomous driving. To address this, we focus on enhancing the spatial awareness of CenterPoint, a single-stage anchor-free 3D object detector widely used in industry. Considering the limited computational budget and the performance bottleneck caused by the pillar encoder, we propose an efficient SSDCM backbone to strengthen feature representation and extraction. Furthermore, we devise a simple BGC neck that weights and exchanges contextual information to deeply fuse multi-scale features. Combining the improved backbone and neck networks, we construct a single-stage anchor-free 3D object detection model with enhanced spatial awareness, named CenterPoint-Spatial Awareness Enhancement (CenterPoint-SAE). We evaluate CenterPoint-SAE on two large-scale, challenging autonomous driving datasets, nuScenes and Waymo. It achieves 53.3% mAP and 62.5% NDS on the nuScenes detection benchmark and runs inference at 11.1 FPS. Compared with the baseline, the upgraded networks deliver an improvement of 1.6% mAP and 1.2% NDS at minor cost. Notably, on the Waymo dataset, our method achieves detection performance competitive with two-stage and point-based methods.