3D object detection in LiDAR point clouds is crucial for applications such as autonomous driving. Two-stage approaches with point cloud completion achieve remarkable performance by generating semantic surface points for foreground objects or by learning bird's-eye-view shape heatmap labels. However, these methods require additional completion datasets, leading to substantial computation and memory demands. In this context, we propose a Boundary Points Guided 3D (BPG3D) object detection method that supplements point cloud boundary information without requiring additional data. Specifically, we generate Region of Interest (RoI) boundary points that aggregate neighboring voxel information at the RoI boundary during the refinement stage, compensating for the missing boundary information. In addition, we design a Dual Feature Selection (DFS) module that adaptively fuses RoI grid point features and RoI boundary point features for bounding box refinement at negligible computational cost. Finally, inspired by tensor decomposition theory, we reconstruct high-rank tensors from low-rank tensors in the point cloud feature encoder to enhance contextual semantic information. The proposed method achieves 65.81% mAP on the KITTI test set, obtaining a good trade-off between accuracy and efficiency.
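
To make the adaptive fusion idea concrete, the following is a minimal PyTorch sketch of a gated dual-feature fusion in the spirit of the DFS module. It is an illustration only: the `DualFeatureSelection` class name, the channel-wise sigmoid gate, and all tensor shapes and layer sizes are assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' implementation) of an adaptive fusion of
# RoI grid-point and RoI boundary-point features. The gating design and all
# shapes are assumptions made for illustration.
import torch
import torch.nn as nn


class DualFeatureSelection(nn.Module):
    """Hypothetical gated fusion of two per-RoI feature sets."""

    def __init__(self, channels: int):
        super().__init__()
        # Channel-wise gate predicted from the concatenated features.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, grid_feat: torch.Tensor, boundary_feat: torch.Tensor) -> torch.Tensor:
        # grid_feat, boundary_feat: (num_rois, channels)
        g = self.gate(torch.cat([grid_feat, boundary_feat], dim=-1))
        # Convex combination selects per channel between the two feature sources.
        return g * grid_feat + (1.0 - g) * boundary_feat


# Toy usage with random tensors standing in for pooled RoI features.
fusion = DualFeatureSelection(channels=128)
grid = torch.randn(64, 128)       # e.g. RoI grid-point features
boundary = torch.randn(64, 128)   # e.g. RoI boundary-point features
fused = fusion(grid, boundary)    # (64, 128) features fed to box refinement
```

Because the gate adds only a single linear layer over already-pooled RoI features, this style of fusion keeps the extra computational cost negligible relative to the backbone and refinement head.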