Object detection is a crucial task in autonomous driving. This paper presents an effective algorithm for pedestrian detection on panoramic depth maps transformed from 3D-LiDAR data. Firstly, the 3D point clouds are transformed into panoramic depth maps, which are then enhanced. Secondly, the ground points are removed from the 3D point clouds; the remaining points are clustered, filtered, and projected onto the previously generated panoramic depth maps to obtain new panoramic depth maps. Finally, the new panoramic depth maps are stitched together to generate depth maps of different sizes, which serve as input to the improved PVANET for pedestrian detection. The panoramic depth map used by the proposed algorithm is a 2D image transformed from the 3D point cloud; it captures the full surroundings of the sensor and is therefore well suited to environment perception in autonomous driving. Unlike detection algorithms based on RGB images, the proposed algorithm is unaffected by illumination and maintains its average precision for pedestrian detection at night. To improve robustness in detecting small objects such as pedestrians, the structure of the original PVANET is modified in this paper. A new dataset is built by processing the 3D-LiDAR data, and the model trained on it performs well. The experimental results show that the proposed algorithm achieves high accuracy and robustness in pedestrian detection under different illumination conditions. Furthermore, when trained on the new dataset, the model improves average precision by 2.8–5.1% over the original PVANET, making it more suitable for autonomous driving applications.
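As a rough illustration of the first step, the sketch below shows one common way to convert a 3D point cloud into a panoramic depth map via spherical projection. This is a minimal sketch under assumed parameters, not the paper's implementation: the function name, angular resolutions, and vertical field of view (roughly those of a 64-beam LiDAR) are illustrative assumptions.

```python
import numpy as np

def pointcloud_to_panoramic_depth(points,
                                  h_res_deg=0.2,
                                  v_res_deg=0.42,
                                  v_fov_deg=(-24.9, 2.0)):
    """Project an (N, 3) LiDAR point cloud onto a panoramic depth map
    via spherical projection. All resolution and field-of-view values
    are illustrative assumptions, not the paper's exact parameters."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                    # range (depth) per point
    azimuth = np.degrees(np.arctan2(y, x))             # horizontal angle in [-180, 180)
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))  # vertical angle

    width = int(360.0 / h_res_deg)
    height = int((v_fov_deg[1] - v_fov_deg[0]) / v_res_deg)

    # Map angles to pixel coordinates of the panoramic image.
    u = ((azimuth + 180.0) / h_res_deg).astype(int) % width
    v = ((v_fov_deg[1] - elevation) / v_res_deg).astype(int)

    depth_map = np.zeros((height, width), dtype=np.float32)
    valid = (v >= 0) & (v < height) & (r > 0)
    # When several points fall into one pixel, keep the nearest return:
    # write far points first so near points overwrite them.
    order = np.argsort(-r[valid])
    depth_map[v[valid][order], u[valid][order]] = r[valid][order]
    return depth_map
```

Each pixel of the resulting image stores the range of the nearest LiDAR return in that direction, so the map covers the full 360° panorama around the sensor, matching the representation described above.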