3D object detection, as the core of the autonomous-vehicle perception module, is essential for efficient transportation and a comfortable passenger experience. However, 3D object detection under adverse weather remains a challenge that hinders the advancement of autonomous vehicles to higher levels of autonomy. Accurate 3D object detection under adverse weather is therefore increasingly crucial, as it forms the foundation for trajectory planning and driving-strategy making in autonomous vehicles, with the potential to transform how both goods and passengers are transported. Advances in Light Detection and Ranging (LiDAR) technology have driven the development of 3D object detection in recent years. However, adverse weather, which inevitably occurs in real-world driving, can degrade LiDAR measurement accuracy and point density and introduce particle interference, making it difficult to detect accurate 3D bounding boxes from sparse, incomplete point clouds. Therefore, this work presents a novel geometric information constraint network for 3D object detection from LiDAR point clouds under adverse weather (GIC-Net). We focus on incorporating geometric location information and line geometric feature information into the network to counter adverse weather. Specifically, we propose a geometric location constrained backbone module (GLC) to reduce interference from rain and snow particles while ensuring sufficient receptive fields, and we design a line geometric feature constraint module (LGFC) to impose line constraints on 3D bounding boxes during training. Finally, a line loss function is designed, and features from the GLC and LGFC modules are fed into a multi-task detection head for accurate 3D bounding box prediction.
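As a rough illustration of what a line constraint on bounding boxes could look like (the paper's actual LGFC module and line loss are not specified here, so the functions `bev_box_corners` and `line_loss` below are hypothetical), one simple formulation penalizes the endpoint distances between corresponding bird's-eye-view box edges of the predicted and ground-truth boxes:

```python
import numpy as np

def bev_box_corners(cx, cy, w, l, yaw):
    """Corners of a bird's-eye-view box given center, width, length, and yaw.

    Hypothetical helper: length is taken along the local x-axis,
    width along the local y-axis.
    """
    local = np.array([[ l / 2,  w / 2], [ l / 2, -w / 2],
                      [-l / 2, -w / 2], [-l / 2,  w / 2]])
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return local @ rot.T + np.array([cx, cy])

def line_loss(pred_box, gt_box):
    """Mean endpoint distance over the four BEV edges of two boxes.

    Hypothetical line loss: each edge connects corner i and corner
    (i + 1) % 4, and both endpoints of every edge are penalized.
    """
    p = bev_box_corners(*pred_box)
    g = bev_box_corners(*gt_box)
    loss = 0.0
    for i in range(4):
        j = (i + 1) % 4
        loss += np.linalg.norm(p[i] - g[i]) + np.linalg.norm(p[j] - g[j])
    return loss / 8.0
```

This sketch only constrains box edges in the BEV plane; a full 3D formulation would also involve the vertical edges, but the idea of supervising edge lines rather than only centers and sizes is the same.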
Experiments on the Canadian Adverse Driving Conditions (CADC) autonomous vehicle dataset demonstrate the superiority of our method over six state-of-the-art methods under adverse weather, achieving mAP at least 13.32%, 4.67%, and 10.44% higher than the compared methods in the car, truck, and pedestrian classes, respectively. We further verify that our network generalizes better than the compared methods.