3D reconstruction from a single image is a core computer vision problem. Thanks to advances in deep learning, single-image 3D reconstruction has made impressive progress in recent years. Existing methods use the Chamfer distance as a loss function to guide the training of the neural network. However, the Chamfer loss assigns equal weight to every point in the 3D point cloud. It therefore tends to sacrifice fine-grained and thin structures to avoid incurring a high loss, which leads to visually unsatisfactory results. This paper proposes a framework that recovers a detailed three-dimensional point cloud from a single image by focusing more on boundaries (edge and corner points). Experimental results demonstrate that the proposed method significantly outperforms existing techniques, both qualitatively and quantitatively, while using fewer training parameters.
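For reference, the sketch below illustrates the equal-weighting behavior criticized above: a standard (unweighted) Chamfer distance in PyTorch, followed by a hypothetical boundary-weighted variant. The weighted version, and all names in it, are illustrative assumptions under which edge and corner points receive larger weights; it is not the paper's actual loss.

```python
import torch

def chamfer_distance(p, q):
    """Standard Chamfer distance between point clouds p (N, 3) and q (M, 3).

    Every point contributes with equal weight, so sparse edge/corner
    points are easily sacrificed in favor of dominant flat regions.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = torch.cdist(p, q) ** 2
    # For each point, squared distance to its nearest neighbor
    # in the other cloud, averaged in both directions.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def weighted_chamfer(p, q, w_p, w_q):
    """Hypothetical boundary-weighted variant (an assumption, not the
    paper's exact formulation): w_p (N,) and w_q (M,) up-weight the
    nearest-neighbor error of boundary points."""
    d = torch.cdist(p, q) ** 2
    fwd = (w_p * d.min(dim=1).values).sum() / w_p.sum()
    bwd = (w_q * d.min(dim=0).values).sum() / w_q.sum()
    return fwd + bwd

# Usage example with random clouds and uniform weights.
p, q = torch.rand(1024, 3), torch.rand(1024, 3)
print(chamfer_distance(p, q))
print(weighted_chamfer(p, q, torch.ones(1024), torch.ones(1024)))
```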